Sep 8 23:52:02.025069 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:08:00 -00 2025 Sep 8 23:52:02.025093 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:52:02.025105 kernel: BIOS-provided physical RAM map: Sep 8 23:52:02.025112 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 8 23:52:02.025119 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 8 23:52:02.025126 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 8 23:52:02.025134 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 8 23:52:02.025141 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 8 23:52:02.025148 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 8 23:52:02.025158 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 8 23:52:02.025165 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 8 23:52:02.025172 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 8 23:52:02.025181 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 8 23:52:02.025189 kernel: NX (Execute Disable) protection: active Sep 8 23:52:02.025197 kernel: APIC: Static calls initialized Sep 8 23:52:02.025209 kernel: SMBIOS 2.8 present. 
Sep 8 23:52:02.025217 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 8 23:52:02.025224 kernel: Hypervisor detected: KVM Sep 8 23:52:02.025232 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 8 23:52:02.025239 kernel: kvm-clock: using sched offset of 4208528884 cycles Sep 8 23:52:02.025248 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 8 23:52:02.025256 kernel: tsc: Detected 2794.748 MHz processor Sep 8 23:52:02.025264 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 8 23:52:02.025272 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 8 23:52:02.025279 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 8 23:52:02.025290 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 8 23:52:02.025298 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 8 23:52:02.025305 kernel: Using GB pages for direct mapping Sep 8 23:52:02.025313 kernel: ACPI: Early table checksum verification disabled Sep 8 23:52:02.025320 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 8 23:52:02.025328 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:02.025336 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:02.025343 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:02.025353 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 8 23:52:02.025362 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:02.025369 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:02.025377 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:02.025385 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:52:02.025392 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 8 23:52:02.025400 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 8 23:52:02.025411 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 8 23:52:02.025421 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 8 23:52:02.025429 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 8 23:52:02.025437 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 8 23:52:02.025445 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 8 23:52:02.025455 kernel: No NUMA configuration found Sep 8 23:52:02.025462 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 8 23:52:02.025473 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Sep 8 23:52:02.025481 kernel: Zone ranges: Sep 8 23:52:02.025488 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 8 23:52:02.025496 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 8 23:52:02.025504 kernel: Normal empty Sep 8 23:52:02.025512 kernel: Movable zone start for each node Sep 8 23:52:02.025519 kernel: Early memory node ranges Sep 8 23:52:02.025527 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 8 23:52:02.025535 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 8 23:52:02.025543 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Sep 8 23:52:02.025553 kernel: On 
node 0, zone DMA: 1 pages in unavailable ranges Sep 8 23:52:02.025563 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 8 23:52:02.025570 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 8 23:52:02.025578 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 8 23:52:02.025586 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 8 23:52:02.025594 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 8 23:52:02.025602 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 8 23:52:02.025623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 8 23:52:02.025631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 8 23:52:02.025642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 8 23:52:02.025650 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 8 23:52:02.025658 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 8 23:52:02.025665 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 8 23:52:02.025673 kernel: TSC deadline timer available Sep 8 23:52:02.025681 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 8 23:52:02.025688 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 8 23:52:02.025696 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 8 23:52:02.025706 kernel: kvm-guest: setup PV sched yield Sep 8 23:52:02.025716 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 8 23:52:02.025724 kernel: Booting paravirtualized kernel on KVM Sep 8 23:52:02.025732 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 8 23:52:02.025740 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 8 23:52:02.025748 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 8 23:52:02.025756 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 8 23:52:02.025763 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 8 23:52:02.025771 kernel: kvm-guest: PV spinlocks enabled Sep 8 23:52:02.025779 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 8 23:52:02.025790 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:52:02.025799 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 8 23:52:02.025806 kernel: random: crng init done Sep 8 23:52:02.025814 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 8 23:52:02.025822 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 8 23:52:02.025830 kernel: Fallback order for Node 0: 0 Sep 8 23:52:02.025837 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Sep 8 23:52:02.025845 kernel: Policy zone: DMA32 Sep 8 23:52:02.025855 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 8 23:52:02.025864 kernel: Memory: 2432548K/2571752K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43504K init, 1572K bss, 138944K reserved, 0K cma-reserved) Sep 8 23:52:02.025872 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 8 23:52:02.025879 kernel: ftrace: allocating 37943 entries in 149 pages Sep 8 23:52:02.025888 kernel: ftrace: allocated 149 pages with 4 groups Sep 8 23:52:02.025895 kernel: Dynamic Preempt: voluntary Sep 8 23:52:02.025903 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 8 23:52:02.025912 kernel: rcu: RCU event tracing is enabled. Sep 8 23:52:02.025920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 8 23:52:02.025930 kernel: Trampoline variant of Tasks RCU enabled. Sep 8 23:52:02.025938 kernel: Rude variant of Tasks RCU enabled. Sep 8 23:52:02.025946 kernel: Tracing variant of Tasks RCU enabled. Sep 8 23:52:02.025954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 8 23:52:02.025964 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 8 23:52:02.025972 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 8 23:52:02.025980 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 8 23:52:02.025988 kernel: Console: colour VGA+ 80x25 Sep 8 23:52:02.025995 kernel: printk: console [ttyS0] enabled Sep 8 23:52:02.026005 kernel: ACPI: Core revision 20230628 Sep 8 23:52:02.026013 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 8 23:52:02.026021 kernel: APIC: Switch to symmetric I/O mode setup Sep 8 23:52:02.026029 kernel: x2apic enabled Sep 8 23:52:02.026037 kernel: APIC: Switched APIC routing to: physical x2apic Sep 8 23:52:02.026044 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 8 23:52:02.026059 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 8 23:52:02.026067 kernel: kvm-guest: setup PV IPIs Sep 8 23:52:02.026085 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 8 23:52:02.026094 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 8 23:52:02.026102 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 8 23:52:02.026110 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 8 23:52:02.026121 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 8 23:52:02.026129 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 8 23:52:02.026137 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 8 23:52:02.026145 kernel: Spectre V2 : Mitigation: Retpolines Sep 8 23:52:02.026153 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 8 23:52:02.026168 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 8 23:52:02.026180 kernel: active return thunk: retbleed_return_thunk Sep 8 23:52:02.026196 kernel: RETBleed: Mitigation: untrained return thunk Sep 8 23:52:02.026217 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 8 23:52:02.026232 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 8 23:52:02.026251 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 8 23:52:02.026270 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 8 23:52:02.026289 kernel: active return thunk: srso_return_thunk Sep 8 23:52:02.026315 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 8 23:52:02.026335 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 8 23:52:02.026350 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 8 23:52:02.026358 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 8 23:52:02.026366 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 8 23:52:02.026374 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 8 23:52:02.026382 kernel: Freeing SMP alternatives memory: 32K Sep 8 23:52:02.026390 kernel: pid_max: default: 32768 minimum: 301 Sep 8 23:52:02.026399 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 8 23:52:02.026418 kernel: landlock: Up and running. Sep 8 23:52:02.026427 kernel: SELinux: Initializing. Sep 8 23:52:02.026435 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:52:02.026443 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:52:02.026451 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 8 23:52:02.026460 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:52:02.026468 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:52:02.026476 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:52:02.026487 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 8 23:52:02.026499 kernel: ... version: 0 Sep 8 23:52:02.026507 kernel: ... bit width: 48 Sep 8 23:52:02.026515 kernel: ... generic registers: 6 Sep 8 23:52:02.026523 kernel: ... value mask: 0000ffffffffffff Sep 8 23:52:02.026531 kernel: ... max period: 00007fffffffffff Sep 8 23:52:02.026539 kernel: ... fixed-purpose events: 0 Sep 8 23:52:02.026547 kernel: ... 
event mask: 000000000000003f Sep 8 23:52:02.026555 kernel: signal: max sigframe size: 1776 Sep 8 23:52:02.026563 kernel: rcu: Hierarchical SRCU implementation. Sep 8 23:52:02.026574 kernel: rcu: Max phase no-delay instances is 400. Sep 8 23:52:02.026582 kernel: smp: Bringing up secondary CPUs ... Sep 8 23:52:02.026590 kernel: smpboot: x86: Booting SMP configuration: Sep 8 23:52:02.026598 kernel: .... node #0, CPUs: #1 #2 #3 Sep 8 23:52:02.026628 kernel: smp: Brought up 1 node, 4 CPUs Sep 8 23:52:02.026637 kernel: smpboot: Max logical packages: 1 Sep 8 23:52:02.026645 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 8 23:52:02.026653 kernel: devtmpfs: initialized Sep 8 23:52:02.026661 kernel: x86/mm: Memory block size: 128MB Sep 8 23:52:02.026672 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 8 23:52:02.026680 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 8 23:52:02.026688 kernel: pinctrl core: initialized pinctrl subsystem Sep 8 23:52:02.026696 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 8 23:52:02.026704 kernel: audit: initializing netlink subsys (disabled) Sep 8 23:52:02.026713 kernel: audit: type=2000 audit(1757375520.571:1): state=initialized audit_enabled=0 res=1 Sep 8 23:52:02.026721 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 8 23:52:02.026729 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 8 23:52:02.026737 kernel: cpuidle: using governor menu Sep 8 23:52:02.026747 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 8 23:52:02.026755 kernel: dca service started, version 1.12.1 Sep 8 23:52:02.026774 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 8 23:52:02.026782 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 8 23:52:02.026790 kernel: PCI: Using configuration type 1 for base access Sep 8 23:52:02.026799 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 8 23:52:02.026807 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 8 23:52:02.026819 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 8 23:52:02.026827 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 8 23:52:02.026838 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 8 23:52:02.026846 kernel: ACPI: Added _OSI(Module Device) Sep 8 23:52:02.026854 kernel: ACPI: Added _OSI(Processor Device) Sep 8 23:52:02.026862 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 8 23:52:02.026870 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 8 23:52:02.026878 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 8 23:52:02.026886 kernel: ACPI: Interpreter enabled Sep 8 23:52:02.026894 kernel: ACPI: PM: (supports S0 S3 S5) Sep 8 23:52:02.026902 kernel: ACPI: Using IOAPIC for interrupt routing Sep 8 23:52:02.026926 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 8 23:52:02.026942 kernel: PCI: Using E820 reservations for host bridge windows Sep 8 23:52:02.026953 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 8 23:52:02.026961 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 8 23:52:02.027290 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 8 23:52:02.027436 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 8 23:52:02.027572 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 8 23:52:02.027583 kernel: PCI host bridge to bus 0000:00 Sep 8 23:52:02.027785 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 8 23:52:02.027912 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 8 23:52:02.028034 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 8 23:52:02.028169 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 8 23:52:02.028356 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 8 23:52:02.028529 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 8 23:52:02.028681 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 8 23:52:02.028853 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 8 23:52:02.029013 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 8 23:52:02.029160 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 8 23:52:02.029294 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 8 23:52:02.029483 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 8 23:52:02.029684 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 8 23:52:02.029931 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 8 23:52:02.030095 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 8 23:52:02.030234 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 8 23:52:02.030378 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 8 23:52:02.030539 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 8 23:52:02.030697 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Sep 8 23:52:02.030834 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 8 23:52:02.030977 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Sep 8 
23:52:02.031872 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 8 23:52:02.032024 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Sep 8 23:52:02.032172 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 8 23:52:02.032309 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 8 23:52:02.032445 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 8 23:52:02.032599 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 8 23:52:02.032759 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 8 23:52:02.032914 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 8 23:52:02.033066 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Sep 8 23:52:02.033204 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Sep 8 23:52:02.033364 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 8 23:52:02.033503 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 8 23:52:02.033520 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 8 23:52:02.033533 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 8 23:52:02.033541 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 8 23:52:02.033549 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 8 23:52:02.033557 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 8 23:52:02.033566 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 8 23:52:02.033574 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 8 23:52:02.033582 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 8 23:52:02.033593 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 8 23:52:02.033622 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 8 23:52:02.033631 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 8 23:52:02.033640 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 8 23:52:02.033648 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 8 23:52:02.033656 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 8 23:52:02.033664 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 8 23:52:02.033673 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 8 23:52:02.033681 kernel: iommu: Default domain type: Translated Sep 8 23:52:02.033689 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 8 23:52:02.033701 kernel: PCI: Using ACPI for IRQ routing Sep 8 23:52:02.033709 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 8 23:52:02.033717 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 8 23:52:02.033725 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 8 23:52:02.033865 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 8 23:52:02.034016 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 8 23:52:02.034163 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 8 23:52:02.034175 kernel: vgaarb: loaded Sep 8 23:52:02.034189 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 8 23:52:02.034198 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 8 23:52:02.034206 kernel: clocksource: Switched to clocksource kvm-clock Sep 8 23:52:02.034215 kernel: VFS: Disk quotas dquot_6.6.0 Sep 8 23:52:02.034223 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 8 23:52:02.034231 kernel: pnp: PnP ACPI init Sep 
8 23:52:02.034422 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 8 23:52:02.034443 kernel: pnp: PnP ACPI: found 6 devices Sep 8 23:52:02.034457 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 8 23:52:02.034465 kernel: NET: Registered PF_INET protocol family Sep 8 23:52:02.034473 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 8 23:52:02.034482 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 8 23:52:02.034490 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 8 23:52:02.034498 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 8 23:52:02.034506 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 8 23:52:02.034515 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 8 23:52:02.034523 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:52:02.034534 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:52:02.034542 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 8 23:52:02.034551 kernel: NET: Registered PF_XDP protocol family Sep 8 23:52:02.034730 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 8 23:52:02.034856 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 8 23:52:02.034979 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 8 23:52:02.035115 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 8 23:52:02.035239 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 8 23:52:02.035364 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 8 23:52:02.035380 kernel: PCI: CLS 0 bytes, default 64 Sep 8 23:52:02.035389 kernel: Initialise system trusted keyrings Sep 8 23:52:02.035397 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 8 23:52:02.035405 kernel: Key type asymmetric registered Sep 8 23:52:02.035414 kernel: Asymmetric key parser 'x509' registered Sep 8 23:52:02.035422 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 8 23:52:02.035430 kernel: io scheduler mq-deadline registered Sep 8 23:52:02.035439 kernel: io scheduler kyber registered Sep 8 23:52:02.035447 kernel: io scheduler bfq registered Sep 8 23:52:02.035458 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 8 23:52:02.035467 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 8 23:52:02.035475 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 8 23:52:02.035483 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 8 23:52:02.035492 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 8 23:52:02.035500 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 8 23:52:02.035508 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 8 23:52:02.035516 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 8 23:52:02.035525 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 8 23:52:02.035536 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 8 23:52:02.036243 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 8 23:52:02.036400 kernel: rtc_cmos 00:04: registered as rtc0 Sep 8 23:52:02.036530 kernel: rtc_cmos 00:04: setting system clock to 2025-09-08T23:52:01 UTC (1757375521) Sep 8 23:52:02.036705 kernel: rtc_cmos 
00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 8 23:52:02.036718 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 8 23:52:02.036727 kernel: NET: Registered PF_INET6 protocol family Sep 8 23:52:02.036740 kernel: Segment Routing with IPv6 Sep 8 23:52:02.036749 kernel: In-situ OAM (IOAM) with IPv6 Sep 8 23:52:02.036757 kernel: NET: Registered PF_PACKET protocol family Sep 8 23:52:02.036768 kernel: Key type dns_resolver registered Sep 8 23:52:02.036776 kernel: IPI shorthand broadcast: enabled Sep 8 23:52:02.036785 kernel: sched_clock: Marking stable (894002857, 120269821)->(1030379128, -16106450) Sep 8 23:52:02.036793 kernel: registered taskstats version 1 Sep 8 23:52:02.036801 kernel: Loading compiled-in X.509 certificates Sep 8 23:52:02.036810 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: c16a276a56169aed770943c7e14b6e7e5f4f7133' Sep 8 23:52:02.036818 kernel: Key type .fscrypt registered Sep 8 23:52:02.036829 kernel: Key type fscrypt-provisioning registered Sep 8 23:52:02.036838 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 8 23:52:02.036846 kernel: ima: Allocated hash algorithm: sha1 Sep 8 23:52:02.036854 kernel: ima: No architecture policies found Sep 8 23:52:02.036862 kernel: clk: Disabling unused clocks Sep 8 23:52:02.036871 kernel: Freeing unused kernel image (initmem) memory: 43504K Sep 8 23:52:02.036879 kernel: Write protecting the kernel read-only data: 38912k Sep 8 23:52:02.036887 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 8 23:52:02.036898 kernel: Run /init as init process Sep 8 23:52:02.036906 kernel: with arguments: Sep 8 23:52:02.036914 kernel: /init Sep 8 23:52:02.036923 kernel: with environment: Sep 8 23:52:02.036931 kernel: HOME=/ Sep 8 23:52:02.036939 kernel: TERM=linux Sep 8 23:52:02.036947 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 8 23:52:02.036956 systemd[1]: Successfully made /usr/ read-only. Sep 8 23:52:02.036968 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:52:02.036981 systemd[1]: Detected virtualization kvm. Sep 8 23:52:02.036989 systemd[1]: Detected architecture x86-64. Sep 8 23:52:02.036998 systemd[1]: Running in initrd. Sep 8 23:52:02.037006 systemd[1]: No hostname configured, using default hostname. Sep 8 23:52:02.037015 systemd[1]: Hostname set to . Sep 8 23:52:02.037024 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:52:02.037033 systemd[1]: Queued start job for default target initrd.target. Sep 8 23:52:02.037044 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:52:02.037061 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:52:02.037084 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 8 23:52:02.037096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:52:02.037105 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 8 23:52:02.037118 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Sep 8 23:52:02.037128 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 8 23:52:02.037137 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 8 23:52:02.037146 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:52:02.037155 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:52:02.037164 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:52:02.037173 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:52:02.037183 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:52:02.037194 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:52:02.037203 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:52:02.037212 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:52:02.037221 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 8 23:52:02.037230 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 8 23:52:02.037240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:52:02.037249 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:52:02.037258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:52:02.037267 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:52:02.037278 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 8 23:52:02.037288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:52:02.037296 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 8 23:52:02.037305 systemd[1]: Starting systemd-fsck-usr.service... Sep 8 23:52:02.037314 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:52:02.037323 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:52:02.037340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:02.037354 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 8 23:52:02.037367 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:52:02.037377 systemd[1]: Finished systemd-fsck-usr.service. Sep 8 23:52:02.037386 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 8 23:52:02.037428 systemd-journald[194]: Collecting audit messages is disabled. Sep 8 23:52:02.037450 systemd-journald[194]: Journal started Sep 8 23:52:02.037473 systemd-journald[194]: Runtime Journal (/run/log/journal/c490c864c316455eb4f6e73cbbe9d4f9) is 6M, max 48.4M, 42.3M free. Sep 8 23:52:02.026991 systemd-modules-load[195]: Inserted module 'overlay' Sep 8 23:52:02.064412 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:52:02.064443 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 8 23:52:02.064469 kernel: Bridge firewalling registered Sep 8 23:52:02.062468 systemd-modules-load[195]: Inserted module 'br_netfilter' Sep 8 23:52:02.066532 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:52:02.069046 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 8 23:52:02.071544 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:52:02.086781 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:52:02.090120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:52:02.092913 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:52:02.096131 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:52:02.126528 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:52:02.129142 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:52:02.131899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:52:02.134748 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:52:02.146773 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 8 23:52:02.156784 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:52:02.191898 dracut-cmdline[231]: dracut-dracut-053 Sep 8 23:52:02.196510 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:52:02.220459 systemd-resolved[233]: Positive Trust Anchors: Sep 8 23:52:02.220477 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:52:02.220517 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:52:02.223931 systemd-resolved[233]: Defaulting to hostname 'linux'. Sep 8 23:52:02.225439 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:52:02.231657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:52:02.300651 kernel: SCSI subsystem initialized Sep 8 23:52:02.310634 kernel: Loading iSCSI transport class v2.0-870. Sep 8 23:52:02.321638 kernel: iscsi: registered transport (tcp) Sep 8 23:52:02.353660 kernel: iscsi: registered transport (qla4xxx) Sep 8 23:52:02.353740 kernel: QLogic iSCSI HBA Driver Sep 8 23:52:02.412972 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 8 23:52:02.424795 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 8 23:52:02.453714 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 8 23:52:02.453807 kernel: device-mapper: uevent: version 1.0.3 Sep 8 23:52:02.453832 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 8 23:52:02.497647 kernel: raid6: avx2x4 gen() 28882 MB/s Sep 8 23:52:02.514644 kernel: raid6: avx2x2 gen() 29825 MB/s Sep 8 23:52:02.531710 kernel: raid6: avx2x1 gen() 25116 MB/s Sep 8 23:52:02.531773 kernel: raid6: using algorithm avx2x2 gen() 29825 MB/s Sep 8 23:52:02.549795 kernel: raid6: .... xor() 19107 MB/s, rmw enabled Sep 8 23:52:02.549891 kernel: raid6: using avx2x2 recovery algorithm Sep 8 23:52:02.571962 kernel: xor: automatically using best checksumming function avx Sep 8 23:52:02.732649 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 8 23:52:02.747356 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:52:02.756842 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:52:02.775859 systemd-udevd[415]: Using default interface naming scheme 'v255'. Sep 8 23:52:02.783533 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:52:02.799158 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 8 23:52:02.817661 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Sep 8 23:52:02.861099 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:52:02.874872 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:52:02.965285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:52:02.975893 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 8 23:52:02.991310 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 8 23:52:02.993396 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:52:02.995062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:52:03.001720 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:52:03.013880 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 8 23:52:03.031064 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:52:03.035637 kernel: cryptd: max_cpu_qlen set to 1000 Sep 8 23:52:03.040700 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 8 23:52:03.044092 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 8 23:52:03.048811 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 8 23:52:03.048841 kernel: GPT:9289727 != 19775487 Sep 8 23:52:03.048863 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 8 23:52:03.050110 kernel: GPT:9289727 != 19775487 Sep 8 23:52:03.050138 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 8 23:52:03.051633 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:52:03.064162 kernel: AVX2 version of gcm_enc/dec engaged. Sep 8 23:52:03.064232 kernel: AES CTR mode by8 optimization enabled Sep 8 23:52:03.071500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:52:03.071698 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:52:03.076602 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:52:03.078718 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 8 23:52:03.079195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:03.087404 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:03.099650 kernel: libata version 3.00 loaded. Sep 8 23:52:03.096042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:03.098148 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:52:03.110715 kernel: ahci 0000:00:1f.2: version 3.0 Sep 8 23:52:03.122102 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 8 23:52:03.128638 kernel: BTRFS: device fsid 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (477) Sep 8 23:52:03.130794 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 8 23:52:03.131071 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 8 23:52:03.134629 kernel: scsi host0: ahci Sep 8 23:52:03.134903 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (469) Sep 8 23:52:03.138629 kernel: scsi host1: ahci Sep 8 23:52:03.140632 kernel: scsi host2: ahci Sep 8 23:52:03.145642 kernel: scsi host3: ahci Sep 8 23:52:03.149645 kernel: scsi host4: ahci Sep 8 23:52:03.157867 kernel: scsi host5: ahci Sep 8 23:52:03.158232 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 8 23:52:03.158254 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 8 23:52:03.158270 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 8 23:52:03.158285 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 8 23:52:03.158308 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 8 23:52:03.158325 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 8 23:52:03.169410 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 8 23:52:03.194672 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 8 23:52:03.196200 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 8 23:52:03.199503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:03.214805 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 8 23:52:03.230203 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:52:03.243980 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 8 23:52:03.247578 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:52:03.281702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:52:03.457041 disk-uuid[568]: Primary Header is updated. Sep 8 23:52:03.457041 disk-uuid[568]: Secondary Entries is updated. Sep 8 23:52:03.457041 disk-uuid[568]: Secondary Header is updated. 
Sep 8 23:52:03.469787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:52:03.469837 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 8 23:52:03.470633 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 8 23:52:03.474483 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 8 23:52:03.474515 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 8 23:52:03.474530 kernel: ata3.00: applying bridge limits Sep 8 23:52:03.474545 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 8 23:52:03.474559 kernel: ata3.00: configured for UDMA/100 Sep 8 23:52:03.476693 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 8 23:52:03.478171 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 8 23:52:03.478292 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 8 23:52:03.533614 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 8 23:52:03.534323 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 8 23:52:03.553696 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 8 23:52:04.507677 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:52:04.508597 disk-uuid[578]: The operation has completed successfully. Sep 8 23:52:04.555143 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 8 23:52:04.555344 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 8 23:52:04.612023 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 8 23:52:04.616818 sh[593]: Success Sep 8 23:52:04.647648 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 8 23:52:04.702840 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 8 23:52:04.711844 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 8 23:52:04.719279 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 8 23:52:04.730241 kernel: BTRFS info (device dm-0): first mount of filesystem 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf Sep 8 23:52:04.730311 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:52:04.730342 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 8 23:52:04.731261 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 8 23:52:04.732131 kernel: BTRFS info (device dm-0): using free space tree Sep 8 23:52:04.741105 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 8 23:52:04.742884 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 8 23:52:04.754823 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 8 23:52:04.757059 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 8 23:52:04.781955 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:52:04.782050 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:52:04.782067 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:52:04.787844 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:52:04.793643 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:52:04.801998 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 8 23:52:04.811805 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 8 23:52:05.071017 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:52:05.085318 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:52:05.091013 ignition[682]: Ignition 2.20.0 Sep 8 23:52:05.091032 ignition[682]: Stage: fetch-offline Sep 8 23:52:05.091152 ignition[682]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:05.091193 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:05.091390 ignition[682]: parsed url from cmdline: "" Sep 8 23:52:05.091396 ignition[682]: no config URL provided Sep 8 23:52:05.091404 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Sep 8 23:52:05.091419 ignition[682]: no config at "/usr/lib/ignition/user.ign" Sep 8 23:52:05.091517 ignition[682]: op(1): [started] loading QEMU firmware config module Sep 8 23:52:05.091526 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 8 23:52:05.103066 ignition[682]: op(1): [finished] loading QEMU firmware config module Sep 8 23:52:05.149903 ignition[682]: parsing config with SHA512: 416140fb0468e8d4988ccd23d16da653c689ecbb781ac8347d1605538158bb969462d6a0714012377fc596bbc1fc13732698a999743093c9085618603585f58d Sep 8 23:52:05.157450 systemd-networkd[777]: lo: Link UP Sep 8 23:52:05.157464 systemd-networkd[777]: lo: Gained carrier Sep 8 23:52:05.157756 unknown[682]: fetched base config from "system" Sep 8 23:52:05.159677 ignition[682]: fetch-offline: fetch-offline passed Sep 8 23:52:05.157766 unknown[682]: fetched user config from "qemu" Sep 8 23:52:05.159982 ignition[682]: Ignition finished successfully Sep 8 23:52:05.160990 systemd-networkd[777]: Enumeration completed Sep 8 23:52:05.161677 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:52:05.161683 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:52:05.162681 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:52:05.164572 systemd-networkd[777]: eth0: Link UP Sep 8 23:52:05.164578 systemd-networkd[777]: eth0: Gained carrier Sep 8 23:52:05.164588 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:52:05.164715 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:52:05.168181 systemd[1]: Reached target network.target - Network. Sep 8 23:52:05.169676 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 8 23:52:05.179908 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 8 23:52:05.182731 systemd-networkd[777]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:52:05.275841 ignition[783]: Ignition 2.20.0 Sep 8 23:52:05.275855 ignition[783]: Stage: kargs Sep 8 23:52:05.276099 ignition[783]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:05.276115 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:05.277263 ignition[783]: kargs: kargs passed Sep 8 23:52:05.277326 ignition[783]: Ignition finished successfully Sep 8 23:52:05.281918 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Sep 8 23:52:05.295930 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 8 23:52:05.394211 ignition[793]: Ignition 2.20.0 Sep 8 23:52:05.394227 ignition[793]: Stage: disks Sep 8 23:52:05.394460 ignition[793]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:05.394477 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:05.399326 ignition[793]: disks: disks passed Sep 8 23:52:05.399415 ignition[793]: Ignition finished successfully Sep 8 23:52:05.403071 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 8 23:52:05.406203 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 8 23:52:05.408917 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 8 23:52:05.409134 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:52:05.414164 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:52:05.414258 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:52:05.428892 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 8 23:52:05.490935 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 8 23:52:05.500905 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 8 23:52:05.508977 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 8 23:52:05.656666 kernel: EXT4-fs (vda9): mounted filesystem 4436772e-5166-41e3-9cb5-50bbb91cbcf6 r/w with ordered data mode. Quota mode: none. Sep 8 23:52:05.657757 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 8 23:52:05.660234 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 8 23:52:05.682856 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:52:05.686585 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 8 23:52:05.689962 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 8 23:52:05.690060 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 8 23:52:05.700766 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (812) Sep 8 23:52:05.700804 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:52:05.700822 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:52:05.700837 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:52:05.700853 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:52:05.692425 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:52:05.704835 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:52:05.706808 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 8 23:52:05.726822 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 8 23:52:05.762735 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Sep 8 23:52:05.769044 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Sep 8 23:52:05.775457 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Sep 8 23:52:05.780005 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Sep 8 23:52:05.892951 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 8 23:52:05.902887 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 8 23:52:05.906919 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 8 23:52:05.912117 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 8 23:52:05.913522 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:52:06.050908 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 8 23:52:06.056668 ignition[926]: INFO : Ignition 2.20.0 Sep 8 23:52:06.056668 ignition[926]: INFO : Stage: mount Sep 8 23:52:06.058871 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:06.058871 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:06.058871 ignition[926]: INFO : mount: mount passed Sep 8 23:52:06.058871 ignition[926]: INFO : Ignition finished successfully Sep 8 23:52:06.064128 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 8 23:52:06.075844 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 8 23:52:06.087073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:52:06.103664 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (939) Sep 8 23:52:06.106199 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:52:06.106225 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:52:06.106237 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:52:06.110664 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:52:06.112124 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 8 23:52:06.257674 ignition[956]: INFO : Ignition 2.20.0 Sep 8 23:52:06.257674 ignition[956]: INFO : Stage: files Sep 8 23:52:06.259748 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:06.259748 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:06.259748 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Sep 8 23:52:06.263672 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 8 23:52:06.263672 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 8 23:52:06.266863 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 8 23:52:06.268309 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 8 23:52:06.270195 unknown[956]: wrote ssh authorized keys file for user: core Sep 8 23:52:06.271414 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 8 23:52:06.273766 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 8 23:52:06.275662 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 8 23:52:06.448517 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 8 23:52:06.470813 systemd-networkd[777]: eth0: Gained IPv6LL Sep 8 23:52:07.056138 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 8 23:52:07.056138 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 8 23:52:07.060158 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 8 23:52:07.273034 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 8 23:52:07.550009 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 8 23:52:07.550009 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 8 23:52:07.553579 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 8 23:52:07.555435 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:52:07.557166 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:52:07.558813 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:52:07.560559 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:52:07.562254 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:52:07.564002 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
Sep 8 23:52:07.565967 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:52:07.567808 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:52:07.569831 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 8 23:52:07.572722 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 8 23:52:07.575442 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 8 23:52:07.577825 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 8 23:52:07.979172 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 8 23:52:09.010937 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 8 23:52:09.010937 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 8 23:52:09.016704 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:52:09.016704 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:52:09.016704 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 8 23:52:09.016704 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 8 23:52:09.016704 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:52:09.016704 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:52:09.016704 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 8 23:52:09.016704 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 8 23:52:09.047672 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:52:09.056677 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:52:09.059013 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 8 23:52:09.059013 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 8 23:52:09.059013 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 8 23:52:09.059013 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:52:09.059013 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] 
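The preset handling logged above (op(10) through op(12)) reduces to writing a preset file and creating or removing enablement symlinks under the new root. A rough sketch of the on-disk effect, with illustrative filenames and a scratch directory standing in for /sysroot:

```python
from pathlib import Path

# Sketch only: filenames and the scratch root are illustrative, not what
# Ignition literally writes.
sysroot = Path("/tmp/sysroot-demo")          # stand-in for /sysroot in the log
preset = sysroot / "etc/systemd/system-preset/20-ignition.preset"
wants = sysroot / "etc/systemd/system/multi-user.target.wants"

preset.parent.mkdir(parents=True, exist_ok=True)
wants.mkdir(parents=True, exist_ok=True)

# op(12): preset "enabled" for prepare-helm.service,
# op(10): preset "disabled" for coreos-metadata.service.
preset.write_text("enable prepare-helm.service\ndisable coreos-metadata.service\n")

# "Setting preset to enabled" ultimately means an enablement symlink like this:
link = wants / "prepare-helm.service"
if not link.exists():
    link.symlink_to("/etc/systemd/system/prepare-helm.service")

# "Removing enablement symlink(s)" for coreos-metadata.service is the inverse:
(wants / "coreos-metadata.service").unlink(missing_ok=True)
```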
writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:52:09.059013 ignition[956]: INFO : files: files passed Sep 8 23:52:09.059013 ignition[956]: INFO : Ignition finished successfully Sep 8 23:52:09.060562 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 8 23:52:09.073278 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 8 23:52:09.076866 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 8 23:52:09.080446 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 8 23:52:09.080643 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 8 23:52:09.143189 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Sep 8 23:52:09.148082 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:52:09.148082 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:52:09.152011 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:52:09.153904 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:52:09.155772 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 8 23:52:09.165993 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 8 23:52:09.205725 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 8 23:52:09.205937 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 8 23:52:09.210524 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 8 23:52:09.212430 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 8 23:52:09.215131 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 8 23:52:09.217417 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 8 23:52:09.241376 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:52:09.250040 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 8 23:52:09.260943 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:52:09.263334 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:52:09.266112 systemd[1]: Stopped target timers.target - Timer Units. Sep 8 23:52:09.267345 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 8 23:52:09.270826 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:52:09.273768 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 8 23:52:09.276108 systemd[1]: Stopped target basic.target - Basic System. Sep 8 23:52:09.278193 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 8 23:52:09.280669 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:52:09.283277 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 8 23:52:09.285792 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 8 23:52:09.288143 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 8 23:52:09.290972 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 8 23:52:09.293331 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 8 23:52:09.295668 systemd[1]: Stopped target swap.target - Swaps. Sep 8 23:52:09.297508 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 8 23:52:09.298751 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:52:09.301442 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:52:09.304354 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:52:09.307117 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 8 23:52:09.308242 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:52:09.312223 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 8 23:52:09.312419 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 8 23:52:09.316456 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 8 23:52:09.317744 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:52:09.320553 systemd[1]: Stopped target paths.target - Path Units. Sep 8 23:52:09.322378 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 8 23:52:09.324720 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:52:09.325675 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:52:09.326192 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:52:09.326582 systemd[1]: iscsid.socket: Deactivated successfully. Sep 8 23:52:09.326755 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:52:09.327181 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:52:09.327300 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:52:09.334543 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:52:09.334763 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:52:09.336596 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:52:09.336792 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:52:09.348936 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 8 23:52:09.352079 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:52:09.352192 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:52:09.352323 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:52:09.355492 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:52:09.355746 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:52:09.364750 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:52:09.364908 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 8 23:52:09.380472 ignition[1010]: INFO : Ignition 2.20.0 Sep 8 23:52:09.380472 ignition[1010]: INFO : Stage: umount Sep 8 23:52:09.382793 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:52:09.382793 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:52:09.382793 ignition[1010]: INFO : umount: umount passed Sep 8 23:52:09.382793 ignition[1010]: INFO : Ignition finished successfully Sep 8 23:52:09.389872 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:52:09.391516 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:52:09.392582 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:52:09.396622 systemd[1]: Stopped target network.target - Network. Sep 8 23:52:09.398479 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:52:09.399735 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:52:09.402792 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:52:09.402913 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:52:09.405163 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 8 23:52:09.405228 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 8 23:52:09.407257 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 8 23:52:09.407313 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 8 23:52:09.412867 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 8 23:52:09.415416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 8 23:52:09.422900 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 8 23:52:09.424096 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 8 23:52:09.429319 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 8 23:52:09.429760 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 8 23:52:09.429944 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 8 23:52:09.434445 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 8 23:52:09.435404 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 8 23:52:09.435479 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:52:09.445774 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 8 23:52:09.447055 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 8 23:52:09.447167 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:52:09.450079 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:52:09.450164 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:52:09.451859 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 8 23:52:09.451930 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 8 23:52:09.453873 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 8 23:52:09.453951 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:52:09.456648 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:52:09.460313 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Sep 8 23:52:09.460427 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:52:09.470942 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 8 23:52:09.471216 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:52:09.473631 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 8 23:52:09.473795 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 8 23:52:09.476327 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 8 23:52:09.476441 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 8 23:52:09.477696 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 8 23:52:09.477756 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:52:09.479625 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 8 23:52:09.479706 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:52:09.482073 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 8 23:52:09.482142 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 8 23:52:09.483967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:52:09.484042 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:52:09.495848 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 8 23:52:09.497009 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 8 23:52:09.497090 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:52:09.499444 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 8 23:52:09.499501 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:52:09.501787 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 8 23:52:09.501854 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:52:09.504722 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:52:09.504800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:09.508403 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 8 23:52:09.508500 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:52:09.509137 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 8 23:52:09.509294 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 8 23:52:09.656922 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 8 23:52:09.657091 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 8 23:52:09.660782 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 8 23:52:09.662660 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 8 23:52:09.662779 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 8 23:52:09.683025 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 8 23:52:09.692683 systemd[1]: Switching root. Sep 8 23:52:09.724920 systemd-journald[194]: Journal stopped Sep 8 23:52:11.828552 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Sep 8 23:52:11.828727 kernel: SELinux: policy capability network_peer_controls=1 Sep 8 23:52:11.828751 kernel: SELinux: policy capability open_perms=1 Sep 8 23:52:11.828777 kernel: SELinux: policy capability extended_socket_class=1 Sep 8 23:52:11.828789 kernel: SELinux: policy capability always_check_network=0 Sep 8 23:52:11.828801 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 8 23:52:11.828814 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 8 23:52:11.828835 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 8 23:52:11.828847 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 8 23:52:11.828859 kernel: audit: type=1403 audit(1757375530.318:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 8 23:52:11.828877 systemd[1]: Successfully loaded SELinux policy in 49.929ms. Sep 8 23:52:11.828899 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 19.816ms. Sep 8 23:52:11.828915 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:52:11.828928 systemd[1]: Detected virtualization kvm. Sep 8 23:52:11.828941 systemd[1]: Detected architecture x86-64. Sep 8 23:52:11.828953 systemd[1]: Detected first boot. Sep 8 23:52:11.828972 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:52:11.828985 zram_generator::config[1057]: No configuration found. Sep 8 23:52:11.828999 kernel: Guest personality initialized and is inactive Sep 8 23:52:11.829011 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 8 23:52:11.829023 kernel: Initialized host personality Sep 8 23:52:11.829035 kernel: NET: Registered PF_VSOCK protocol family Sep 8 23:52:11.829048 systemd[1]: Populated /etc with preset unit settings. Sep 8 23:52:11.829066 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 8 23:52:11.829086 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 8 23:52:11.829099 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 8 23:52:11.829112 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 8 23:52:11.829125 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 8 23:52:11.829138 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 8 23:52:11.829151 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 8 23:52:11.829163 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 8 23:52:11.829176 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 8 23:52:11.829189 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 8 23:52:11.829211 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 8 23:52:11.829223 systemd[1]: Created slice user.slice - User and Session Slice. Sep 8 23:52:11.829242 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:52:11.829255 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
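"Initializing machine ID from VM UUID" above means the first-boot machine ID is derived from the hypervisor-supplied DMI product UUID rather than generated randomly. A simplified sketch of that derivation (systemd's real source selection and precedence rules are more involved than this):

```python
import pathlib
import re

# Rough sketch: turn the DMI product UUID exposed by the hypervisor into the
# 32-hex-digit form used by /etc/machine-id.
def machine_id_from_vm_uuid(dmi_path="/sys/class/dmi/id/product_uuid"):
    raw = pathlib.Path(dmi_path).read_text().strip().lower()
    mid = raw.replace("-", "")
    if not re.fullmatch(r"[0-9a-f]{32}", mid):
        raise ValueError(f"not a usable UUID: {raw!r}")
    return mid

# Example (needs readable DMI data, e.g. inside a KVM guest like the one logged):
# print(machine_id_from_vm_uuid())
```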
Sep 8 23:52:11.829271 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 8 23:52:11.829286 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 8 23:52:11.829300 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 8 23:52:11.829318 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:52:11.829334 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 8 23:52:11.829347 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:52:11.829360 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 8 23:52:11.829373 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 8 23:52:11.829385 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 8 23:52:11.829398 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 8 23:52:11.829417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:52:11.829432 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:52:11.829447 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:52:11.829460 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:52:11.829472 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 8 23:52:11.829485 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 8 23:52:11.829497 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 8 23:52:11.829512 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:52:11.829528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:52:11.829545 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:52:11.829558 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 8 23:52:11.829574 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 8 23:52:11.829587 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 8 23:52:11.829600 systemd[1]: Mounting media.mount - External Media Directory... Sep 8 23:52:11.829627 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:11.829643 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 8 23:52:11.829656 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 8 23:52:11.829669 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 8 23:52:11.829682 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 8 23:52:11.829694 systemd[1]: Reached target machines.target - Containers. Sep 8 23:52:11.829710 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 8 23:52:11.829726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:52:11.829742 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:52:11.829757 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Sep 8 23:52:11.829781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:52:11.829795 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:52:11.829807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:52:11.829820 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 8 23:52:11.829858 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:52:11.829876 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 8 23:52:11.829888 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 8 23:52:11.829901 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 8 23:52:11.829914 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 8 23:52:11.829926 systemd[1]: Stopped systemd-fsck-usr.service. Sep 8 23:52:11.829939 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:52:11.829953 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:52:11.829965 kernel: fuse: init (API version 7.39) Sep 8 23:52:11.829980 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:52:11.829993 kernel: loop: module loaded Sep 8 23:52:11.830027 systemd-journald[1121]: Collecting audit messages is disabled. Sep 8 23:52:11.830051 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:52:11.830064 systemd-journald[1121]: Journal started Sep 8 23:52:11.830088 systemd-journald[1121]: Runtime Journal (/run/log/journal/c490c864c316455eb4f6e73cbbe9d4f9) is 6M, max 48.4M, 42.3M free. Sep 8 23:52:11.830834 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 8 23:52:11.391088 systemd[1]: Queued start job for default target multi-user.target. Sep 8 23:52:11.408973 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 8 23:52:11.410222 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 8 23:52:11.836810 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 8 23:52:11.845636 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:52:11.845716 systemd[1]: verity-setup.service: Deactivated successfully. Sep 8 23:52:11.847471 systemd[1]: Stopped verity-setup.service. Sep 8 23:52:11.853130 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:11.853233 kernel: ACPI: bus type drm_connector registered Sep 8 23:52:11.859719 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:52:11.861322 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 8 23:52:11.863988 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 8 23:52:11.865378 systemd[1]: Mounted media.mount - External Media Directory. Sep 8 23:52:11.866993 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 8 23:52:11.868678 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Sep 8 23:52:11.870192 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 8 23:52:11.871848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:52:11.873813 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 8 23:52:11.874166 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 8 23:52:11.876008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:52:11.876402 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:52:11.878240 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:52:11.878730 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:52:11.880498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:52:11.880954 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:52:11.882945 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 8 23:52:11.883278 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 8 23:52:11.885239 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:52:11.885603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:52:11.887501 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:52:11.889453 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:52:11.891517 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 8 23:52:11.938762 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 8 23:52:11.945919 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:52:11.971908 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 8 23:52:11.975387 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 8 23:52:11.977009 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 8 23:52:11.977062 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:52:11.979743 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 8 23:52:11.982477 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 8 23:52:11.984881 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 8 23:52:11.986180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:52:11.988187 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 8 23:52:11.990911 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 8 23:52:11.992216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:52:11.993363 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 8 23:52:11.994709 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:52:11.997977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 8 23:52:12.001217 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 8 23:52:12.003965 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 8 23:52:12.008376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:52:12.010423 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 8 23:52:12.012162 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 8 23:52:12.021309 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 8 23:52:12.063934 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 8 23:52:12.096347 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 8 23:52:12.108673 kernel: loop0: detected capacity change from 0 to 138176 Sep 8 23:52:12.110964 systemd-journald[1121]: Time spent on flushing to /var/log/journal/c490c864c316455eb4f6e73cbbe9d4f9 is 66.225ms for 976 entries. Sep 8 23:52:12.110964 systemd-journald[1121]: System Journal (/var/log/journal/c490c864c316455eb4f6e73cbbe9d4f9) is 8M, max 195.6M, 187.6M free. Sep 8 23:52:12.396144 systemd-journald[1121]: Received client request to flush runtime journal. Sep 8 23:52:12.396227 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 8 23:52:12.396262 kernel: loop1: detected capacity change from 0 to 147912 Sep 8 23:52:12.396290 kernel: loop2: detected capacity change from 0 to 224512 Sep 8 23:52:12.399363 kernel: loop3: detected capacity change from 0 to 138176 Sep 8 23:52:12.117366 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:52:12.120497 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Sep 8 23:52:12.120511 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Sep 8 23:52:12.143987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:52:12.311255 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 8 23:52:12.313478 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 8 23:52:12.375992 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 8 23:52:12.378677 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 8 23:52:12.390852 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 8 23:52:12.400820 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 8 23:52:12.422530 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 8 23:52:12.425026 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 8 23:52:12.432936 kernel: loop4: detected capacity change from 0 to 147912 Sep 8 23:52:12.453638 kernel: loop5: detected capacity change from 0 to 224512 Sep 8 23:52:12.464901 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 8 23:52:12.469854 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 8 23:52:12.470690 (sd-merge)[1199]: Merged extensions into '/usr'. Sep 8 23:52:12.550992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 8 23:52:12.556776 systemd[1]: Reload requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Sep 8 23:52:12.556802 systemd[1]: Reloading... Sep 8 23:52:12.598440 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Sep 8 23:52:12.598461 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Sep 8 23:52:12.644648 zram_generator::config[1230]: No configuration found. Sep 8 23:52:12.840991 ldconfig[1164]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 8 23:52:12.882641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:52:12.961506 systemd[1]: Reloading finished in 403 ms. Sep 8 23:52:13.003833 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 8 23:52:13.005384 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 8 23:52:13.006958 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:52:13.027924 systemd[1]: Starting ensure-sysext.service... Sep 8 23:52:13.030383 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:52:13.043904 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)... Sep 8 23:52:13.043928 systemd[1]: Reloading... Sep 8 23:52:13.064936 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 8 23:52:13.065186 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 8 23:52:13.066296 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 8 23:52:13.066691 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Sep 8 23:52:13.066828 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Sep 8 23:52:13.072138 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:52:13.072159 systemd-tmpfiles[1274]: Skipping /boot Sep 8 23:52:13.096140 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:52:13.096337 systemd-tmpfiles[1274]: Skipping /boot Sep 8 23:52:13.127647 zram_generator::config[1306]: No configuration found. Sep 8 23:52:13.291185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:52:13.366183 systemd[1]: Reloading finished in 321 ms. Sep 8 23:52:13.378106 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 8 23:52:13.405131 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:52:13.417490 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:52:13.420860 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 8 23:52:13.424118 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 8 23:52:13.429515 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:52:13.433576 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
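The sd-merge / systemd-sysext activity above overlays the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') onto /usr and /opt. A small sketch for inspecting that state on a running host; the directories follow the sysext conventions already visible in the log (the kubernetes.raw symlink written by Ignition), and the status output format varies by systemd version:

```python
import shutil
import subprocess
from pathlib import Path

# Sketch: list the extension images behind the sd-merge lines above.
for ext_dir in (Path("/etc/extensions"), Path("/var/lib/extensions")):
    if ext_dir.is_dir():
        for entry in sorted(ext_dir.iterdir()):
            # e.g. kubernetes.raw is a symlink to the image Ignition placed
            # under /opt/extensions (see the earlier "writing link" entries).
            print(entry, "->", entry.resolve())

# systemd-sysext's own view of what is merged into /usr and /opt:
if shutil.which("systemd-sysext"):
    subprocess.run(["systemd-sysext", "status"], check=False)
```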
Sep 8 23:52:13.437953 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 8 23:52:13.444103 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:13.444353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:52:13.452970 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:52:13.456678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:52:13.459858 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:52:13.461345 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:52:13.461489 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:52:13.464364 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 8 23:52:13.466841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:13.469438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:52:13.470063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:52:13.472972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:52:13.480169 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:52:13.483236 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 8 23:52:13.490254 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:52:13.491207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:52:13.496377 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Sep 8 23:52:13.503545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:13.505070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:52:13.508678 augenrules[1375]: No rules Sep 8 23:52:13.519292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:52:13.525339 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:52:13.529503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:52:13.530900 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:52:13.531030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:52:13.535488 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 8 23:52:13.536850 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:13.538598 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Sep 8 23:52:13.547876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:52:13.551051 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:52:13.551740 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:52:13.554323 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 8 23:52:13.556352 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 8 23:52:13.560170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:52:13.560434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:52:13.562500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:52:13.562901 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:52:13.565270 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:52:13.565651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:52:13.567936 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 8 23:52:13.601942 systemd[1]: Finished ensure-sysext.service. Sep 8 23:52:13.609090 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:13.616927 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:52:13.618159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:52:13.618635 systemd-resolved[1345]: Positive Trust Anchors: Sep 8 23:52:13.618648 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:52:13.618689 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:52:13.619753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:52:13.622898 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:52:13.627447 systemd-resolved[1345]: Defaulting to hostname 'linux'. Sep 8 23:52:13.627583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:52:13.642326 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:52:13.643861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:52:13.643908 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:52:13.651900 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 8 23:52:13.664364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1394) Sep 8 23:52:13.657601 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 8 23:52:13.660013 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 8 23:52:13.660056 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:52:13.660760 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:52:13.662631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:52:13.662999 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:52:13.665893 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:52:13.666207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:52:13.668246 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:52:13.668577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:52:13.680554 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 8 23:52:13.680603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:52:13.684895 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:52:13.685001 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:52:13.685573 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:52:13.686150 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:52:13.703142 augenrules[1419]: /sbin/augenrules: No change Sep 8 23:52:13.712366 augenrules[1451]: No rules Sep 8 23:52:13.717057 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:52:13.717362 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:52:13.887388 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 8 23:52:13.887501 kernel: ACPI: button: Power Button [PWRF] Sep 8 23:52:13.739010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:52:13.983786 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 8 23:52:13.988834 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 8 23:52:13.992659 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 8 23:52:13.993046 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 8 23:52:14.002963 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 8 23:52:14.008785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:52:14.011648 kernel: mousedev: PS/2 mouse device common for all mice Sep 8 23:52:14.166172 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 8 23:52:14.255973 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Sep 8 23:52:14.265665 kernel: kvm_amd: TSC scaling supported Sep 8 23:52:14.265920 kernel: kvm_amd: Nested Virtualization enabled Sep 8 23:52:14.265959 kernel: kvm_amd: Nested Paging enabled Sep 8 23:52:14.265988 kernel: kvm_amd: LBR virtualization supported Sep 8 23:52:14.266020 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 8 23:52:14.266063 kernel: kvm_amd: Virtual GIF supported Sep 8 23:52:14.282679 systemd-networkd[1432]: lo: Link UP Sep 8 23:52:14.282696 systemd-networkd[1432]: lo: Gained carrier Sep 8 23:52:14.285760 systemd-networkd[1432]: Enumeration completed Sep 8 23:52:14.286260 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:52:14.286288 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:52:14.288205 systemd-networkd[1432]: eth0: Link UP Sep 8 23:52:14.288218 systemd-networkd[1432]: eth0: Gained carrier Sep 8 23:52:14.288232 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:52:14.293642 kernel: EDAC MC: Ver: 3.0.0 Sep 8 23:52:14.301045 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:52:14.303054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:52:14.304749 systemd-networkd[1432]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:52:14.305762 systemd[1]: Reached target network.target - Network. Sep 8 23:52:14.306975 systemd[1]: Reached target time-set.target - System Time Set. Sep 8 23:52:14.307156 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection. Sep 8 23:52:14.308160 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 8 23:52:14.308216 systemd-timesyncd[1433]: Initial clock synchronization to Mon 2025-09-08 23:52:14.257085 UTC. Sep 8 23:52:14.320134 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 8 23:52:14.323669 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 8 23:52:14.326798 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 8 23:52:14.331937 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 8 23:52:14.346089 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 8 23:52:14.356076 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:52:14.392287 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 8 23:52:14.393896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:52:14.395004 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:52:14.396157 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 8 23:52:14.397402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 8 23:52:14.398867 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 8 23:52:14.400086 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
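systemd-timesyncd's exchange with 10.0.0.1:123 above is, at its core, an (S)NTP client request over UDP. A minimal SNTP query sketch using only the standard library (timesyncd itself additionally filters jitter and disciplines the kernel clock, which this does not attempt):

```python
import socket
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server="10.0.0.1", port=123, timeout=2.0):
    """Return the server's transmit time as a Unix timestamp (whole seconds)."""
    packet = bytearray(48)
    packet[0] = 0x1B                      # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(48)
    # Transmit timestamp: seconds since 1900 in bytes 40..43 (fraction ignored).
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET

# Example, only meaningful where an NTP server is reachable (as in the logged VM):
# import time; print(time.ctime(sntp_time()))
```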
Sep 8 23:52:14.401313 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 8 23:52:14.402536 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 8 23:52:14.402567 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:52:14.403475 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:52:14.405473 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 8 23:52:14.408440 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 8 23:52:14.413865 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 8 23:52:14.415509 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 8 23:52:14.416848 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 8 23:52:14.421267 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 8 23:52:14.422834 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 8 23:52:14.425492 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 8 23:52:14.427246 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 8 23:52:14.428473 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:52:14.429524 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:52:14.430554 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:52:14.430585 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:52:14.431764 systemd[1]: Starting containerd.service - containerd container runtime... Sep 8 23:52:14.434043 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 8 23:52:14.438737 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 8 23:52:14.438856 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:52:14.442535 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 8 23:52:14.443676 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 8 23:52:14.448993 jq[1484]: false Sep 8 23:52:14.453888 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 8 23:52:14.458379 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 8 23:52:14.464481 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 8 23:52:14.468840 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 8 23:52:14.475220 dbus-daemon[1483]: [system] SELinux support is enabled Sep 8 23:52:14.477210 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 8 23:52:14.479492 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 8 23:52:14.485333 extend-filesystems[1485]: Found loop3 Sep 8 23:52:14.485333 extend-filesystems[1485]: Found loop4 Sep 8 23:52:14.485333 extend-filesystems[1485]: Found loop5 Sep 8 23:52:14.485333 extend-filesystems[1485]: Found sr0 Sep 8 23:52:14.485333 extend-filesystems[1485]: Found vda Sep 8 23:52:14.485333 extend-filesystems[1485]: Found vda1 Sep 8 23:52:14.485333 extend-filesystems[1485]: Found vda2 Sep 8 23:52:14.485333 extend-filesystems[1485]: Found vda3 Sep 8 23:52:14.485333 extend-filesystems[1485]: Found usr Sep 8 23:52:14.485333 extend-filesystems[1485]: Found vda4 Sep 8 23:52:14.485333 extend-filesystems[1485]: Found vda6 Sep 8 23:52:14.485333 extend-filesystems[1485]: Found vda7 Sep 8 23:52:14.485333 extend-filesystems[1485]: Found vda9 Sep 8 23:52:14.485333 extend-filesystems[1485]: Checking size of /dev/vda9 Sep 8 23:52:14.480319 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 8 23:52:14.483832 systemd[1]: Starting update-engine.service - Update Engine... Sep 8 23:52:14.489335 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 8 23:52:14.493497 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 8 23:52:14.500995 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 8 23:52:14.509363 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 8 23:52:14.510827 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 8 23:52:14.511366 systemd[1]: motdgen.service: Deactivated successfully. Sep 8 23:52:14.511860 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 8 23:52:14.514684 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 8 23:52:14.514817 jq[1500]: true Sep 8 23:52:14.515870 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 8 23:52:14.527630 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1404) Sep 8 23:52:14.527705 extend-filesystems[1485]: Resized partition /dev/vda9 Sep 8 23:52:14.532888 extend-filesystems[1513]: resize2fs 1.47.1 (20-May-2024) Sep 8 23:52:14.540503 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 8 23:52:14.545659 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 8 23:52:14.548187 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 8 23:52:14.548238 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 8 23:52:14.549133 update_engine[1497]: I20250908 23:52:14.549002 1497 main.cc:92] Flatcar Update Engine starting Sep 8 23:52:14.549830 jq[1508]: true Sep 8 23:52:14.550294 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 8 23:52:14.550325 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 8 23:52:14.560588 update_engine[1497]: I20250908 23:52:14.560364 1497 update_check_scheduler.cc:74] Next update check in 6m13s Sep 8 23:52:14.565996 systemd[1]: Started update-engine.service - Update Engine. Sep 8 23:52:14.567809 tar[1507]: linux-amd64/LICENSE Sep 8 23:52:14.567809 tar[1507]: linux-amd64/helm Sep 8 23:52:14.569646 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 8 23:52:14.590681 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 8 23:52:14.690368 systemd-logind[1496]: Watching system buttons on /dev/input/event1 (Power Button) Sep 8 23:52:14.690414 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 8 23:52:14.690827 extend-filesystems[1513]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 8 23:52:14.690827 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 8 23:52:14.690827 extend-filesystems[1513]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 8 23:52:14.699593 extend-filesystems[1485]: Resized filesystem in /dev/vda9 Sep 8 23:52:14.693752 systemd-logind[1496]: New seat seat0. Sep 8 23:52:14.694330 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 8 23:52:14.702713 bash[1537]: Updated "/home/core/.ssh/authorized_keys" Sep 8 23:52:14.695248 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 8 23:52:14.700845 systemd[1]: Started systemd-logind.service - User Login Management. Sep 8 23:52:14.706757 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 8 23:52:14.717540 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 8 23:52:14.845186 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 8 23:52:14.848064 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 8 23:52:14.912398 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 8 23:52:14.934325 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 8 23:52:14.950435 systemd[1]: issuegen.service: Deactivated successfully. Sep 8 23:52:14.951018 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 8 23:52:14.963045 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 8 23:52:15.048976 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 8 23:52:15.063298 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 8 23:52:15.066933 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 8 23:52:15.068503 systemd[1]: Reached target getty.target - Login Prompts. Sep 8 23:52:15.277061 containerd[1509]: time="2025-09-08T23:52:15.276564911Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 8 23:52:15.325078 containerd[1509]: time="2025-09-08T23:52:15.324977282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:15.328478 containerd[1509]: time="2025-09-08T23:52:15.328411638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:52:15.328478 containerd[1509]: time="2025-09-08T23:52:15.328462402Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 8 23:52:15.328577 containerd[1509]: time="2025-09-08T23:52:15.328503637Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 8 23:52:15.328889 containerd[1509]: time="2025-09-08T23:52:15.328847676Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 8 23:52:15.328889 containerd[1509]: time="2025-09-08T23:52:15.328878924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:15.329059 containerd[1509]: time="2025-09-08T23:52:15.329025213Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:52:15.329059 containerd[1509]: time="2025-09-08T23:52:15.329047423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:15.329386 containerd[1509]: time="2025-09-08T23:52:15.329350924Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:52:15.329386 containerd[1509]: time="2025-09-08T23:52:15.329372966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:15.329455 containerd[1509]: time="2025-09-08T23:52:15.329390463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:52:15.329455 containerd[1509]: time="2025-09-08T23:52:15.329404134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:15.329932 containerd[1509]: time="2025-09-08T23:52:15.329900522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:15.330256 containerd[1509]: time="2025-09-08T23:52:15.330221730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:52:15.330446 containerd[1509]: time="2025-09-08T23:52:15.330414197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:52:15.330446 containerd[1509]: time="2025-09-08T23:52:15.330435000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 8 23:52:15.330696 containerd[1509]: time="2025-09-08T23:52:15.330645404Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 8 23:52:15.330794 containerd[1509]: time="2025-09-08T23:52:15.330771108Z" level=info msg="metadata content store policy set" policy=shared Sep 8 23:52:15.340108 containerd[1509]: time="2025-09-08T23:52:15.340025676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 8 23:52:15.340209 containerd[1509]: time="2025-09-08T23:52:15.340184977Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 8 23:52:15.340255 containerd[1509]: time="2025-09-08T23:52:15.340212740Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 8 23:52:15.340255 containerd[1509]: time="2025-09-08T23:52:15.340231326Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 8 23:52:15.340255 containerd[1509]: time="2025-09-08T23:52:15.340245048Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 8 23:52:15.340484 containerd[1509]: time="2025-09-08T23:52:15.340438604Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 8 23:52:15.340846 containerd[1509]: time="2025-09-08T23:52:15.340802655Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 8 23:52:15.341030 containerd[1509]: time="2025-09-08T23:52:15.340947476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 8 23:52:15.341030 containerd[1509]: time="2025-09-08T23:52:15.340964673Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 8 23:52:15.341030 containerd[1509]: time="2025-09-08T23:52:15.340978704Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 8 23:52:15.341030 containerd[1509]: time="2025-09-08T23:52:15.340991618Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 8 23:52:15.341030 containerd[1509]: time="2025-09-08T23:52:15.341005349Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 8 23:52:15.341030 containerd[1509]: time="2025-09-08T23:52:15.341016514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 8 23:52:15.341030 containerd[1509]: time="2025-09-08T23:52:15.341029977Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341054055Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341077255Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341101323Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341115284Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341146354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341163821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341182346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341217430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341237394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341251656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341263560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341277082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341289516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341305 containerd[1509]: time="2025-09-08T23:52:15.341304786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341316809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341330822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341344544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341358205Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341378758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341390803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341401459Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341460701Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341481823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341492569Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341506221Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341519343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341544051Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 8 23:52:15.341812 containerd[1509]: time="2025-09-08T23:52:15.341573213Z" level=info msg="NRI interface is disabled by configuration." Sep 8 23:52:15.342449 containerd[1509]: time="2025-09-08T23:52:15.341624845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 8 23:52:15.342482 containerd[1509]: time="2025-09-08T23:52:15.342002798Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 8 23:52:15.342482 containerd[1509]: time="2025-09-08T23:52:15.342052223Z" level=info msg="Connect containerd service" Sep 8 23:52:15.342482 containerd[1509]: time="2025-09-08T23:52:15.342100869Z" level=info msg="using legacy CRI server" Sep 8 23:52:15.342482 containerd[1509]: time="2025-09-08T23:52:15.342108729Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 8 23:52:15.342482 containerd[1509]: time="2025-09-08T23:52:15.342246638Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 8 23:52:15.343234 containerd[1509]: time="2025-09-08T23:52:15.343205488Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:52:15.343866 containerd[1509]: time="2025-09-08T23:52:15.343652251Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 8 23:52:15.343866 containerd[1509]: time="2025-09-08T23:52:15.343684439Z" level=info msg="Start subscribing containerd event" Sep 8 23:52:15.343955 containerd[1509]: time="2025-09-08T23:52:15.343740815Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 8 23:52:15.343955 containerd[1509]: time="2025-09-08T23:52:15.343875557Z" level=info msg="Start recovering state" Sep 8 23:52:15.344858 containerd[1509]: time="2025-09-08T23:52:15.344823931Z" level=info msg="Start event monitor" Sep 8 23:52:15.344936 containerd[1509]: time="2025-09-08T23:52:15.344864248Z" level=info msg="Start snapshots syncer" Sep 8 23:52:15.344936 containerd[1509]: time="2025-09-08T23:52:15.344895137Z" level=info msg="Start cni network conf syncer for default" Sep 8 23:52:15.344936 containerd[1509]: time="2025-09-08T23:52:15.344909129Z" level=info msg="Start streaming server" Sep 8 23:52:15.345699 systemd[1]: Started containerd.service - containerd container runtime. Sep 8 23:52:15.348069 containerd[1509]: time="2025-09-08T23:52:15.347718637Z" level=info msg="containerd successfully booted in 0.072593s" Sep 8 23:52:15.360673 tar[1507]: linux-amd64/README.md Sep 8 23:52:15.384320 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 8 23:52:15.430836 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 8 23:52:15.440931 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:53824.service - OpenSSH per-connection server daemon (10.0.0.1:53824). Sep 8 23:52:15.495114 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 53824 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:52:15.497801 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:15.512477 systemd-logind[1496]: New session 1 of user core. Sep 8 23:52:15.514248 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 8 23:52:15.525939 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 8 23:52:15.543688 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 8 23:52:15.559088 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 8 23:52:15.564694 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 8 23:52:15.567730 systemd-logind[1496]: New session c1 of user core. Sep 8 23:52:15.739730 systemd[1579]: Queued start job for default target default.target. Sep 8 23:52:15.750152 systemd[1579]: Created slice app.slice - User Application Slice. Sep 8 23:52:15.750181 systemd[1579]: Reached target paths.target - Paths. Sep 8 23:52:15.750229 systemd[1579]: Reached target timers.target - Timers. Sep 8 23:52:15.752093 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 8 23:52:15.765766 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 8 23:52:15.765970 systemd[1579]: Reached target sockets.target - Sockets. Sep 8 23:52:15.766044 systemd[1579]: Reached target basic.target - Basic System. Sep 8 23:52:15.766106 systemd[1579]: Reached target default.target - Main User Target. Sep 8 23:52:15.766170 systemd[1579]: Startup finished in 186ms. Sep 8 23:52:15.766642 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 8 23:52:15.770226 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 8 23:52:15.842708 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:53826.service - OpenSSH per-connection server daemon (10.0.0.1:53826). Sep 8 23:52:15.893402 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 53826 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:52:15.895537 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:15.900999 systemd-logind[1496]: New session 2 of user core. Sep 8 23:52:15.910788 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 8 23:52:15.966082 sshd[1592]: Connection closed by 10.0.0.1 port 53826 Sep 8 23:52:15.966530 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Sep 8 23:52:15.989855 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:53826.service: Deactivated successfully. Sep 8 23:52:15.992477 systemd[1]: session-2.scope: Deactivated successfully. Sep 8 23:52:15.994904 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit. Sep 8 23:52:16.004894 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:53842.service - OpenSSH per-connection server daemon (10.0.0.1:53842). Sep 8 23:52:16.006745 systemd-networkd[1432]: eth0: Gained IPv6LL Sep 8 23:52:16.007213 systemd-logind[1496]: Removed session 2. Sep 8 23:52:16.010130 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 8 23:52:16.011988 systemd[1]: Reached target network-online.target - Network is Online. Sep 8 23:52:16.014838 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:52:16.017294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:52:16.021143 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 8 23:52:16.045860 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 8 23:52:16.049293 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:52:16.049594 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:52:16.052148 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 8 23:52:16.066148 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 53842 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:52:16.068589 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:16.074659 systemd-logind[1496]: New session 3 of user core. Sep 8 23:52:16.084778 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 8 23:52:16.224508 sshd[1618]: Connection closed by 10.0.0.1 port 53842 Sep 8 23:52:16.224982 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Sep 8 23:52:16.230411 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:53842.service: Deactivated successfully. Sep 8 23:52:16.232797 systemd[1]: session-3.scope: Deactivated successfully. Sep 8 23:52:16.233517 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit. Sep 8 23:52:16.234500 systemd-logind[1496]: Removed session 3. Sep 8 23:52:18.402763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:52:18.406620 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 8 23:52:18.417924 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:52:18.422442 systemd[1]: Startup finished in 1.076s (kernel) + 8.552s (initrd) + 8.151s (userspace) = 17.780s. Sep 8 23:52:20.406555 kubelet[1628]: E0908 23:52:20.403106 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:52:20.415070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:52:20.415358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:52:20.416015 systemd[1]: kubelet.service: Consumed 3.135s CPU time, 267.7M memory peak. Sep 8 23:52:26.215931 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:40320.service - OpenSSH per-connection server daemon (10.0.0.1:40320). Sep 8 23:52:26.258288 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 40320 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:52:26.259899 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:26.264175 systemd-logind[1496]: New session 4 of user core. Sep 8 23:52:26.286746 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 8 23:52:26.340779 sshd[1643]: Connection closed by 10.0.0.1 port 40320 Sep 8 23:52:26.341157 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Sep 8 23:52:26.350648 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:40320.service: Deactivated successfully. Sep 8 23:52:26.352853 systemd[1]: session-4.scope: Deactivated successfully. Sep 8 23:52:26.354585 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit. Sep 8 23:52:26.368874 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:40336.service - OpenSSH per-connection server daemon (10.0.0.1:40336). Sep 8 23:52:26.369857 systemd-logind[1496]: Removed session 4. 
Sep 8 23:52:26.407418 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 40336 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:52:26.408905 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:26.413316 systemd-logind[1496]: New session 5 of user core. Sep 8 23:52:26.423750 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 8 23:52:26.473222 sshd[1651]: Connection closed by 10.0.0.1 port 40336 Sep 8 23:52:26.473592 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Sep 8 23:52:26.489377 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:40336.service: Deactivated successfully. Sep 8 23:52:26.491501 systemd[1]: session-5.scope: Deactivated successfully. Sep 8 23:52:26.493155 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit. Sep 8 23:52:26.502934 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:40346.service - OpenSSH per-connection server daemon (10.0.0.1:40346). Sep 8 23:52:26.503862 systemd-logind[1496]: Removed session 5. Sep 8 23:52:26.541212 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 40346 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:52:26.542703 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:26.547370 systemd-logind[1496]: New session 6 of user core. Sep 8 23:52:26.558788 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 8 23:52:26.612388 sshd[1659]: Connection closed by 10.0.0.1 port 40346 Sep 8 23:52:26.612748 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Sep 8 23:52:26.625819 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:40346.service: Deactivated successfully. Sep 8 23:52:26.628107 systemd[1]: session-6.scope: Deactivated successfully. Sep 8 23:52:26.629882 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit. Sep 8 23:52:26.639863 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:40358.service - OpenSSH per-connection server daemon (10.0.0.1:40358). Sep 8 23:52:26.640827 systemd-logind[1496]: Removed session 6. Sep 8 23:52:26.678288 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 40358 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:52:26.679806 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:26.684210 systemd-logind[1496]: New session 7 of user core. Sep 8 23:52:26.699758 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 8 23:52:26.936384 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 8 23:52:26.936856 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:52:26.957496 sudo[1668]: pam_unix(sudo:session): session closed for user root Sep 8 23:52:26.958986 sshd[1667]: Connection closed by 10.0.0.1 port 40358 Sep 8 23:52:26.959357 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Sep 8 23:52:26.972278 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:40358.service: Deactivated successfully. Sep 8 23:52:26.974090 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:52:26.975498 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Sep 8 23:52:26.976970 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:40374.service - OpenSSH per-connection server daemon (10.0.0.1:40374). Sep 8 23:52:26.977685 systemd-logind[1496]: Removed session 7. 
Sep 8 23:52:27.018815 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 40374 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:52:27.020210 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:27.024762 systemd-logind[1496]: New session 8 of user core. Sep 8 23:52:27.042748 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 8 23:52:27.097162 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 8 23:52:27.097498 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:52:27.101269 sudo[1678]: pam_unix(sudo:session): session closed for user root Sep 8 23:52:27.107772 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 8 23:52:27.108105 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:52:27.131886 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:52:27.167991 augenrules[1700]: No rules Sep 8 23:52:27.170011 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:52:27.170311 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:52:27.171577 sudo[1677]: pam_unix(sudo:session): session closed for user root Sep 8 23:52:27.173098 sshd[1676]: Connection closed by 10.0.0.1 port 40374 Sep 8 23:52:27.173486 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Sep 8 23:52:27.186299 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:40374.service: Deactivated successfully. Sep 8 23:52:27.188089 systemd[1]: session-8.scope: Deactivated successfully. Sep 8 23:52:27.189659 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Sep 8 23:52:27.197898 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:40390.service - OpenSSH per-connection server daemon (10.0.0.1:40390). Sep 8 23:52:27.198759 systemd-logind[1496]: Removed session 8. Sep 8 23:52:27.236980 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 40390 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:52:27.238677 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:52:27.243356 systemd-logind[1496]: New session 9 of user core. Sep 8 23:52:27.256796 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 8 23:52:27.311457 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 8 23:52:27.311845 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:52:28.207151 (dockerd)[1732]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 8 23:52:28.207182 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 8 23:52:28.723890 dockerd[1732]: time="2025-09-08T23:52:28.723801721Z" level=info msg="Starting up" Sep 8 23:52:29.357479 dockerd[1732]: time="2025-09-08T23:52:29.357384613Z" level=info msg="Loading containers: start." Sep 8 23:52:29.585650 kernel: Initializing XFRM netlink socket Sep 8 23:52:29.687085 systemd-networkd[1432]: docker0: Link UP Sep 8 23:52:29.726113 dockerd[1732]: time="2025-09-08T23:52:29.726051202Z" level=info msg="Loading containers: done." 
Sep 8 23:52:29.744980 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1909713994-merged.mount: Deactivated successfully. Sep 8 23:52:29.794712 dockerd[1732]: time="2025-09-08T23:52:29.794640940Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 8 23:52:29.794867 dockerd[1732]: time="2025-09-08T23:52:29.794791258Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 8 23:52:29.794984 dockerd[1732]: time="2025-09-08T23:52:29.794958296Z" level=info msg="Daemon has completed initialization" Sep 8 23:52:30.134931 dockerd[1732]: time="2025-09-08T23:52:30.134865287Z" level=info msg="API listen on /run/docker.sock" Sep 8 23:52:30.135106 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 8 23:52:30.474675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 8 23:52:30.557954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:52:30.813234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:52:30.817537 (kubelet)[1937]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:52:31.043693 kubelet[1937]: E0908 23:52:31.043588 1937 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:52:31.051296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:52:31.051559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:52:31.052110 systemd[1]: kubelet.service: Consumed 483ms CPU time, 111.3M memory peak. Sep 8 23:52:31.494726 containerd[1509]: time="2025-09-08T23:52:31.493839905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 8 23:52:38.318387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487415638.mount: Deactivated successfully. 
Sep 8 23:52:40.510202 containerd[1509]: time="2025-09-08T23:52:40.510133504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:40.511185 containerd[1509]: time="2025-09-08T23:52:40.511138351Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 8 23:52:40.512495 containerd[1509]: time="2025-09-08T23:52:40.512453765Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:40.518579 containerd[1509]: time="2025-09-08T23:52:40.518534825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:40.519895 containerd[1509]: time="2025-09-08T23:52:40.519845412Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 9.025904655s" Sep 8 23:52:40.519961 containerd[1509]: time="2025-09-08T23:52:40.519902494Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 8 23:52:40.520874 containerd[1509]: time="2025-09-08T23:52:40.520834370Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 8 23:52:41.223861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 8 23:52:41.239855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:52:41.431814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:52:41.432103 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:52:41.582360 kubelet[2009]: E0908 23:52:41.582178 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:52:41.587411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:52:41.587662 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:52:41.588094 systemd[1]: kubelet.service: Consumed 250ms CPU time, 112.8M memory peak. 
Sep 8 23:52:43.189924 containerd[1509]: time="2025-09-08T23:52:43.189857986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:43.190753 containerd[1509]: time="2025-09-08T23:52:43.190678246Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 8 23:52:43.191894 containerd[1509]: time="2025-09-08T23:52:43.191859364Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:43.194699 containerd[1509]: time="2025-09-08T23:52:43.194660184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:43.196033 containerd[1509]: time="2025-09-08T23:52:43.196006756Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 2.675140384s" Sep 8 23:52:43.196128 containerd[1509]: time="2025-09-08T23:52:43.196035815Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 8 23:52:43.196837 containerd[1509]: time="2025-09-08T23:52:43.196627737Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 8 23:52:44.436537 containerd[1509]: time="2025-09-08T23:52:44.436459469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:44.437649 containerd[1509]: time="2025-09-08T23:52:44.437154051Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 8 23:52:44.438357 containerd[1509]: time="2025-09-08T23:52:44.438307619Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:44.441532 containerd[1509]: time="2025-09-08T23:52:44.441490281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:44.442624 containerd[1509]: time="2025-09-08T23:52:44.442561035Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.24590546s" Sep 8 23:52:44.442624 containerd[1509]: time="2025-09-08T23:52:44.442617262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 8 23:52:44.443438 containerd[1509]: 
time="2025-09-08T23:52:44.443147748Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 8 23:52:45.591975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533614215.mount: Deactivated successfully. Sep 8 23:52:46.423297 containerd[1509]: time="2025-09-08T23:52:46.423229910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:46.424176 containerd[1509]: time="2025-09-08T23:52:46.424131624Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 8 23:52:46.425452 containerd[1509]: time="2025-09-08T23:52:46.425401369Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:46.428469 containerd[1509]: time="2025-09-08T23:52:46.428397949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:46.429619 containerd[1509]: time="2025-09-08T23:52:46.429550745Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.986355982s" Sep 8 23:52:46.429668 containerd[1509]: time="2025-09-08T23:52:46.429630671Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 8 23:52:46.430245 containerd[1509]: time="2025-09-08T23:52:46.430208496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 8 23:52:47.139139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203562035.mount: Deactivated successfully. 
Sep 8 23:52:50.678882 containerd[1509]: time="2025-09-08T23:52:50.678779652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:50.679520 containerd[1509]: time="2025-09-08T23:52:50.679433651Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 8 23:52:50.680883 containerd[1509]: time="2025-09-08T23:52:50.680794100Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:50.684058 containerd[1509]: time="2025-09-08T23:52:50.684008580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:50.685211 containerd[1509]: time="2025-09-08T23:52:50.685158414Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.25491737s" Sep 8 23:52:50.685211 containerd[1509]: time="2025-09-08T23:52:50.685195852Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 8 23:52:50.685771 containerd[1509]: time="2025-09-08T23:52:50.685732180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 8 23:52:51.724093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 8 23:52:51.743970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:52:52.045101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:52:52.069420 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:52:52.192691 kubelet[2093]: E0908 23:52:52.191338 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:52:52.203433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:52:52.203712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:52:52.204211 systemd[1]: kubelet.service: Consumed 336ms CPU time, 112.4M memory peak. Sep 8 23:52:52.693117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801345040.mount: Deactivated successfully. 
Sep 8 23:52:52.703381 containerd[1509]: time="2025-09-08T23:52:52.703295412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:52.704251 containerd[1509]: time="2025-09-08T23:52:52.704153781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 8 23:52:52.706012 containerd[1509]: time="2025-09-08T23:52:52.705891753Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:52.713025 containerd[1509]: time="2025-09-08T23:52:52.712892097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:52:52.714826 containerd[1509]: time="2025-09-08T23:52:52.713816140Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.028050387s" Sep 8 23:52:52.714826 containerd[1509]: time="2025-09-08T23:52:52.713866230Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 8 23:52:52.715520 containerd[1509]: time="2025-09-08T23:52:52.715245002Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 8 23:52:53.907593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766314951.mount: Deactivated successfully. Sep 8 23:52:59.672860 update_engine[1497]: I20250908 23:52:59.672708 1497 update_attempter.cc:509] Updating boot flags... 
Sep 8 23:52:59.717696 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2166) Sep 8 23:52:59.767267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2170) Sep 8 23:52:59.797776 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2170) Sep 8 23:53:00.069236 containerd[1509]: time="2025-09-08T23:53:00.069141008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:00.072475 containerd[1509]: time="2025-09-08T23:53:00.072407735Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 8 23:53:00.076473 containerd[1509]: time="2025-09-08T23:53:00.076411265Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:00.112641 containerd[1509]: time="2025-09-08T23:53:00.112557135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:00.114730 containerd[1509]: time="2025-09-08T23:53:00.114650175Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 7.399373932s" Sep 8 23:53:00.114730 containerd[1509]: time="2025-09-08T23:53:00.114714825Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 8 23:53:02.223888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 8 23:53:02.235845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:02.424833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:02.429124 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:02.430441 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:53:02.430803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:02.431039 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.2M memory peak. Sep 8 23:53:02.433499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:02.461008 systemd[1]: Reload requested from client PID 2216 ('systemctl') (unit session-9.scope)... Sep 8 23:53:02.461028 systemd[1]: Reloading... Sep 8 23:53:02.650647 zram_generator::config[2263]: No configuration found. Sep 8 23:53:04.514012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:53:04.633377 systemd[1]: Reloading finished in 2171 ms. Sep 8 23:53:04.693932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 8 23:53:04.700404 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:53:04.704337 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:04.706504 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:53:04.706935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:04.707011 systemd[1]: kubelet.service: Consumed 281ms CPU time, 99.3M memory peak. Sep 8 23:53:04.720988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:04.905479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:04.911452 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:53:04.954266 kubelet[2311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:53:04.954266 kubelet[2311]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:53:04.954266 kubelet[2311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:53:04.954741 kubelet[2311]: I0908 23:53:04.954349 2311 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:53:07.601653 kubelet[2311]: I0908 23:53:07.601562 2311 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 8 23:53:07.601653 kubelet[2311]: I0908 23:53:07.601635 2311 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:53:07.602321 kubelet[2311]: I0908 23:53:07.602070 2311 server.go:954] "Client rotation is on, will bootstrap in background" Sep 8 23:53:07.624911 kubelet[2311]: E0908 23:53:07.624829 2311 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:07.626274 kubelet[2311]: I0908 23:53:07.626242 2311 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:53:07.631870 kubelet[2311]: E0908 23:53:07.631823 2311 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:53:07.631870 kubelet[2311]: I0908 23:53:07.631862 2311 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:53:07.637680 kubelet[2311]: I0908 23:53:07.637648 2311 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:53:07.639076 kubelet[2311]: I0908 23:53:07.639022 2311 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:53:07.639298 kubelet[2311]: I0908 23:53:07.639069 2311 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:53:07.639409 kubelet[2311]: I0908 23:53:07.639309 2311 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:53:07.639409 kubelet[2311]: I0908 23:53:07.639322 2311 container_manager_linux.go:304] "Creating device plugin manager" Sep 8 23:53:07.639519 kubelet[2311]: I0908 23:53:07.639501 2311 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:53:07.644428 kubelet[2311]: I0908 23:53:07.644383 2311 kubelet.go:446] "Attempting to sync node with API server" Sep 8 23:53:07.644642 kubelet[2311]: I0908 23:53:07.644436 2311 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:53:07.644642 kubelet[2311]: I0908 23:53:07.644463 2311 kubelet.go:352] "Adding apiserver pod source" Sep 8 23:53:07.644642 kubelet[2311]: I0908 23:53:07.644478 2311 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:53:07.648044 kubelet[2311]: I0908 23:53:07.647687 2311 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:53:07.648170 kubelet[2311]: W0908 23:53:07.648119 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 8 23:53:07.648224 kubelet[2311]: I0908 23:53:07.648187 2311 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:53:07.648224 kubelet[2311]: E0908 23:53:07.648188 2311 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:07.648295 kubelet[2311]: W0908 23:53:07.648279 2311 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 8 23:53:07.648702 kubelet[2311]: W0908 23:53:07.648662 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 8 23:53:07.648800 kubelet[2311]: E0908 23:53:07.648709 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:07.650776 kubelet[2311]: I0908 23:53:07.650746 2311 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:53:07.650862 kubelet[2311]: I0908 23:53:07.650806 2311 server.go:1287] "Started kubelet" Sep 8 23:53:07.652501 kubelet[2311]: I0908 23:53:07.652457 2311 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:53:07.656646 kubelet[2311]: I0908 23:53:07.655765 2311 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:53:07.656646 kubelet[2311]: E0908 23:53:07.656025 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:07.656646 kubelet[2311]: I0908 23:53:07.656075 2311 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:53:07.656646 kubelet[2311]: I0908 23:53:07.656259 2311 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:53:07.656646 kubelet[2311]: I0908 23:53:07.656363 2311 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:53:07.656921 kubelet[2311]: W0908 23:53:07.656701 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 8 23:53:07.656921 kubelet[2311]: E0908 23:53:07.656754 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:07.657827 kubelet[2311]: E0908 23:53:07.657011 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Sep 8 23:53:07.657827 kubelet[2311]: I0908 23:53:07.657133 2311 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:53:07.657827 kubelet[2311]: I0908 23:53:07.657540 2311 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:53:07.658066 kubelet[2311]: E0908 23:53:07.656431 2311 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186373c936de0bac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:53:07.650767788 +0000 UTC m=+2.734518256,LastTimestamp:2025-09-08 23:53:07.650767788 +0000 UTC m=+2.734518256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:53:07.658393 kubelet[2311]: I0908 23:53:07.658365 2311 server.go:479] "Adding debug handlers to kubelet server" Sep 8 23:53:07.658583 kubelet[2311]: I0908 23:53:07.658555 2311 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:53:07.659512 kubelet[2311]: I0908 23:53:07.659483 2311 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:53:07.661392 kubelet[2311]: I0908 23:53:07.661355 2311 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:53:07.661392 kubelet[2311]: I0908 23:53:07.661382 2311 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:53:07.680688 kubelet[2311]: I0908 23:53:07.680651 2311 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:53:07.680901 kubelet[2311]: I0908 23:53:07.680880 2311 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:53:07.680980 kubelet[2311]: I0908 23:53:07.680908 2311 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:53:07.723931 kubelet[2311]: I0908 23:53:07.723875 2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:53:07.725483 kubelet[2311]: I0908 23:53:07.725459 2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 8 23:53:07.725552 kubelet[2311]: I0908 23:53:07.725502 2311 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 8 23:53:07.725552 kubelet[2311]: I0908 23:53:07.725540 2311 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 8 23:53:07.725597 kubelet[2311]: I0908 23:53:07.725555 2311 kubelet.go:2382] "Starting kubelet main sync loop" Sep 8 23:53:07.725698 kubelet[2311]: E0908 23:53:07.725649 2311 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:53:07.726337 kubelet[2311]: W0908 23:53:07.726291 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 8 23:53:07.726404 kubelet[2311]: E0908 23:53:07.726351 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:07.727856 kubelet[2311]: I0908 23:53:07.726905 2311 policy_none.go:49] "None policy: Start" Sep 8 23:53:07.727856 kubelet[2311]: I0908 23:53:07.726948 2311 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:53:07.727856 kubelet[2311]: I0908 23:53:07.726965 2311 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:53:07.756215 kubelet[2311]: E0908 23:53:07.756148 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:07.826061 kubelet[2311]: E0908 23:53:07.826007 2311 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:53:07.831502 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 8 23:53:07.847526 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 8 23:53:07.851212 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 8 23:53:07.857342 kubelet[2311]: E0908 23:53:07.857227 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:07.857778 kubelet[2311]: E0908 23:53:07.857726 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Sep 8 23:53:07.862967 kubelet[2311]: I0908 23:53:07.862947 2311 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:53:07.863262 kubelet[2311]: I0908 23:53:07.863188 2311 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:53:07.863262 kubelet[2311]: I0908 23:53:07.863204 2311 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:53:07.863452 kubelet[2311]: I0908 23:53:07.863439 2311 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:53:07.865076 kubelet[2311]: E0908 23:53:07.865018 2311 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 8 23:53:07.865076 kubelet[2311]: E0908 23:53:07.865060 2311 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 8 23:53:07.965510 kubelet[2311]: I0908 23:53:07.965465 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:53:07.965957 kubelet[2311]: E0908 23:53:07.965926 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 8 23:53:08.040585 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 8 23:53:08.059130 kubelet[2311]: I0908 23:53:08.059041 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/143e4075d0f6c7c79acc3a74244d456c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"143e4075d0f6c7c79acc3a74244d456c\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:08.059130 kubelet[2311]: I0908 23:53:08.059109 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/143e4075d0f6c7c79acc3a74244d456c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"143e4075d0f6c7c79acc3a74244d456c\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:08.059130 kubelet[2311]: I0908 23:53:08.059146 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:08.059511 kubelet[2311]: I0908 23:53:08.059175 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:08.059511 kubelet[2311]: I0908 23:53:08.059200 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:08.059511 kubelet[2311]: I0908 23:53:08.059336 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/143e4075d0f6c7c79acc3a74244d456c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"143e4075d0f6c7c79acc3a74244d456c\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:08.059511 kubelet[2311]: I0908 23:53:08.059354 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:08.059511 kubelet[2311]: I0908 23:53:08.059375 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:08.059776 kubelet[2311]: I0908 23:53:08.059399 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:08.063919 kubelet[2311]: E0908 23:53:08.063868 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:53:08.067711 systemd[1]: Created slice kubepods-burstable-pod143e4075d0f6c7c79acc3a74244d456c.slice - libcontainer container kubepods-burstable-pod143e4075d0f6c7c79acc3a74244d456c.slice. Sep 8 23:53:08.077842 kubelet[2311]: E0908 23:53:08.077783 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:53:08.081237 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 8 23:53:08.083349 kubelet[2311]: E0908 23:53:08.083308 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:53:08.171031 kubelet[2311]: I0908 23:53:08.170365 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:53:08.171031 kubelet[2311]: E0908 23:53:08.170741 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 8 23:53:08.258899 kubelet[2311]: E0908 23:53:08.258831 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Sep 8 23:53:08.364880 kubelet[2311]: E0908 23:53:08.364812 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:08.365804 containerd[1509]: time="2025-09-08T23:53:08.365740900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:08.378989 kubelet[2311]: E0908 23:53:08.378924 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:08.379516 containerd[1509]: time="2025-09-08T23:53:08.379468116Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:143e4075d0f6c7c79acc3a74244d456c,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:08.384780 kubelet[2311]: E0908 23:53:08.384741 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:08.385118 containerd[1509]: time="2025-09-08T23:53:08.385084986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:08.573013 kubelet[2311]: I0908 23:53:08.572887 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:53:08.573626 kubelet[2311]: E0908 23:53:08.573522 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 8 23:53:08.637437 kubelet[2311]: W0908 23:53:08.637354 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 8 23:53:08.637437 kubelet[2311]: E0908 23:53:08.637429 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:08.742131 kubelet[2311]: W0908 23:53:08.742044 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 8 23:53:08.742131 kubelet[2311]: E0908 23:53:08.742128 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:08.768531 kubelet[2311]: W0908 23:53:08.768446 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 8 23:53:08.768662 kubelet[2311]: E0908 23:53:08.768545 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:09.060418 kubelet[2311]: E0908 23:53:09.060257 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s" Sep 8 23:53:09.116347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126390369.mount: Deactivated successfully. 
Sep 8 23:53:09.124270 containerd[1509]: time="2025-09-08T23:53:09.124199823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:53:09.130324 containerd[1509]: time="2025-09-08T23:53:09.130255229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 8 23:53:09.132285 containerd[1509]: time="2025-09-08T23:53:09.132211858Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:53:09.134165 containerd[1509]: time="2025-09-08T23:53:09.134055105Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:53:09.137983 containerd[1509]: time="2025-09-08T23:53:09.137940422Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:53:09.139128 containerd[1509]: time="2025-09-08T23:53:09.139077634Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:53:09.140059 containerd[1509]: time="2025-09-08T23:53:09.140031330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:53:09.140716 containerd[1509]: time="2025-09-08T23:53:09.140633400Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:53:09.141217 containerd[1509]: time="2025-09-08T23:53:09.141172128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 775.289567ms" Sep 8 23:53:09.144715 containerd[1509]: time="2025-09-08T23:53:09.144686727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 765.097586ms" Sep 8 23:53:09.146964 containerd[1509]: time="2025-09-08T23:53:09.146930405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 761.767792ms" Sep 8 23:53:09.159974 kubelet[2311]: W0908 23:53:09.159920 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 8 23:53:09.160066 kubelet[2311]: E0908 
23:53:09.159982 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:09.383557 kubelet[2311]: I0908 23:53:09.383420 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:53:09.383900 kubelet[2311]: E0908 23:53:09.383860 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 8 23:53:09.411935 containerd[1509]: time="2025-09-08T23:53:09.411101264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:09.411935 containerd[1509]: time="2025-09-08T23:53:09.411908669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:09.411935 containerd[1509]: time="2025-09-08T23:53:09.411925730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:09.412538 containerd[1509]: time="2025-09-08T23:53:09.412029545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:09.416646 containerd[1509]: time="2025-09-08T23:53:09.414284463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:09.416646 containerd[1509]: time="2025-09-08T23:53:09.414335274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:09.416646 containerd[1509]: time="2025-09-08T23:53:09.414349018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:09.416646 containerd[1509]: time="2025-09-08T23:53:09.414433729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:09.417126 containerd[1509]: time="2025-09-08T23:53:09.417013396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:09.417126 containerd[1509]: time="2025-09-08T23:53:09.417098668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:09.417126 containerd[1509]: time="2025-09-08T23:53:09.417114155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:09.417261 containerd[1509]: time="2025-09-08T23:53:09.417197152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:09.475186 systemd[1]: Started cri-containerd-bfea8a80258329ed41600846d952c1bd6271ea683a38a9a9543885afc56fab24.scope - libcontainer container bfea8a80258329ed41600846d952c1bd6271ea683a38a9a9543885afc56fab24. 
Sep 8 23:53:09.482504 systemd[1]: Started cri-containerd-5ae11fa2dad0e97bd3c51983d7b7edcf43f1cc4559db0484a57cbd7cae8c27c5.scope - libcontainer container 5ae11fa2dad0e97bd3c51983d7b7edcf43f1cc4559db0484a57cbd7cae8c27c5. Sep 8 23:53:09.485278 systemd[1]: Started cri-containerd-d96e3f5b1977d26169453a1dbdcdecac066b1e75aaff57d5aec94d06558a28e5.scope - libcontainer container d96e3f5b1977d26169453a1dbdcdecac066b1e75aaff57d5aec94d06558a28e5. Sep 8 23:53:09.546214 containerd[1509]: time="2025-09-08T23:53:09.546173079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfea8a80258329ed41600846d952c1bd6271ea683a38a9a9543885afc56fab24\"" Sep 8 23:53:09.548377 kubelet[2311]: E0908 23:53:09.548351 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:09.550422 containerd[1509]: time="2025-09-08T23:53:09.550361155Z" level=info msg="CreateContainer within sandbox \"bfea8a80258329ed41600846d952c1bd6271ea683a38a9a9543885afc56fab24\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 8 23:53:09.551953 containerd[1509]: time="2025-09-08T23:53:09.551933430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"d96e3f5b1977d26169453a1dbdcdecac066b1e75aaff57d5aec94d06558a28e5\"" Sep 8 23:53:09.552485 kubelet[2311]: E0908 23:53:09.552455 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:09.554558 containerd[1509]: time="2025-09-08T23:53:09.554233499Z" level=info msg="CreateContainer within sandbox \"d96e3f5b1977d26169453a1dbdcdecac066b1e75aaff57d5aec94d06558a28e5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 8 23:53:09.555642 containerd[1509]: time="2025-09-08T23:53:09.555617319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:143e4075d0f6c7c79acc3a74244d456c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae11fa2dad0e97bd3c51983d7b7edcf43f1cc4559db0484a57cbd7cae8c27c5\"" Sep 8 23:53:09.556257 kubelet[2311]: E0908 23:53:09.556107 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:09.561413 containerd[1509]: time="2025-09-08T23:53:09.561334774Z" level=info msg="CreateContainer within sandbox \"5ae11fa2dad0e97bd3c51983d7b7edcf43f1cc4559db0484a57cbd7cae8c27c5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 8 23:53:09.583682 containerd[1509]: time="2025-09-08T23:53:09.583623812Z" level=info msg="CreateContainer within sandbox \"d96e3f5b1977d26169453a1dbdcdecac066b1e75aaff57d5aec94d06558a28e5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"409a24197b0b37be2759d47db44e4779ae61fa01338d602d1f01ead21e0a7ab6\"" Sep 8 23:53:09.584444 containerd[1509]: time="2025-09-08T23:53:09.584417573Z" level=info msg="StartContainer for \"409a24197b0b37be2759d47db44e4779ae61fa01338d602d1f01ead21e0a7ab6\"" Sep 8 23:53:09.585825 containerd[1509]: time="2025-09-08T23:53:09.585762615Z" level=info msg="CreateContainer 
within sandbox \"bfea8a80258329ed41600846d952c1bd6271ea683a38a9a9543885afc56fab24\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9b0bd3df83a423b15fadb320d2ef08047da1e0e7e51c880733a4d3a172508d98\"" Sep 8 23:53:09.586336 containerd[1509]: time="2025-09-08T23:53:09.586278462Z" level=info msg="StartContainer for \"9b0bd3df83a423b15fadb320d2ef08047da1e0e7e51c880733a4d3a172508d98\"" Sep 8 23:53:09.595781 containerd[1509]: time="2025-09-08T23:53:09.595715550Z" level=info msg="CreateContainer within sandbox \"5ae11fa2dad0e97bd3c51983d7b7edcf43f1cc4559db0484a57cbd7cae8c27c5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"52842e7b796da8ae64087dd2cdcea5a2caf40575ad2b057ca825564f87a7ae7c\"" Sep 8 23:53:09.596474 containerd[1509]: time="2025-09-08T23:53:09.596432494Z" level=info msg="StartContainer for \"52842e7b796da8ae64087dd2cdcea5a2caf40575ad2b057ca825564f87a7ae7c\"" Sep 8 23:53:09.623872 systemd[1]: Started cri-containerd-9b0bd3df83a423b15fadb320d2ef08047da1e0e7e51c880733a4d3a172508d98.scope - libcontainer container 9b0bd3df83a423b15fadb320d2ef08047da1e0e7e51c880733a4d3a172508d98. Sep 8 23:53:09.631793 systemd[1]: Started cri-containerd-409a24197b0b37be2759d47db44e4779ae61fa01338d602d1f01ead21e0a7ab6.scope - libcontainer container 409a24197b0b37be2759d47db44e4779ae61fa01338d602d1f01ead21e0a7ab6. Sep 8 23:53:09.638243 kubelet[2311]: E0908 23:53:09.638102 2311 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:53:09.639370 systemd[1]: Started cri-containerd-52842e7b796da8ae64087dd2cdcea5a2caf40575ad2b057ca825564f87a7ae7c.scope - libcontainer container 52842e7b796da8ae64087dd2cdcea5a2caf40575ad2b057ca825564f87a7ae7c. 
Sep 8 23:53:10.027020 containerd[1509]: time="2025-09-08T23:53:10.026941827Z" level=info msg="StartContainer for \"9b0bd3df83a423b15fadb320d2ef08047da1e0e7e51c880733a4d3a172508d98\" returns successfully" Sep 8 23:53:10.027196 containerd[1509]: time="2025-09-08T23:53:10.027169533Z" level=info msg="StartContainer for \"52842e7b796da8ae64087dd2cdcea5a2caf40575ad2b057ca825564f87a7ae7c\" returns successfully" Sep 8 23:53:10.027228 containerd[1509]: time="2025-09-08T23:53:10.027210255Z" level=info msg="StartContainer for \"409a24197b0b37be2759d47db44e4779ae61fa01338d602d1f01ead21e0a7ab6\" returns successfully" Sep 8 23:53:10.821200 kubelet[2311]: E0908 23:53:10.821149 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:53:10.821806 kubelet[2311]: E0908 23:53:10.821298 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:10.821806 kubelet[2311]: E0908 23:53:10.821667 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:53:10.821806 kubelet[2311]: E0908 23:53:10.821748 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:10.823737 kubelet[2311]: E0908 23:53:10.823708 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:53:10.823818 kubelet[2311]: E0908 23:53:10.823797 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:10.986306 kubelet[2311]: I0908 23:53:10.985911 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:53:11.374118 kubelet[2311]: E0908 23:53:11.374074 2311 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 8 23:53:11.826342 kubelet[2311]: E0908 23:53:11.826100 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:53:11.826342 kubelet[2311]: E0908 23:53:11.826156 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:53:11.826342 kubelet[2311]: E0908 23:53:11.826226 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:53:11.826342 kubelet[2311]: E0908 23:53:11.826227 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:11.826342 kubelet[2311]: E0908 23:53:11.826281 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:11.826342 kubelet[2311]: E0908 23:53:11.826334 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:11.905938 kubelet[2311]: I0908 23:53:11.905885 2311 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:53:11.905938 kubelet[2311]: E0908 23:53:11.905923 2311 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 8 23:53:11.957437 kubelet[2311]: I0908 23:53:11.957373 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:12.703715 kubelet[2311]: E0908 23:53:12.703646 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:12.703715 kubelet[2311]: I0908 23:53:12.703692 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:12.705434 kubelet[2311]: E0908 23:53:12.705405 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:12.705434 kubelet[2311]: I0908 23:53:12.705433 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:12.707141 kubelet[2311]: E0908 23:53:12.707069 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:12.798790 kubelet[2311]: I0908 23:53:12.798741 2311 apiserver.go:52] "Watching apiserver" Sep 8 23:53:12.826342 kubelet[2311]: I0908 23:53:12.826304 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:12.826527 kubelet[2311]: I0908 23:53:12.826455 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:12.827846 kubelet[2311]: E0908 23:53:12.827827 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:12.828018 kubelet[2311]: E0908 23:53:12.827976 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:12.828544 kubelet[2311]: E0908 23:53:12.828524 2311 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:12.828653 kubelet[2311]: E0908 23:53:12.828636 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:12.856949 kubelet[2311]: I0908 23:53:12.856921 2311 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:53:14.756811 kubelet[2311]: I0908 23:53:14.756774 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:14.817192 
kubelet[2311]: E0908 23:53:14.817144 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:14.830621 kubelet[2311]: E0908 23:53:14.830572 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:15.953176 kubelet[2311]: I0908 23:53:15.953096 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:15.984837 kubelet[2311]: E0908 23:53:15.984773 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:16.879363 kubelet[2311]: E0908 23:53:16.879316 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:17.058734 kubelet[2311]: I0908 23:53:17.058677 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:17.240769 kubelet[2311]: E0908 23:53:17.240702 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:17.734828 systemd[1]: Reload requested from client PID 2589 ('systemctl') (unit session-9.scope)... Sep 8 23:53:17.735404 systemd[1]: Reloading... Sep 8 23:53:17.886765 kubelet[2311]: E0908 23:53:17.886638 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:17.924332 kubelet[2311]: I0908 23:53:17.922841 2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.922798262 podStartE2EDuration="2.922798262s" podCreationTimestamp="2025-09-08 23:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:17.855023352 +0000 UTC m=+12.938773820" watchObservedRunningTime="2025-09-08 23:53:17.922798262 +0000 UTC m=+13.006548731" Sep 8 23:53:17.989110 kubelet[2311]: I0908 23:53:17.987151 2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.987122273 podStartE2EDuration="987.122273ms" podCreationTimestamp="2025-09-08 23:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:17.92319337 +0000 UTC m=+13.006943849" watchObservedRunningTime="2025-09-08 23:53:17.987122273 +0000 UTC m=+13.070872741" Sep 8 23:53:18.105737 zram_generator::config[2635]: No configuration found. Sep 8 23:53:18.436132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:53:18.754018 systemd[1]: Reloading finished in 1018 ms. Sep 8 23:53:18.819362 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:18.840245 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 8 23:53:18.840790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:18.840888 systemd[1]: kubelet.service: Consumed 1.173s CPU time, 135M memory peak. Sep 8 23:53:18.883177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:19.407581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:19.416207 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:53:19.586418 kubelet[2680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:53:19.586418 kubelet[2680]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:53:19.586418 kubelet[2680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:53:19.596577 kubelet[2680]: I0908 23:53:19.596401 2680 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:53:19.618568 kubelet[2680]: I0908 23:53:19.617053 2680 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 8 23:53:19.618568 kubelet[2680]: I0908 23:53:19.617087 2680 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:53:19.618568 kubelet[2680]: I0908 23:53:19.617442 2680 server.go:954] "Client rotation is on, will bootstrap in background" Sep 8 23:53:19.621656 kubelet[2680]: I0908 23:53:19.619921 2680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 8 23:53:19.632128 kubelet[2680]: I0908 23:53:19.631924 2680 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:53:19.647650 kubelet[2680]: E0908 23:53:19.646110 2680 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:53:19.647650 kubelet[2680]: I0908 23:53:19.646150 2680 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:53:19.663807 kubelet[2680]: I0908 23:53:19.663623 2680 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:53:19.664595 kubelet[2680]: I0908 23:53:19.663962 2680 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:53:19.664595 kubelet[2680]: I0908 23:53:19.664022 2680 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:53:19.664595 kubelet[2680]: I0908 23:53:19.664268 2680 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:53:19.664595 kubelet[2680]: I0908 23:53:19.664280 2680 container_manager_linux.go:304] "Creating device plugin manager" Sep 8 23:53:19.664856 kubelet[2680]: I0908 23:53:19.664350 2680 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:53:19.664856 kubelet[2680]: I0908 23:53:19.664571 2680 kubelet.go:446] "Attempting to sync node with API server" Sep 8 23:53:19.664856 kubelet[2680]: I0908 23:53:19.664634 2680 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:53:19.664856 kubelet[2680]: I0908 23:53:19.664665 2680 kubelet.go:352] "Adding apiserver pod source" Sep 8 23:53:19.664856 kubelet[2680]: I0908 23:53:19.664679 2680 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:53:19.671640 kubelet[2680]: I0908 23:53:19.667079 2680 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:53:19.671640 kubelet[2680]: I0908 23:53:19.669548 2680 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:53:19.671640 kubelet[2680]: I0908 23:53:19.670408 2680 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:53:19.671640 kubelet[2680]: I0908 23:53:19.670469 2680 server.go:1287] "Started kubelet" Sep 8 23:53:19.671640 kubelet[2680]: I0908 23:53:19.671043 2680 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:53:19.672351 kubelet[2680]: I0908 23:53:19.672291 2680 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Sep 8 23:53:19.673041 kubelet[2680]: I0908 23:53:19.673013 2680 server.go:479] "Adding debug handlers to kubelet server" Sep 8 23:53:19.673041 kubelet[2680]: I0908 23:53:19.672998 2680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:53:19.673403 kubelet[2680]: I0908 23:53:19.673372 2680 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:53:19.687057 kubelet[2680]: I0908 23:53:19.686426 2680 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:53:19.687057 kubelet[2680]: E0908 23:53:19.686633 2680 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:53:19.688494 kubelet[2680]: I0908 23:53:19.688461 2680 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:53:19.688821 kubelet[2680]: I0908 23:53:19.688694 2680 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:53:19.689134 kubelet[2680]: I0908 23:53:19.689065 2680 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:53:19.694202 kubelet[2680]: I0908 23:53:19.693265 2680 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:53:19.694202 kubelet[2680]: I0908 23:53:19.693430 2680 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:53:19.695743 kubelet[2680]: I0908 23:53:19.695710 2680 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:53:19.717563 kubelet[2680]: I0908 23:53:19.717012 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:53:19.730400 kubelet[2680]: I0908 23:53:19.730355 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 8 23:53:19.730664 kubelet[2680]: I0908 23:53:19.730650 2680 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 8 23:53:19.730977 kubelet[2680]: I0908 23:53:19.730960 2680 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 8 23:53:19.731055 kubelet[2680]: I0908 23:53:19.731044 2680 kubelet.go:2382] "Starting kubelet main sync loop" Sep 8 23:53:19.731223 kubelet[2680]: E0908 23:53:19.731191 2680 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:53:19.815342 kubelet[2680]: I0908 23:53:19.815287 2680 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:53:19.815342 kubelet[2680]: I0908 23:53:19.815316 2680 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:53:19.815342 kubelet[2680]: I0908 23:53:19.815353 2680 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:53:19.815669 kubelet[2680]: I0908 23:53:19.815646 2680 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 8 23:53:19.815712 kubelet[2680]: I0908 23:53:19.815664 2680 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 8 23:53:19.815712 kubelet[2680]: I0908 23:53:19.815688 2680 policy_none.go:49] "None policy: Start" Sep 8 23:53:19.815712 kubelet[2680]: I0908 23:53:19.815700 2680 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:53:19.815712 kubelet[2680]: I0908 23:53:19.815714 2680 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:53:19.815850 kubelet[2680]: I0908 23:53:19.815835 2680 state_mem.go:75] "Updated machine memory state" Sep 8 23:53:19.822677 kubelet[2680]: I0908 23:53:19.820968 2680 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:53:19.822677 kubelet[2680]: I0908 23:53:19.821260 2680 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:53:19.822677 kubelet[2680]: I0908 23:53:19.821274 2680 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:53:19.822677 kubelet[2680]: I0908 23:53:19.822003 2680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:53:19.826706 kubelet[2680]: E0908 23:53:19.825985 2680 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 8 23:53:19.837218 kubelet[2680]: I0908 23:53:19.833215 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:19.837218 kubelet[2680]: I0908 23:53:19.836723 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:19.837218 kubelet[2680]: I0908 23:53:19.837047 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:19.891055 kubelet[2680]: I0908 23:53:19.890976 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/143e4075d0f6c7c79acc3a74244d456c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"143e4075d0f6c7c79acc3a74244d456c\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:19.939278 kubelet[2680]: I0908 23:53:19.939079 2680 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:53:19.991590 kubelet[2680]: I0908 23:53:19.991487 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:19.991590 kubelet[2680]: I0908 23:53:19.991575 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:19.991835 kubelet[2680]: I0908 23:53:19.991643 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/143e4075d0f6c7c79acc3a74244d456c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"143e4075d0f6c7c79acc3a74244d456c\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:19.991835 kubelet[2680]: I0908 23:53:19.991672 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/143e4075d0f6c7c79acc3a74244d456c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"143e4075d0f6c7c79acc3a74244d456c\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:19.991835 kubelet[2680]: I0908 23:53:19.991695 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:19.991835 kubelet[2680]: I0908 23:53:19.991732 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:19.991944 kubelet[2680]: I0908 23:53:19.991808 2680 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:19.991944 kubelet[2680]: I0908 23:53:19.991870 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:20.094325 kubelet[2680]: E0908 23:53:20.094265 2680 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:53:20.094573 kubelet[2680]: E0908 23:53:20.094544 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:20.094689 kubelet[2680]: E0908 23:53:20.094634 2680 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:20.094857 kubelet[2680]: E0908 23:53:20.094708 2680 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 8 23:53:20.094857 kubelet[2680]: E0908 23:53:20.094827 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:20.094948 kubelet[2680]: E0908 23:53:20.094874 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:20.279027 kubelet[2680]: I0908 23:53:20.278940 2680 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 8 23:53:20.279247 kubelet[2680]: I0908 23:53:20.279070 2680 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:53:20.668744 kubelet[2680]: I0908 23:53:20.668524 2680 apiserver.go:52] "Watching apiserver" Sep 8 23:53:20.689637 kubelet[2680]: I0908 23:53:20.689574 2680 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:53:20.760286 kubelet[2680]: I0908 23:53:20.760234 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:20.760473 kubelet[2680]: E0908 23:53:20.760350 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:20.760972 kubelet[2680]: E0908 23:53:20.760877 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:20.917385 kubelet[2680]: E0908 23:53:20.917286 2680 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:53:20.917722 kubelet[2680]: E0908 
23:53:20.917582 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:21.430670 sudo[2716]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 8 23:53:21.431192 sudo[2716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 8 23:53:21.762513 kubelet[2680]: E0908 23:53:21.761861 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:21.762513 kubelet[2680]: E0908 23:53:21.761861 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:21.762513 kubelet[2680]: E0908 23:53:21.762298 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:21.945534 sudo[2716]: pam_unix(sudo:session): session closed for user root Sep 8 23:53:22.534226 kubelet[2680]: I0908 23:53:22.534178 2680 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 8 23:53:22.534587 containerd[1509]: time="2025-09-08T23:53:22.534543777Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 8 23:53:22.535115 kubelet[2680]: I0908 23:53:22.534743 2680 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 8 23:53:23.369646 systemd[1]: Created slice kubepods-burstable-pod5ea01d2d_ea55_40dd_85b3_e04f768a9d6f.slice - libcontainer container kubepods-burstable-pod5ea01d2d_ea55_40dd_85b3_e04f768a9d6f.slice. Sep 8 23:53:23.377571 systemd[1]: Created slice kubepods-besteffort-podc7d7edf2_4ee6_4cdb_87eb_9665bb86ba94.slice - libcontainer container kubepods-besteffort-podc7d7edf2_4ee6_4cdb_87eb_9665bb86ba94.slice. 
Sep 8 23:53:23.411087 kubelet[2680]: I0908 23:53:23.411026 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-clustermesh-secrets\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411087 kubelet[2680]: I0908 23:53:23.411078 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hndrd\" (UniqueName: \"kubernetes.io/projected/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-kube-api-access-hndrd\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411658 kubelet[2680]: I0908 23:53:23.411118 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c7d7edf2-4ee6-4cdb-87eb-9665bb86ba94-kube-proxy\") pod \"kube-proxy-gqr2v\" (UID: \"c7d7edf2-4ee6-4cdb-87eb-9665bb86ba94\") " pod="kube-system/kube-proxy-gqr2v" Sep 8 23:53:23.411658 kubelet[2680]: I0908 23:53:23.411144 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d7edf2-4ee6-4cdb-87eb-9665bb86ba94-xtables-lock\") pod \"kube-proxy-gqr2v\" (UID: \"c7d7edf2-4ee6-4cdb-87eb-9665bb86ba94\") " pod="kube-system/kube-proxy-gqr2v" Sep 8 23:53:23.411658 kubelet[2680]: I0908 23:53:23.411164 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-etc-cni-netd\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411658 kubelet[2680]: I0908 23:53:23.411185 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-lib-modules\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411658 kubelet[2680]: I0908 23:53:23.411205 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-host-proc-sys-net\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411658 kubelet[2680]: I0908 23:53:23.411226 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-cgroup\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411915 kubelet[2680]: I0908 23:53:23.411245 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-hubble-tls\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411915 kubelet[2680]: I0908 23:53:23.411265 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-run\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411915 kubelet[2680]: I0908 23:53:23.411288 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-hostproc\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411915 kubelet[2680]: I0908 23:53:23.411308 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-xtables-lock\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411915 kubelet[2680]: I0908 23:53:23.411327 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-host-proc-sys-kernel\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.411915 kubelet[2680]: I0908 23:53:23.411350 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d7edf2-4ee6-4cdb-87eb-9665bb86ba94-lib-modules\") pod \"kube-proxy-gqr2v\" (UID: \"c7d7edf2-4ee6-4cdb-87eb-9665bb86ba94\") " pod="kube-system/kube-proxy-gqr2v" Sep 8 23:53:23.412103 kubelet[2680]: I0908 23:53:23.411371 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cni-path\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.412103 kubelet[2680]: I0908 23:53:23.411392 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-bpf-maps\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.412103 kubelet[2680]: I0908 23:53:23.411432 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-config-path\") pod \"cilium-dbqft\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " pod="kube-system/cilium-dbqft" Sep 8 23:53:23.412103 kubelet[2680]: I0908 23:53:23.411453 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d4pl\" (UniqueName: \"kubernetes.io/projected/c7d7edf2-4ee6-4cdb-87eb-9665bb86ba94-kube-api-access-4d4pl\") pod \"kube-proxy-gqr2v\" (UID: \"c7d7edf2-4ee6-4cdb-87eb-9665bb86ba94\") " pod="kube-system/kube-proxy-gqr2v" Sep 8 23:53:23.511863 kubelet[2680]: I0908 23:53:23.511817 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19fc6eef-6e12-410d-81f2-cf8c13c72547-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4ddrh\" (UID: 
\"19fc6eef-6e12-410d-81f2-cf8c13c72547\") " pod="kube-system/cilium-operator-6c4d7847fc-4ddrh" Sep 8 23:53:23.511863 kubelet[2680]: I0908 23:53:23.511862 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgvqg\" (UniqueName: \"kubernetes.io/projected/19fc6eef-6e12-410d-81f2-cf8c13c72547-kube-api-access-dgvqg\") pod \"cilium-operator-6c4d7847fc-4ddrh\" (UID: \"19fc6eef-6e12-410d-81f2-cf8c13c72547\") " pod="kube-system/cilium-operator-6c4d7847fc-4ddrh" Sep 8 23:53:23.512411 systemd[1]: Created slice kubepods-besteffort-pod19fc6eef_6e12_410d_81f2_cf8c13c72547.slice - libcontainer container kubepods-besteffort-pod19fc6eef_6e12_410d_81f2_cf8c13c72547.slice. Sep 8 23:53:23.674444 kubelet[2680]: E0908 23:53:23.673964 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:23.674834 containerd[1509]: time="2025-09-08T23:53:23.674641023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbqft,Uid:5ea01d2d-ea55-40dd-85b3-e04f768a9d6f,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:23.691665 kubelet[2680]: E0908 23:53:23.691221 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:23.692112 containerd[1509]: time="2025-09-08T23:53:23.692048768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqr2v,Uid:c7d7edf2-4ee6-4cdb-87eb-9665bb86ba94,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:23.824799 kubelet[2680]: E0908 23:53:23.824721 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:23.825428 containerd[1509]: time="2025-09-08T23:53:23.825379855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4ddrh,Uid:19fc6eef-6e12-410d-81f2-cf8c13c72547,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:24.073226 sudo[1712]: pam_unix(sudo:session): session closed for user root Sep 8 23:53:24.075274 sshd[1711]: Connection closed by 10.0.0.1 port 40390 Sep 8 23:53:24.075851 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:24.081015 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:40390.service: Deactivated successfully. Sep 8 23:53:24.084572 systemd[1]: session-9.scope: Deactivated successfully. Sep 8 23:53:24.084861 systemd[1]: session-9.scope: Consumed 5.621s CPU time, 247.6M memory peak. Sep 8 23:53:24.086302 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit. Sep 8 23:53:24.087326 systemd-logind[1496]: Removed session 9. Sep 8 23:53:24.513901 containerd[1509]: time="2025-09-08T23:53:24.513762552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:24.513901 containerd[1509]: time="2025-09-08T23:53:24.513858786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:24.513901 containerd[1509]: time="2025-09-08T23:53:24.513874074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:24.514261 containerd[1509]: time="2025-09-08T23:53:24.513962403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:24.522388 containerd[1509]: time="2025-09-08T23:53:24.522012673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:24.522388 containerd[1509]: time="2025-09-08T23:53:24.522089993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:24.522388 containerd[1509]: time="2025-09-08T23:53:24.522109247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:24.522388 containerd[1509]: time="2025-09-08T23:53:24.522210119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:24.527904 containerd[1509]: time="2025-09-08T23:53:24.527111085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:24.527904 containerd[1509]: time="2025-09-08T23:53:24.527306468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:24.527904 containerd[1509]: time="2025-09-08T23:53:24.527326173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:24.528689 containerd[1509]: time="2025-09-08T23:53:24.528279645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:24.560814 systemd[1]: Started cri-containerd-6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3.scope - libcontainer container 6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3. Sep 8 23:53:24.569421 systemd[1]: Started cri-containerd-a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c.scope - libcontainer container a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c. Sep 8 23:53:24.572078 systemd[1]: Started cri-containerd-cca2be07e7f2b46cbd8d32d85133fbc891ec4b1f28989b595578dfb4e2b07ad0.scope - libcontainer container cca2be07e7f2b46cbd8d32d85133fbc891ec4b1f28989b595578dfb4e2b07ad0. 
Sep 8 23:53:24.592234 containerd[1509]: time="2025-09-08T23:53:24.592134988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbqft,Uid:5ea01d2d-ea55-40dd-85b3-e04f768a9d6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\"" Sep 8 23:53:24.593670 kubelet[2680]: E0908 23:53:24.593589 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:24.598034 containerd[1509]: time="2025-09-08T23:53:24.597998881Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 8 23:53:24.615881 containerd[1509]: time="2025-09-08T23:53:24.615831238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqr2v,Uid:c7d7edf2-4ee6-4cdb-87eb-9665bb86ba94,Namespace:kube-system,Attempt:0,} returns sandbox id \"cca2be07e7f2b46cbd8d32d85133fbc891ec4b1f28989b595578dfb4e2b07ad0\"" Sep 8 23:53:24.618781 kubelet[2680]: E0908 23:53:24.618503 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:24.622957 containerd[1509]: time="2025-09-08T23:53:24.622906537Z" level=info msg="CreateContainer within sandbox \"cca2be07e7f2b46cbd8d32d85133fbc891ec4b1f28989b595578dfb4e2b07ad0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 8 23:53:24.624160 containerd[1509]: time="2025-09-08T23:53:24.624120499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4ddrh,Uid:19fc6eef-6e12-410d-81f2-cf8c13c72547,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c\"" Sep 8 23:53:24.624826 kubelet[2680]: E0908 23:53:24.624783 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:24.646495 containerd[1509]: time="2025-09-08T23:53:24.646430657Z" level=info msg="CreateContainer within sandbox \"cca2be07e7f2b46cbd8d32d85133fbc891ec4b1f28989b595578dfb4e2b07ad0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2cf30b967d7bb9de0528f5de1680a70b8e97185955746ee5a50b32288c4aa966\"" Sep 8 23:53:24.648641 containerd[1509]: time="2025-09-08T23:53:24.647434158Z" level=info msg="StartContainer for \"2cf30b967d7bb9de0528f5de1680a70b8e97185955746ee5a50b32288c4aa966\"" Sep 8 23:53:24.684759 systemd[1]: Started cri-containerd-2cf30b967d7bb9de0528f5de1680a70b8e97185955746ee5a50b32288c4aa966.scope - libcontainer container 2cf30b967d7bb9de0528f5de1680a70b8e97185955746ee5a50b32288c4aa966. 
Sep 8 23:53:24.721544 containerd[1509]: time="2025-09-08T23:53:24.721495562Z" level=info msg="StartContainer for \"2cf30b967d7bb9de0528f5de1680a70b8e97185955746ee5a50b32288c4aa966\" returns successfully" Sep 8 23:53:24.769490 kubelet[2680]: E0908 23:53:24.769202 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:24.779301 kubelet[2680]: I0908 23:53:24.778685 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gqr2v" podStartSLOduration=1.778665078 podStartE2EDuration="1.778665078s" podCreationTimestamp="2025-09-08 23:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:24.778530455 +0000 UTC m=+5.354329170" watchObservedRunningTime="2025-09-08 23:53:24.778665078 +0000 UTC m=+5.354463793" Sep 8 23:53:25.636597 kubelet[2680]: E0908 23:53:25.636542 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:25.771482 kubelet[2680]: E0908 23:53:25.771437 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:26.502397 kubelet[2680]: E0908 23:53:26.502336 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:26.773465 kubelet[2680]: E0908 23:53:26.773319 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:27.775731 kubelet[2680]: E0908 23:53:27.775664 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:28.148244 kubelet[2680]: E0908 23:53:28.148078 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:28.777479 kubelet[2680]: E0908 23:53:28.777375 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:29.779123 kubelet[2680]: E0908 23:53:29.779074 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:32.010271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858687554.mount: Deactivated successfully. 
Sep 8 23:53:39.420143 containerd[1509]: time="2025-09-08T23:53:39.420084468Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:39.421231 containerd[1509]: time="2025-09-08T23:53:39.421195770Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 8 23:53:39.421950 containerd[1509]: time="2025-09-08T23:53:39.421919013Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:39.423340 containerd[1509]: time="2025-09-08T23:53:39.423308943Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.825266022s" Sep 8 23:53:39.423340 containerd[1509]: time="2025-09-08T23:53:39.423345820Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 8 23:53:39.430679 containerd[1509]: time="2025-09-08T23:53:39.430634306Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 8 23:53:39.439808 containerd[1509]: time="2025-09-08T23:53:39.439768384Z" level=info msg="CreateContainer within sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:53:39.453921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138012205.mount: Deactivated successfully. Sep 8 23:53:39.455963 containerd[1509]: time="2025-09-08T23:53:39.455923399Z" level=info msg="CreateContainer within sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\"" Sep 8 23:53:39.456424 containerd[1509]: time="2025-09-08T23:53:39.456397565Z" level=info msg="StartContainer for \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\"" Sep 8 23:53:39.492775 systemd[1]: Started cri-containerd-44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35.scope - libcontainer container 44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35. Sep 8 23:53:39.521392 containerd[1509]: time="2025-09-08T23:53:39.521350792Z" level=info msg="StartContainer for \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\" returns successfully" Sep 8 23:53:39.534794 systemd[1]: cri-containerd-44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35.scope: Deactivated successfully. 
Sep 8 23:53:39.798222 kubelet[2680]: E0908 23:53:39.798153 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:39.834948 containerd[1509]: time="2025-09-08T23:53:39.834845291Z" level=info msg="shim disconnected" id=44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35 namespace=k8s.io Sep 8 23:53:39.834948 containerd[1509]: time="2025-09-08T23:53:39.834926870Z" level=warning msg="cleaning up after shim disconnected" id=44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35 namespace=k8s.io Sep 8 23:53:39.834948 containerd[1509]: time="2025-09-08T23:53:39.834938782Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:53:40.450909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35-rootfs.mount: Deactivated successfully. Sep 8 23:53:40.692866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3165585573.mount: Deactivated successfully. Sep 8 23:53:40.809430 kubelet[2680]: E0908 23:53:40.808662 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:40.810940 containerd[1509]: time="2025-09-08T23:53:40.810864367Z" level=info msg="CreateContainer within sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:53:40.829284 containerd[1509]: time="2025-09-08T23:53:40.829130764Z" level=info msg="CreateContainer within sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\"" Sep 8 23:53:40.829848 containerd[1509]: time="2025-09-08T23:53:40.829817800Z" level=info msg="StartContainer for \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\"" Sep 8 23:53:40.862786 systemd[1]: Started cri-containerd-ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20.scope - libcontainer container ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20. Sep 8 23:53:40.910121 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:53:40.911046 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:53:40.911282 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:53:40.917935 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:53:40.918271 systemd[1]: cri-containerd-ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20.scope: Deactivated successfully. Sep 8 23:53:40.958171 containerd[1509]: time="2025-09-08T23:53:40.958035681Z" level=info msg="StartContainer for \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\" returns successfully" Sep 8 23:53:40.985705 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 8 23:53:41.008947 containerd[1509]: time="2025-09-08T23:53:41.008865657Z" level=info msg="shim disconnected" id=ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20 namespace=k8s.io Sep 8 23:53:41.008947 containerd[1509]: time="2025-09-08T23:53:41.008925086Z" level=warning msg="cleaning up after shim disconnected" id=ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20 namespace=k8s.io Sep 8 23:53:41.008947 containerd[1509]: time="2025-09-08T23:53:41.008936257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:53:41.242032 containerd[1509]: time="2025-09-08T23:53:41.241979001Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:41.242760 containerd[1509]: time="2025-09-08T23:53:41.242723073Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 8 23:53:41.243840 containerd[1509]: time="2025-09-08T23:53:41.243818138Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:41.245320 containerd[1509]: time="2025-09-08T23:53:41.245289822Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.814618369s" Sep 8 23:53:41.245373 containerd[1509]: time="2025-09-08T23:53:41.245321359Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 8 23:53:41.262729 containerd[1509]: time="2025-09-08T23:53:41.262694171Z" level=info msg="CreateContainer within sandbox \"a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 8 23:53:41.277342 containerd[1509]: time="2025-09-08T23:53:41.277293055Z" level=info msg="CreateContainer within sandbox \"a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\"" Sep 8 23:53:41.278583 containerd[1509]: time="2025-09-08T23:53:41.277737709Z" level=info msg="StartContainer for \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\"" Sep 8 23:53:41.310779 systemd[1]: Started cri-containerd-520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce.scope - libcontainer container 520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce. 
Sep 8 23:53:41.343230 containerd[1509]: time="2025-09-08T23:53:41.343184271Z" level=info msg="StartContainer for \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\" returns successfully" Sep 8 23:53:41.809438 kubelet[2680]: E0908 23:53:41.809093 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:41.809438 kubelet[2680]: E0908 23:53:41.809236 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:41.810922 containerd[1509]: time="2025-09-08T23:53:41.810885067Z" level=info msg="CreateContainer within sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:53:42.007661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1908330424.mount: Deactivated successfully. Sep 8 23:53:42.098377 kubelet[2680]: I0908 23:53:42.098200 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4ddrh" podStartSLOduration=2.477429356 podStartE2EDuration="19.098156748s" podCreationTimestamp="2025-09-08 23:53:23 +0000 UTC" firstStartedPulling="2025-09-08 23:53:24.625231995 +0000 UTC m=+5.201030720" lastFinishedPulling="2025-09-08 23:53:41.245959397 +0000 UTC m=+21.821758112" observedRunningTime="2025-09-08 23:53:41.820467227 +0000 UTC m=+22.396265942" watchObservedRunningTime="2025-09-08 23:53:42.098156748 +0000 UTC m=+22.673955464" Sep 8 23:53:42.103541 containerd[1509]: time="2025-09-08T23:53:42.103446978Z" level=info msg="CreateContainer within sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\"" Sep 8 23:53:42.104074 containerd[1509]: time="2025-09-08T23:53:42.104038311Z" level=info msg="StartContainer for \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\"" Sep 8 23:53:42.139782 systemd[1]: Started cri-containerd-45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac.scope - libcontainer container 45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac. Sep 8 23:53:42.176329 systemd[1]: cri-containerd-45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac.scope: Deactivated successfully. Sep 8 23:53:42.194236 containerd[1509]: time="2025-09-08T23:53:42.194165324Z" level=info msg="StartContainer for \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\" returns successfully" Sep 8 23:53:42.451229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac-rootfs.mount: Deactivated successfully. 
Sep 8 23:53:42.496513 containerd[1509]: time="2025-09-08T23:53:42.496417236Z" level=info msg="shim disconnected" id=45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac namespace=k8s.io Sep 8 23:53:42.496513 containerd[1509]: time="2025-09-08T23:53:42.496498285Z" level=warning msg="cleaning up after shim disconnected" id=45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac namespace=k8s.io Sep 8 23:53:42.496513 containerd[1509]: time="2025-09-08T23:53:42.496511408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:53:42.814008 kubelet[2680]: E0908 23:53:42.813925 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:42.814008 kubelet[2680]: E0908 23:53:42.813990 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:42.816232 containerd[1509]: time="2025-09-08T23:53:42.816060344Z" level=info msg="CreateContainer within sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:53:42.838561 containerd[1509]: time="2025-09-08T23:53:42.838506497Z" level=info msg="CreateContainer within sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\"" Sep 8 23:53:42.838992 containerd[1509]: time="2025-09-08T23:53:42.838968874Z" level=info msg="StartContainer for \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\"" Sep 8 23:53:42.884086 systemd[1]: Started cri-containerd-264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b.scope - libcontainer container 264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b. Sep 8 23:53:42.918703 systemd[1]: cri-containerd-264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b.scope: Deactivated successfully. Sep 8 23:53:42.921966 containerd[1509]: time="2025-09-08T23:53:42.921925332Z" level=info msg="StartContainer for \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\" returns successfully" Sep 8 23:53:42.956770 containerd[1509]: time="2025-09-08T23:53:42.956696351Z" level=info msg="shim disconnected" id=264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b namespace=k8s.io Sep 8 23:53:42.956770 containerd[1509]: time="2025-09-08T23:53:42.956759487Z" level=warning msg="cleaning up after shim disconnected" id=264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b namespace=k8s.io Sep 8 23:53:42.956770 containerd[1509]: time="2025-09-08T23:53:42.956772301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:53:43.453477 systemd[1]: run-containerd-runc-k8s.io-264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b-runc.r84oB3.mount: Deactivated successfully. Sep 8 23:53:43.453641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b-rootfs.mount: Deactivated successfully. 
Sep 8 23:53:43.818003 kubelet[2680]: E0908 23:53:43.817751 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:43.820397 containerd[1509]: time="2025-09-08T23:53:43.820339050Z" level=info msg="CreateContainer within sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:53:43.841106 containerd[1509]: time="2025-09-08T23:53:43.841060837Z" level=info msg="CreateContainer within sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\"" Sep 8 23:53:43.841757 containerd[1509]: time="2025-09-08T23:53:43.841721037Z" level=info msg="StartContainer for \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\"" Sep 8 23:53:43.879922 systemd[1]: Started cri-containerd-e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4.scope - libcontainer container e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4. Sep 8 23:53:43.911976 containerd[1509]: time="2025-09-08T23:53:43.911915185Z" level=info msg="StartContainer for \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\" returns successfully" Sep 8 23:53:44.116980 kubelet[2680]: I0908 23:53:44.116838 2680 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 8 23:53:44.156506 systemd[1]: Created slice kubepods-burstable-pode0cd82a8_25d1_49d3_a78b_a6425bc6a405.slice - libcontainer container kubepods-burstable-pode0cd82a8_25d1_49d3_a78b_a6425bc6a405.slice. Sep 8 23:53:44.166181 systemd[1]: Created slice kubepods-burstable-pode8104ac8_ef8b_4def_9d2f_4e97cf6f2204.slice - libcontainer container kubepods-burstable-pode8104ac8_ef8b_4def_9d2f_4e97cf6f2204.slice. 
Sep 8 23:53:44.253100 kubelet[2680]: I0908 23:53:44.253054 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0cd82a8-25d1-49d3-a78b-a6425bc6a405-config-volume\") pod \"coredns-668d6bf9bc-fsdc6\" (UID: \"e0cd82a8-25d1-49d3-a78b-a6425bc6a405\") " pod="kube-system/coredns-668d6bf9bc-fsdc6" Sep 8 23:53:44.253100 kubelet[2680]: I0908 23:53:44.253098 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb5tv\" (UniqueName: \"kubernetes.io/projected/e0cd82a8-25d1-49d3-a78b-a6425bc6a405-kube-api-access-fb5tv\") pod \"coredns-668d6bf9bc-fsdc6\" (UID: \"e0cd82a8-25d1-49d3-a78b-a6425bc6a405\") " pod="kube-system/coredns-668d6bf9bc-fsdc6" Sep 8 23:53:44.253275 kubelet[2680]: I0908 23:53:44.253119 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8104ac8-ef8b-4def-9d2f-4e97cf6f2204-config-volume\") pod \"coredns-668d6bf9bc-25mbz\" (UID: \"e8104ac8-ef8b-4def-9d2f-4e97cf6f2204\") " pod="kube-system/coredns-668d6bf9bc-25mbz" Sep 8 23:53:44.253275 kubelet[2680]: I0908 23:53:44.253136 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvhtn\" (UniqueName: \"kubernetes.io/projected/e8104ac8-ef8b-4def-9d2f-4e97cf6f2204-kube-api-access-mvhtn\") pod \"coredns-668d6bf9bc-25mbz\" (UID: \"e8104ac8-ef8b-4def-9d2f-4e97cf6f2204\") " pod="kube-system/coredns-668d6bf9bc-25mbz" Sep 8 23:53:44.469228 kubelet[2680]: E0908 23:53:44.468741 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:44.469228 kubelet[2680]: E0908 23:53:44.468741 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:44.474972 containerd[1509]: time="2025-09-08T23:53:44.470201053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fsdc6,Uid:e0cd82a8-25d1-49d3-a78b-a6425bc6a405,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:44.475593 containerd[1509]: time="2025-09-08T23:53:44.475533933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-25mbz,Uid:e8104ac8-ef8b-4def-9d2f-4e97cf6f2204,Namespace:kube-system,Attempt:0,}" Sep 8 23:53:44.828647 kubelet[2680]: E0908 23:53:44.827761 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:44.903377 kubelet[2680]: I0908 23:53:44.903208 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dbqft" podStartSLOduration=7.0699929 podStartE2EDuration="21.903177522s" podCreationTimestamp="2025-09-08 23:53:23 +0000 UTC" firstStartedPulling="2025-09-08 23:53:24.597278631 +0000 UTC m=+5.173077346" lastFinishedPulling="2025-09-08 23:53:39.430463253 +0000 UTC m=+20.006261968" observedRunningTime="2025-09-08 23:53:44.902783591 +0000 UTC m=+25.478582316" watchObservedRunningTime="2025-09-08 23:53:44.903177522 +0000 UTC m=+25.478976237" Sep 8 23:53:45.835557 kubelet[2680]: E0908 23:53:45.833511 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:46.838145 kubelet[2680]: E0908 23:53:46.838068 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:46.851321 systemd-networkd[1432]: cilium_host: Link UP Sep 8 23:53:46.851582 systemd-networkd[1432]: cilium_net: Link UP Sep 8 23:53:46.851844 systemd-networkd[1432]: cilium_net: Gained carrier Sep 8 23:53:46.852033 systemd-networkd[1432]: cilium_host: Gained carrier Sep 8 23:53:46.852197 systemd-networkd[1432]: cilium_net: Gained IPv6LL Sep 8 23:53:46.852516 systemd-networkd[1432]: cilium_host: Gained IPv6LL Sep 8 23:53:47.011327 systemd-networkd[1432]: cilium_vxlan: Link UP Sep 8 23:53:47.011339 systemd-networkd[1432]: cilium_vxlan: Gained carrier Sep 8 23:53:47.253648 kernel: NET: Registered PF_ALG protocol family Sep 8 23:53:47.945088 systemd-networkd[1432]: lxc_health: Link UP Sep 8 23:53:47.951589 systemd-networkd[1432]: lxc_health: Gained carrier Sep 8 23:53:48.171644 kernel: eth0: renamed from tmp615ea Sep 8 23:53:48.180678 systemd-networkd[1432]: lxc8b2b42d930bc: Link UP Sep 8 23:53:48.184014 systemd-networkd[1432]: lxc8b2b42d930bc: Gained carrier Sep 8 23:53:48.261699 kernel: eth0: renamed from tmp6bb09 Sep 8 23:53:48.272892 systemd-networkd[1432]: lxc2d71089a534a: Link UP Sep 8 23:53:48.273243 systemd-networkd[1432]: cilium_vxlan: Gained IPv6LL Sep 8 23:53:48.275818 systemd-networkd[1432]: lxc2d71089a534a: Gained carrier Sep 8 23:53:49.254846 systemd-networkd[1432]: lxc_health: Gained IPv6LL Sep 8 23:53:49.382795 systemd-networkd[1432]: lxc2d71089a534a: Gained IPv6LL Sep 8 23:53:49.676847 kubelet[2680]: E0908 23:53:49.676073 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:49.856125 kubelet[2680]: E0908 23:53:49.856022 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:49.897429 systemd-networkd[1432]: lxc8b2b42d930bc: Gained IPv6LL Sep 8 23:53:50.854043 kubelet[2680]: E0908 23:53:50.853762 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:52.804564 containerd[1509]: time="2025-09-08T23:53:52.804411861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:52.804564 containerd[1509]: time="2025-09-08T23:53:52.804504421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:52.804564 containerd[1509]: time="2025-09-08T23:53:52.804519849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:52.805959 containerd[1509]: time="2025-09-08T23:53:52.805590210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:52.822690 containerd[1509]: time="2025-09-08T23:53:52.817352205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:53:52.822690 containerd[1509]: time="2025-09-08T23:53:52.817438304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:53:52.822690 containerd[1509]: time="2025-09-08T23:53:52.817461025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:52.822690 containerd[1509]: time="2025-09-08T23:53:52.818196520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:53:52.828722 systemd[1]: run-containerd-runc-k8s.io-6bb09d04757bd25c322c344562790867470abe08bbb5caa14d0c792898a47798-runc.Wclqqk.mount: Deactivated successfully. Sep 8 23:53:52.863928 systemd[1]: Started cri-containerd-615eac11ee413695a882f1bb1cbc26b07e1f61ea254b553370b77f0a33fb47f6.scope - libcontainer container 615eac11ee413695a882f1bb1cbc26b07e1f61ea254b553370b77f0a33fb47f6. Sep 8 23:53:52.867042 systemd[1]: Started cri-containerd-6bb09d04757bd25c322c344562790867470abe08bbb5caa14d0c792898a47798.scope - libcontainer container 6bb09d04757bd25c322c344562790867470abe08bbb5caa14d0c792898a47798. Sep 8 23:53:52.870009 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:55208.service - OpenSSH per-connection server daemon (10.0.0.1:55208). Sep 8 23:53:52.884569 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:53:52.886529 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:53:52.921127 containerd[1509]: time="2025-09-08T23:53:52.921040706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fsdc6,Uid:e0cd82a8-25d1-49d3-a78b-a6425bc6a405,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bb09d04757bd25c322c344562790867470abe08bbb5caa14d0c792898a47798\"" Sep 8 23:53:52.921750 containerd[1509]: time="2025-09-08T23:53:52.921711401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-25mbz,Uid:e8104ac8-ef8b-4def-9d2f-4e97cf6f2204,Namespace:kube-system,Attempt:0,} returns sandbox id \"615eac11ee413695a882f1bb1cbc26b07e1f61ea254b553370b77f0a33fb47f6\"" Sep 8 23:53:52.922941 kubelet[2680]: E0908 23:53:52.922587 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:52.923329 kubelet[2680]: E0908 23:53:52.922854 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:52.926782 containerd[1509]: time="2025-09-08T23:53:52.926731012Z" level=info msg="CreateContainer within sandbox \"6bb09d04757bd25c322c344562790867470abe08bbb5caa14d0c792898a47798\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:53:52.926905 containerd[1509]: time="2025-09-08T23:53:52.926741932Z" level=info msg="CreateContainer within sandbox \"615eac11ee413695a882f1bb1cbc26b07e1f61ea254b553370b77f0a33fb47f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:53:52.931249 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 55208 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:53:52.933580 sshd-session[3960]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:52.938799 systemd-logind[1496]: New session 10 of user core. Sep 8 23:53:52.947800 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 8 23:53:53.135596 sshd[3987]: Connection closed by 10.0.0.1 port 55208 Sep 8 23:53:53.136865 sshd-session[3960]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:53.140845 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:55208.service: Deactivated successfully. Sep 8 23:53:53.143007 systemd[1]: session-10.scope: Deactivated successfully. Sep 8 23:53:53.143673 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. Sep 8 23:53:53.144665 systemd-logind[1496]: Removed session 10. Sep 8 23:53:53.278224 containerd[1509]: time="2025-09-08T23:53:53.278152952Z" level=info msg="CreateContainer within sandbox \"6bb09d04757bd25c322c344562790867470abe08bbb5caa14d0c792898a47798\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df6bc87d2e67fdeafa8bfa513a88ba124d4053f0f3f72957fe0d62d45fa0f3b4\"" Sep 8 23:53:53.279168 containerd[1509]: time="2025-09-08T23:53:53.278891262Z" level=info msg="StartContainer for \"df6bc87d2e67fdeafa8bfa513a88ba124d4053f0f3f72957fe0d62d45fa0f3b4\"" Sep 8 23:53:53.281281 containerd[1509]: time="2025-09-08T23:53:53.281223979Z" level=info msg="CreateContainer within sandbox \"615eac11ee413695a882f1bb1cbc26b07e1f61ea254b553370b77f0a33fb47f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5791837d6a60b9b8f47638ea28a2d2557e9bdfea008a7d897d62747eb01573c\"" Sep 8 23:53:53.281873 containerd[1509]: time="2025-09-08T23:53:53.281841226Z" level=info msg="StartContainer for \"e5791837d6a60b9b8f47638ea28a2d2557e9bdfea008a7d897d62747eb01573c\"" Sep 8 23:53:53.316852 systemd[1]: Started cri-containerd-df6bc87d2e67fdeafa8bfa513a88ba124d4053f0f3f72957fe0d62d45fa0f3b4.scope - libcontainer container df6bc87d2e67fdeafa8bfa513a88ba124d4053f0f3f72957fe0d62d45fa0f3b4. Sep 8 23:53:53.320822 systemd[1]: Started cri-containerd-e5791837d6a60b9b8f47638ea28a2d2557e9bdfea008a7d897d62747eb01573c.scope - libcontainer container e5791837d6a60b9b8f47638ea28a2d2557e9bdfea008a7d897d62747eb01573c. Sep 8 23:53:53.354486 containerd[1509]: time="2025-09-08T23:53:53.354443662Z" level=info msg="StartContainer for \"df6bc87d2e67fdeafa8bfa513a88ba124d4053f0f3f72957fe0d62d45fa0f3b4\" returns successfully" Sep 8 23:53:53.361045 containerd[1509]: time="2025-09-08T23:53:53.361002372Z" level=info msg="StartContainer for \"e5791837d6a60b9b8f47638ea28a2d2557e9bdfea008a7d897d62747eb01573c\" returns successfully" Sep 8 23:53:53.813039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989370726.mount: Deactivated successfully. 
Sep 8 23:53:53.864172 kubelet[2680]: E0908 23:53:53.863112 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:53.868351 kubelet[2680]: E0908 23:53:53.868312 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:53.881227 kubelet[2680]: I0908 23:53:53.881069 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-25mbz" podStartSLOduration=30.880988738 podStartE2EDuration="30.880988738s" podCreationTimestamp="2025-09-08 23:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:53.88050686 +0000 UTC m=+34.456305595" watchObservedRunningTime="2025-09-08 23:53:53.880988738 +0000 UTC m=+34.456787463" Sep 8 23:53:53.911923 kubelet[2680]: I0908 23:53:53.910550 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fsdc6" podStartSLOduration=30.910526874 podStartE2EDuration="30.910526874s" podCreationTimestamp="2025-09-08 23:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:53:53.896846286 +0000 UTC m=+34.472645011" watchObservedRunningTime="2025-09-08 23:53:53.910526874 +0000 UTC m=+34.486325589" Sep 8 23:53:54.868574 kubelet[2680]: E0908 23:53:54.868532 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:54.869100 kubelet[2680]: E0908 23:53:54.868662 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:55.870453 kubelet[2680]: E0908 23:53:55.870416 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:53:58.150241 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:55224.service - OpenSSH per-connection server daemon (10.0.0.1:55224). Sep 8 23:53:58.198397 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 55224 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:53:58.200273 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:58.205175 systemd-logind[1496]: New session 11 of user core. Sep 8 23:53:58.212761 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 8 23:53:58.356902 sshd[4090]: Connection closed by 10.0.0.1 port 55224 Sep 8 23:53:58.357296 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:58.361504 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:55224.service: Deactivated successfully. Sep 8 23:53:58.364359 systemd[1]: session-11.scope: Deactivated successfully. Sep 8 23:53:58.365466 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. Sep 8 23:53:58.366524 systemd-logind[1496]: Removed session 11. Sep 8 23:54:03.371887 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:39784.service - OpenSSH per-connection server daemon (10.0.0.1:39784). 
Sep 8 23:54:03.415535 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 39784 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:03.417830 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:03.422365 systemd-logind[1496]: New session 12 of user core. Sep 8 23:54:03.431759 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 8 23:54:03.550902 sshd[4108]: Connection closed by 10.0.0.1 port 39784 Sep 8 23:54:03.551369 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:03.556044 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:39784.service: Deactivated successfully. Sep 8 23:54:03.559245 systemd[1]: session-12.scope: Deactivated successfully. Sep 8 23:54:03.560182 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Sep 8 23:54:03.561341 systemd-logind[1496]: Removed session 12. Sep 8 23:54:08.563832 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:39794.service - OpenSSH per-connection server daemon (10.0.0.1:39794). Sep 8 23:54:08.606373 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 39794 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:08.646758 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:08.651167 systemd-logind[1496]: New session 13 of user core. Sep 8 23:54:08.666744 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 8 23:54:09.011970 sshd[4125]: Connection closed by 10.0.0.1 port 39794 Sep 8 23:54:09.012404 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:09.018002 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:39794.service: Deactivated successfully. Sep 8 23:54:09.020756 systemd[1]: session-13.scope: Deactivated successfully. Sep 8 23:54:09.021448 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. Sep 8 23:54:09.022381 systemd-logind[1496]: Removed session 13. Sep 8 23:54:14.026996 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:43734.service - OpenSSH per-connection server daemon (10.0.0.1:43734). Sep 8 23:54:14.079165 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 43734 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:14.081433 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:14.086820 systemd-logind[1496]: New session 14 of user core. Sep 8 23:54:14.092808 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 8 23:54:14.230411 sshd[4142]: Connection closed by 10.0.0.1 port 43734 Sep 8 23:54:14.230894 sshd-session[4140]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:14.236021 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:43734.service: Deactivated successfully. Sep 8 23:54:14.238751 systemd[1]: session-14.scope: Deactivated successfully. Sep 8 23:54:14.239479 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit. Sep 8 23:54:14.240637 systemd-logind[1496]: Removed session 14. Sep 8 23:54:19.263954 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:43742.service - OpenSSH per-connection server daemon (10.0.0.1:43742). 
Sep 8 23:54:19.306839 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 43742 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:19.309061 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:19.314538 systemd-logind[1496]: New session 15 of user core. Sep 8 23:54:19.324762 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 8 23:54:19.461100 sshd[4159]: Connection closed by 10.0.0.1 port 43742 Sep 8 23:54:19.461538 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:19.475908 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:43742.service: Deactivated successfully. Sep 8 23:54:19.478255 systemd[1]: session-15.scope: Deactivated successfully. Sep 8 23:54:19.480358 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit. Sep 8 23:54:19.491130 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:43756.service - OpenSSH per-connection server daemon (10.0.0.1:43756). Sep 8 23:54:19.492455 systemd-logind[1496]: Removed session 15. Sep 8 23:54:19.534745 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 43756 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:19.536553 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:19.541643 systemd-logind[1496]: New session 16 of user core. Sep 8 23:54:19.549753 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 8 23:54:19.723711 sshd[4175]: Connection closed by 10.0.0.1 port 43756 Sep 8 23:54:19.724255 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:19.740488 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:43756.service: Deactivated successfully. Sep 8 23:54:19.745043 systemd[1]: session-16.scope: Deactivated successfully. Sep 8 23:54:19.747779 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit. Sep 8 23:54:19.761179 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:43766.service - OpenSSH per-connection server daemon (10.0.0.1:43766). Sep 8 23:54:19.763078 systemd-logind[1496]: Removed session 16. Sep 8 23:54:19.805944 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 43766 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:19.807709 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:19.813026 systemd-logind[1496]: New session 17 of user core. Sep 8 23:54:19.823051 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 8 23:54:19.943188 sshd[4191]: Connection closed by 10.0.0.1 port 43766 Sep 8 23:54:19.943645 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:19.948152 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:43766.service: Deactivated successfully. Sep 8 23:54:19.951401 systemd[1]: session-17.scope: Deactivated successfully. Sep 8 23:54:19.952324 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit. Sep 8 23:54:19.953271 systemd-logind[1496]: Removed session 17. Sep 8 23:54:24.959906 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:34250.service - OpenSSH per-connection server daemon (10.0.0.1:34250). 
Sep 8 23:54:25.001458 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 34250 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:25.003083 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:25.007252 systemd-logind[1496]: New session 18 of user core. Sep 8 23:54:25.024778 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 8 23:54:25.134861 sshd[4208]: Connection closed by 10.0.0.1 port 34250 Sep 8 23:54:25.135269 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:25.139380 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:34250.service: Deactivated successfully. Sep 8 23:54:25.141767 systemd[1]: session-18.scope: Deactivated successfully. Sep 8 23:54:25.142583 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit. Sep 8 23:54:25.143627 systemd-logind[1496]: Removed session 18. Sep 8 23:54:27.740076 kubelet[2680]: E0908 23:54:27.733766 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:54:28.732236 kubelet[2680]: E0908 23:54:28.732176 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:54:30.148301 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:56086.service - OpenSSH per-connection server daemon (10.0.0.1:56086). Sep 8 23:54:30.213406 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 56086 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:30.215174 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:30.220617 systemd-logind[1496]: New session 19 of user core. Sep 8 23:54:30.230781 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 8 23:54:30.373666 sshd[4224]: Connection closed by 10.0.0.1 port 56086 Sep 8 23:54:30.374133 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:30.378845 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:56086.service: Deactivated successfully. Sep 8 23:54:30.381116 systemd[1]: session-19.scope: Deactivated successfully. Sep 8 23:54:30.381899 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit. Sep 8 23:54:30.383534 systemd-logind[1496]: Removed session 19. Sep 8 23:54:33.732002 kubelet[2680]: E0908 23:54:33.731955 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:54:35.389921 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:56102.service - OpenSSH per-connection server daemon (10.0.0.1:56102). Sep 8 23:54:35.438987 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 56102 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:35.440831 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:35.445570 systemd-logind[1496]: New session 20 of user core. Sep 8 23:54:35.452765 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 8 23:54:35.566544 sshd[4240]: Connection closed by 10.0.0.1 port 56102 Sep 8 23:54:35.567043 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:35.580518 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:56102.service: Deactivated successfully. Sep 8 23:54:35.582503 systemd[1]: session-20.scope: Deactivated successfully. Sep 8 23:54:35.584365 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit. Sep 8 23:54:35.595948 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:56116.service - OpenSSH per-connection server daemon (10.0.0.1:56116). Sep 8 23:54:35.596932 systemd-logind[1496]: Removed session 20. Sep 8 23:54:35.633587 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 56116 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:35.635496 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:35.640308 systemd-logind[1496]: New session 21 of user core. Sep 8 23:54:35.649784 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 8 23:54:36.567053 sshd[4256]: Connection closed by 10.0.0.1 port 56116 Sep 8 23:54:36.567427 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:36.584017 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:56116.service: Deactivated successfully. Sep 8 23:54:36.586428 systemd[1]: session-21.scope: Deactivated successfully. Sep 8 23:54:36.588259 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit. Sep 8 23:54:36.593881 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:56118.service - OpenSSH per-connection server daemon (10.0.0.1:56118). Sep 8 23:54:36.594896 systemd-logind[1496]: Removed session 21. Sep 8 23:54:36.637102 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 56118 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:36.638627 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:36.643506 systemd-logind[1496]: New session 22 of user core. Sep 8 23:54:36.650736 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 8 23:54:37.340013 sshd[4269]: Connection closed by 10.0.0.1 port 56118 Sep 8 23:54:37.340777 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:37.357171 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:56118.service: Deactivated successfully. Sep 8 23:54:37.360819 systemd[1]: session-22.scope: Deactivated successfully. Sep 8 23:54:37.363502 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit. Sep 8 23:54:37.379972 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:56124.service - OpenSSH per-connection server daemon (10.0.0.1:56124). Sep 8 23:54:37.380835 systemd-logind[1496]: Removed session 22. Sep 8 23:54:37.422369 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 56124 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:37.424435 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:37.432363 systemd-logind[1496]: New session 23 of user core. Sep 8 23:54:37.440753 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 8 23:54:37.782232 sshd[4289]: Connection closed by 10.0.0.1 port 56124 Sep 8 23:54:37.785105 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:37.805834 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:56124.service: Deactivated successfully. 
Sep 8 23:54:37.809639 systemd[1]: session-23.scope: Deactivated successfully. Sep 8 23:54:37.810987 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit. Sep 8 23:54:37.824167 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:56130.service - OpenSSH per-connection server daemon (10.0.0.1:56130). Sep 8 23:54:37.826366 systemd-logind[1496]: Removed session 23. Sep 8 23:54:37.875291 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 56130 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:37.878135 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:37.887779 systemd-logind[1496]: New session 24 of user core. Sep 8 23:54:37.900974 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 8 23:54:38.017665 sshd[4303]: Connection closed by 10.0.0.1 port 56130 Sep 8 23:54:38.018200 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:38.023151 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:56130.service: Deactivated successfully. Sep 8 23:54:38.025840 systemd[1]: session-24.scope: Deactivated successfully. Sep 8 23:54:38.026689 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit. Sep 8 23:54:38.027977 systemd-logind[1496]: Removed session 24. Sep 8 23:54:40.732688 kubelet[2680]: E0908 23:54:40.732593 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:54:43.063740 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:46660.service - OpenSSH per-connection server daemon (10.0.0.1:46660). Sep 8 23:54:43.130680 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 46660 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:43.138233 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:43.152742 systemd-logind[1496]: New session 25 of user core. Sep 8 23:54:43.166515 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 8 23:54:43.399340 sshd[4319]: Connection closed by 10.0.0.1 port 46660 Sep 8 23:54:43.400421 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:43.407850 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:46660.service: Deactivated successfully. Sep 8 23:54:43.419240 systemd[1]: session-25.scope: Deactivated successfully. Sep 8 23:54:43.424753 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit. Sep 8 23:54:43.432384 systemd-logind[1496]: Removed session 25. Sep 8 23:54:48.413431 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:46670.service - OpenSSH per-connection server daemon (10.0.0.1:46670). Sep 8 23:54:48.475918 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 46670 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:48.478147 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:48.483345 systemd-logind[1496]: New session 26 of user core. Sep 8 23:54:48.492779 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 8 23:54:48.613993 sshd[4336]: Connection closed by 10.0.0.1 port 46670 Sep 8 23:54:48.614404 sshd-session[4334]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:48.619186 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:46670.service: Deactivated successfully. 
Sep 8 23:54:48.621490 systemd[1]: session-26.scope: Deactivated successfully. Sep 8 23:54:48.622265 systemd-logind[1496]: Session 26 logged out. Waiting for processes to exit. Sep 8 23:54:48.623335 systemd-logind[1496]: Removed session 26. Sep 8 23:54:53.631822 systemd[1]: Started sshd@26-10.0.0.55:22-10.0.0.1:51988.service - OpenSSH per-connection server daemon (10.0.0.1:51988). Sep 8 23:54:53.675437 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 51988 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:53.677295 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:53.681823 systemd-logind[1496]: New session 27 of user core. Sep 8 23:54:53.693747 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 8 23:54:53.810315 sshd[4351]: Connection closed by 10.0.0.1 port 51988 Sep 8 23:54:53.810780 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:53.815013 systemd[1]: sshd@26-10.0.0.55:22-10.0.0.1:51988.service: Deactivated successfully. Sep 8 23:54:53.817647 systemd[1]: session-27.scope: Deactivated successfully. Sep 8 23:54:53.818506 systemd-logind[1496]: Session 27 logged out. Waiting for processes to exit. Sep 8 23:54:53.819592 systemd-logind[1496]: Removed session 27. Sep 8 23:54:58.732413 kubelet[2680]: E0908 23:54:58.732356 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:54:58.824169 systemd[1]: Started sshd@27-10.0.0.55:22-10.0.0.1:52000.service - OpenSSH per-connection server daemon (10.0.0.1:52000). Sep 8 23:54:58.868466 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 52000 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:58.870035 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:58.874877 systemd-logind[1496]: New session 28 of user core. Sep 8 23:54:58.883043 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 8 23:54:59.006754 sshd[4370]: Connection closed by 10.0.0.1 port 52000 Sep 8 23:54:59.007854 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:59.019091 systemd[1]: sshd@27-10.0.0.55:22-10.0.0.1:52000.service: Deactivated successfully. Sep 8 23:54:59.021569 systemd[1]: session-28.scope: Deactivated successfully. Sep 8 23:54:59.022520 systemd-logind[1496]: Session 28 logged out. Waiting for processes to exit. Sep 8 23:54:59.035040 systemd[1]: Started sshd@28-10.0.0.55:22-10.0.0.1:52014.service - OpenSSH per-connection server daemon (10.0.0.1:52014). Sep 8 23:54:59.036007 systemd-logind[1496]: Removed session 28. Sep 8 23:54:59.072891 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 52014 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:54:59.074906 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:59.080977 systemd-logind[1496]: New session 29 of user core. Sep 8 23:54:59.091033 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 8 23:55:00.453117 containerd[1509]: time="2025-09-08T23:55:00.453038703Z" level=info msg="StopContainer for \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\" with timeout 30 (s)" Sep 8 23:55:00.467376 containerd[1509]: time="2025-09-08T23:55:00.467215521Z" level=info msg="Stop container \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\" with signal terminated" Sep 8 23:55:00.493076 systemd[1]: run-containerd-runc-k8s.io-e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4-runc.t7JajY.mount: Deactivated successfully. Sep 8 23:55:00.495983 systemd[1]: cri-containerd-520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce.scope: Deactivated successfully. Sep 8 23:55:00.523539 containerd[1509]: time="2025-09-08T23:55:00.523466090Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:55:00.527039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce-rootfs.mount: Deactivated successfully. Sep 8 23:55:00.528962 containerd[1509]: time="2025-09-08T23:55:00.528916276Z" level=info msg="StopContainer for \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\" with timeout 2 (s)" Sep 8 23:55:00.529274 containerd[1509]: time="2025-09-08T23:55:00.529214412Z" level=info msg="Stop container \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\" with signal terminated" Sep 8 23:55:00.536919 containerd[1509]: time="2025-09-08T23:55:00.536839431Z" level=info msg="shim disconnected" id=520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce namespace=k8s.io Sep 8 23:55:00.536919 containerd[1509]: time="2025-09-08T23:55:00.536911646Z" level=warning msg="cleaning up after shim disconnected" id=520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce namespace=k8s.io Sep 8 23:55:00.536919 containerd[1509]: time="2025-09-08T23:55:00.536925381Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:00.539686 systemd-networkd[1432]: lxc_health: Link DOWN Sep 8 23:55:00.539694 systemd-networkd[1432]: lxc_health: Lost carrier Sep 8 23:55:00.555845 systemd[1]: cri-containerd-e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4.scope: Deactivated successfully. Sep 8 23:55:00.556384 systemd[1]: cri-containerd-e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4.scope: Consumed 8.338s CPU time, 127.2M memory peak, 416K read from disk, 13.3M written to disk. Sep 8 23:55:00.563971 containerd[1509]: time="2025-09-08T23:55:00.563912910Z" level=info msg="StopContainer for \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\" returns successfully" Sep 8 23:55:00.572334 containerd[1509]: time="2025-09-08T23:55:00.572265255Z" level=info msg="StopPodSandbox for \"a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c\"" Sep 8 23:55:00.587948 containerd[1509]: time="2025-09-08T23:55:00.572336798Z" level=info msg="Container to stop \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:00.593117 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c-shm.mount: Deactivated successfully. 
Sep 8 23:55:00.595878 systemd[1]: cri-containerd-a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c.scope: Deactivated successfully. Sep 8 23:55:00.600967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4-rootfs.mount: Deactivated successfully. Sep 8 23:55:00.614536 containerd[1509]: time="2025-09-08T23:55:00.614385674Z" level=info msg="shim disconnected" id=e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4 namespace=k8s.io Sep 8 23:55:00.614536 containerd[1509]: time="2025-09-08T23:55:00.614457337Z" level=warning msg="cleaning up after shim disconnected" id=e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4 namespace=k8s.io Sep 8 23:55:00.614536 containerd[1509]: time="2025-09-08T23:55:00.614476363Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:00.628651 containerd[1509]: time="2025-09-08T23:55:00.628533057Z" level=info msg="shim disconnected" id=a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c namespace=k8s.io Sep 8 23:55:00.628651 containerd[1509]: time="2025-09-08T23:55:00.628597146Z" level=warning msg="cleaning up after shim disconnected" id=a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c namespace=k8s.io Sep 8 23:55:00.628986 containerd[1509]: time="2025-09-08T23:55:00.628808489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:00.641627 containerd[1509]: time="2025-09-08T23:55:00.641570631Z" level=info msg="StopContainer for \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\" returns successfully" Sep 8 23:55:00.642354 containerd[1509]: time="2025-09-08T23:55:00.642278110Z" level=info msg="StopPodSandbox for \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\"" Sep 8 23:55:00.642405 containerd[1509]: time="2025-09-08T23:55:00.642329515Z" level=info msg="Container to stop \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:00.642405 containerd[1509]: time="2025-09-08T23:55:00.642375220Z" level=info msg="Container to stop \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:00.642405 containerd[1509]: time="2025-09-08T23:55:00.642386621Z" level=info msg="Container to stop \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:00.642405 containerd[1509]: time="2025-09-08T23:55:00.642397612Z" level=info msg="Container to stop \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:00.642547 containerd[1509]: time="2025-09-08T23:55:00.642407551Z" level=info msg="Container to stop \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:00.650248 systemd[1]: cri-containerd-6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3.scope: Deactivated successfully. 
Sep 8 23:55:00.652510 containerd[1509]: time="2025-09-08T23:55:00.652464962Z" level=info msg="TearDown network for sandbox \"a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c\" successfully" Sep 8 23:55:00.653028 containerd[1509]: time="2025-09-08T23:55:00.652863294Z" level=info msg="StopPodSandbox for \"a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c\" returns successfully" Sep 8 23:55:00.677542 containerd[1509]: time="2025-09-08T23:55:00.676587864Z" level=info msg="shim disconnected" id=6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3 namespace=k8s.io Sep 8 23:55:00.677542 containerd[1509]: time="2025-09-08T23:55:00.677539307Z" level=warning msg="cleaning up after shim disconnected" id=6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3 namespace=k8s.io Sep 8 23:55:00.677827 containerd[1509]: time="2025-09-08T23:55:00.677557039Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:00.693712 containerd[1509]: time="2025-09-08T23:55:00.693661439Z" level=info msg="TearDown network for sandbox \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" successfully" Sep 8 23:55:00.693712 containerd[1509]: time="2025-09-08T23:55:00.693699320Z" level=info msg="StopPodSandbox for \"6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3\" returns successfully" Sep 8 23:55:00.708248 kubelet[2680]: I0908 23:55:00.708105 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgvqg\" (UniqueName: \"kubernetes.io/projected/19fc6eef-6e12-410d-81f2-cf8c13c72547-kube-api-access-dgvqg\") pod \"19fc6eef-6e12-410d-81f2-cf8c13c72547\" (UID: \"19fc6eef-6e12-410d-81f2-cf8c13c72547\") " Sep 8 23:55:00.708248 kubelet[2680]: I0908 23:55:00.708147 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19fc6eef-6e12-410d-81f2-cf8c13c72547-cilium-config-path\") pod \"19fc6eef-6e12-410d-81f2-cf8c13c72547\" (UID: \"19fc6eef-6e12-410d-81f2-cf8c13c72547\") " Sep 8 23:55:00.712197 kubelet[2680]: I0908 23:55:00.712153 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc6eef-6e12-410d-81f2-cf8c13c72547-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19fc6eef-6e12-410d-81f2-cf8c13c72547" (UID: "19fc6eef-6e12-410d-81f2-cf8c13c72547"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:55:00.713295 kubelet[2680]: I0908 23:55:00.713242 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fc6eef-6e12-410d-81f2-cf8c13c72547-kube-api-access-dgvqg" (OuterVolumeSpecName: "kube-api-access-dgvqg") pod "19fc6eef-6e12-410d-81f2-cf8c13c72547" (UID: "19fc6eef-6e12-410d-81f2-cf8c13c72547"). InnerVolumeSpecName "kube-api-access-dgvqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:55:00.808413 kubelet[2680]: I0908 23:55:00.808352 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hndrd\" (UniqueName: \"kubernetes.io/projected/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-kube-api-access-hndrd\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.808413 kubelet[2680]: I0908 23:55:00.808400 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-etc-cni-netd\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.808413 kubelet[2680]: I0908 23:55:00.808419 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-hubble-tls\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.808413 kubelet[2680]: I0908 23:55:00.808434 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-cgroup\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.808780 kubelet[2680]: I0908 23:55:00.808448 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-host-proc-sys-kernel\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.808780 kubelet[2680]: I0908 23:55:00.808467 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-clustermesh-secrets\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.808780 kubelet[2680]: I0908 23:55:00.808481 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-bpf-maps\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.808780 kubelet[2680]: I0908 23:55:00.808498 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-hostproc\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.808780 kubelet[2680]: I0908 23:55:00.808523 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-host-proc-sys-net\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.808780 kubelet[2680]: I0908 23:55:00.808541 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-config-path\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: 
\"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.809346 kubelet[2680]: I0908 23:55:00.808561 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-lib-modules\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.809346 kubelet[2680]: I0908 23:55:00.808545 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:00.809346 kubelet[2680]: I0908 23:55:00.808578 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cni-path\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.809346 kubelet[2680]: I0908 23:55:00.808645 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-run\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.809346 kubelet[2680]: I0908 23:55:00.808652 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-hostproc" (OuterVolumeSpecName: "hostproc") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:00.809346 kubelet[2680]: I0908 23:55:00.808665 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-xtables-lock\") pod \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\" (UID: \"5ea01d2d-ea55-40dd-85b3-e04f768a9d6f\") " Sep 8 23:55:00.809699 kubelet[2680]: I0908 23:55:00.808680 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:00.809699 kubelet[2680]: I0908 23:55:00.808734 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:00.809699 kubelet[2680]: I0908 23:55:00.808746 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19fc6eef-6e12-410d-81f2-cf8c13c72547-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.809699 kubelet[2680]: I0908 23:55:00.808766 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.809699 kubelet[2680]: I0908 23:55:00.808783 2680 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.809699 kubelet[2680]: I0908 23:55:00.808796 2680 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.809699 kubelet[2680]: I0908 23:55:00.808811 2680 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dgvqg\" (UniqueName: \"kubernetes.io/projected/19fc6eef-6e12-410d-81f2-cf8c13c72547-kube-api-access-dgvqg\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.809942 kubelet[2680]: I0908 23:55:00.808767 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:00.809942 kubelet[2680]: I0908 23:55:00.808850 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:00.809942 kubelet[2680]: I0908 23:55:00.808986 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:00.809942 kubelet[2680]: I0908 23:55:00.809043 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cni-path" (OuterVolumeSpecName: "cni-path") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:00.809942 kubelet[2680]: I0908 23:55:00.809070 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:00.810123 kubelet[2680]: I0908 23:55:00.809093 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:00.811887 kubelet[2680]: I0908 23:55:00.811851 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-kube-api-access-hndrd" (OuterVolumeSpecName: "kube-api-access-hndrd") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "kube-api-access-hndrd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:55:00.812484 kubelet[2680]: I0908 23:55:00.812460 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:55:00.812879 kubelet[2680]: I0908 23:55:00.812819 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:55:00.813220 kubelet[2680]: I0908 23:55:00.813194 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" (UID: "5ea01d2d-ea55-40dd-85b3-e04f768a9d6f"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 8 23:55:00.909814 kubelet[2680]: I0908 23:55:00.909752 2680 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.909814 kubelet[2680]: I0908 23:55:00.909792 2680 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.909814 kubelet[2680]: I0908 23:55:00.909808 2680 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.909814 kubelet[2680]: I0908 23:55:00.909819 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.909814 kubelet[2680]: I0908 23:55:00.909829 2680 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.910108 kubelet[2680]: I0908 23:55:00.909839 2680 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.910108 kubelet[2680]: I0908 23:55:00.909852 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.910108 kubelet[2680]: I0908 23:55:00.909862 2680 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.910108 kubelet[2680]: I0908 23:55:00.909873 2680 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.910108 kubelet[2680]: I0908 23:55:00.909883 2680 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hndrd\" (UniqueName: \"kubernetes.io/projected/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-kube-api-access-hndrd\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:00.910108 kubelet[2680]: I0908 23:55:00.909894 2680 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:01.011689 kubelet[2680]: I0908 23:55:01.011535 2680 scope.go:117] "RemoveContainer" containerID="520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce" Sep 8 23:55:01.018508 containerd[1509]: time="2025-09-08T23:55:01.018364866Z" level=info msg="RemoveContainer for \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\"" Sep 8 23:55:01.019949 systemd[1]: Removed slice kubepods-besteffort-pod19fc6eef_6e12_410d_81f2_cf8c13c72547.slice - libcontainer container 
kubepods-besteffort-pod19fc6eef_6e12_410d_81f2_cf8c13c72547.slice. Sep 8 23:55:01.021422 systemd[1]: Removed slice kubepods-burstable-pod5ea01d2d_ea55_40dd_85b3_e04f768a9d6f.slice - libcontainer container kubepods-burstable-pod5ea01d2d_ea55_40dd_85b3_e04f768a9d6f.slice. Sep 8 23:55:01.021547 systemd[1]: kubepods-burstable-pod5ea01d2d_ea55_40dd_85b3_e04f768a9d6f.slice: Consumed 8.459s CPU time, 127.5M memory peak, 444K read from disk, 13.3M written to disk. Sep 8 23:55:01.023457 containerd[1509]: time="2025-09-08T23:55:01.023407794Z" level=info msg="RemoveContainer for \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\" returns successfully" Sep 8 23:55:01.023924 kubelet[2680]: I0908 23:55:01.023788 2680 scope.go:117] "RemoveContainer" containerID="520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce" Sep 8 23:55:01.024348 containerd[1509]: time="2025-09-08T23:55:01.024313011Z" level=error msg="ContainerStatus for \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\": not found" Sep 8 23:55:01.031408 kubelet[2680]: E0908 23:55:01.031350 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\": not found" containerID="520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce" Sep 8 23:55:01.031599 kubelet[2680]: I0908 23:55:01.031405 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce"} err="failed to get container status \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"520dabe62624d13ec874a6b1dc795840f5a3ae18952ec4a9abb67294e9b802ce\": not found" Sep 8 23:55:01.031599 kubelet[2680]: I0908 23:55:01.031521 2680 scope.go:117] "RemoveContainer" containerID="e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4" Sep 8 23:55:01.033016 containerd[1509]: time="2025-09-08T23:55:01.032966166Z" level=info msg="RemoveContainer for \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\"" Sep 8 23:55:01.037305 containerd[1509]: time="2025-09-08T23:55:01.037269556Z" level=info msg="RemoveContainer for \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\" returns successfully" Sep 8 23:55:01.037562 kubelet[2680]: I0908 23:55:01.037453 2680 scope.go:117] "RemoveContainer" containerID="264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b" Sep 8 23:55:01.038765 containerd[1509]: time="2025-09-08T23:55:01.038736410Z" level=info msg="RemoveContainer for \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\"" Sep 8 23:55:01.043191 containerd[1509]: time="2025-09-08T23:55:01.043155744Z" level=info msg="RemoveContainer for \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\" returns successfully" Sep 8 23:55:01.043384 kubelet[2680]: I0908 23:55:01.043352 2680 scope.go:117] "RemoveContainer" containerID="45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac" Sep 8 23:55:01.044295 containerd[1509]: time="2025-09-08T23:55:01.044271324Z" level=info msg="RemoveContainer for \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\"" Sep 
8 23:55:01.047928 containerd[1509]: time="2025-09-08T23:55:01.047899164Z" level=info msg="RemoveContainer for \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\" returns successfully" Sep 8 23:55:01.048087 kubelet[2680]: I0908 23:55:01.048059 2680 scope.go:117] "RemoveContainer" containerID="ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20" Sep 8 23:55:01.049002 containerd[1509]: time="2025-09-08T23:55:01.048967355Z" level=info msg="RemoveContainer for \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\"" Sep 8 23:55:01.052204 containerd[1509]: time="2025-09-08T23:55:01.052171276Z" level=info msg="RemoveContainer for \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\" returns successfully" Sep 8 23:55:01.052326 kubelet[2680]: I0908 23:55:01.052305 2680 scope.go:117] "RemoveContainer" containerID="44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35" Sep 8 23:55:01.053369 containerd[1509]: time="2025-09-08T23:55:01.053157323Z" level=info msg="RemoveContainer for \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\"" Sep 8 23:55:01.057181 containerd[1509]: time="2025-09-08T23:55:01.057149713Z" level=info msg="RemoveContainer for \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\" returns successfully" Sep 8 23:55:01.057366 kubelet[2680]: I0908 23:55:01.057283 2680 scope.go:117] "RemoveContainer" containerID="e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4" Sep 8 23:55:01.057516 containerd[1509]: time="2025-09-08T23:55:01.057465882Z" level=error msg="ContainerStatus for \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\": not found" Sep 8 23:55:01.057654 kubelet[2680]: E0908 23:55:01.057629 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\": not found" containerID="e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4" Sep 8 23:55:01.057708 kubelet[2680]: I0908 23:55:01.057662 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4"} err="failed to get container status \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9dd2676ad3629dccec6b92bf813289c9a685fca36fffd3e804b7ededeb4bee4\": not found" Sep 8 23:55:01.057708 kubelet[2680]: I0908 23:55:01.057689 2680 scope.go:117] "RemoveContainer" containerID="264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b" Sep 8 23:55:01.057873 containerd[1509]: time="2025-09-08T23:55:01.057841412Z" level=error msg="ContainerStatus for \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\": not found" Sep 8 23:55:01.057992 kubelet[2680]: E0908 23:55:01.057971 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\": not found" 
containerID="264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b" Sep 8 23:55:01.058037 kubelet[2680]: I0908 23:55:01.057997 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b"} err="failed to get container status \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\": rpc error: code = NotFound desc = an error occurred when try to find container \"264770f21acd8842197c32d5e2fd3e4d308b7afa689905ffc07fbbd24534045b\": not found" Sep 8 23:55:01.058037 kubelet[2680]: I0908 23:55:01.058014 2680 scope.go:117] "RemoveContainer" containerID="45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac" Sep 8 23:55:01.058180 containerd[1509]: time="2025-09-08T23:55:01.058149186Z" level=error msg="ContainerStatus for \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\": not found" Sep 8 23:55:01.058275 kubelet[2680]: E0908 23:55:01.058255 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\": not found" containerID="45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac" Sep 8 23:55:01.058313 kubelet[2680]: I0908 23:55:01.058278 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac"} err="failed to get container status \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\": rpc error: code = NotFound desc = an error occurred when try to find container \"45642724806916ffe5192282c1a422112601b6417e92c866bb95ba40ad8fbeac\": not found" Sep 8 23:55:01.058313 kubelet[2680]: I0908 23:55:01.058292 2680 scope.go:117] "RemoveContainer" containerID="ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20" Sep 8 23:55:01.058464 containerd[1509]: time="2025-09-08T23:55:01.058434126Z" level=error msg="ContainerStatus for \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\": not found" Sep 8 23:55:01.058567 kubelet[2680]: E0908 23:55:01.058547 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\": not found" containerID="ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20" Sep 8 23:55:01.058624 kubelet[2680]: I0908 23:55:01.058570 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20"} err="failed to get container status \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff35d9505a9301cbb09b2837f8fba331fd8625606fb5373ce0d5e4bd658ebd20\": not found" Sep 8 23:55:01.058624 kubelet[2680]: I0908 23:55:01.058595 2680 scope.go:117] "RemoveContainer" containerID="44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35" Sep 8 
23:55:01.058757 containerd[1509]: time="2025-09-08T23:55:01.058734707Z" level=error msg="ContainerStatus for \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\": not found" Sep 8 23:55:01.058854 kubelet[2680]: E0908 23:55:01.058838 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\": not found" containerID="44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35" Sep 8 23:55:01.058893 kubelet[2680]: I0908 23:55:01.058856 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35"} err="failed to get container status \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\": rpc error: code = NotFound desc = an error occurred when try to find container \"44e151638c6e7eafe5e39a0bcda0f3cc1ce24c8c9d9f9f78b727aeec4ac53d35\": not found" Sep 8 23:55:01.482899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3e477723e1c623a4f23f9015a61462d75c6cd48a182745d3b0cc901e7bcc25c-rootfs.mount: Deactivated successfully. Sep 8 23:55:01.483048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3-rootfs.mount: Deactivated successfully. Sep 8 23:55:01.483153 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c575738f7154d5fc4b2cb00f9321d97247bf8fadb411b95f5604047f4bf51c3-shm.mount: Deactivated successfully. Sep 8 23:55:01.483263 systemd[1]: var-lib-kubelet-pods-19fc6eef\x2d6e12\x2d410d\x2d81f2\x2dcf8c13c72547-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddgvqg.mount: Deactivated successfully. Sep 8 23:55:01.483368 systemd[1]: var-lib-kubelet-pods-5ea01d2d\x2dea55\x2d40dd\x2d85b3\x2de04f768a9d6f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 8 23:55:01.483499 systemd[1]: var-lib-kubelet-pods-5ea01d2d\x2dea55\x2d40dd\x2d85b3\x2de04f768a9d6f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhndrd.mount: Deactivated successfully. Sep 8 23:55:01.483638 systemd[1]: var-lib-kubelet-pods-5ea01d2d\x2dea55\x2d40dd\x2d85b3\x2de04f768a9d6f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 8 23:55:01.735442 kubelet[2680]: I0908 23:55:01.735275 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fc6eef-6e12-410d-81f2-cf8c13c72547" path="/var/lib/kubelet/pods/19fc6eef-6e12-410d-81f2-cf8c13c72547/volumes" Sep 8 23:55:01.736287 kubelet[2680]: I0908 23:55:01.736243 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" path="/var/lib/kubelet/pods/5ea01d2d-ea55-40dd-85b3-e04f768a9d6f/volumes" Sep 8 23:55:02.405892 sshd[4386]: Connection closed by 10.0.0.1 port 52014 Sep 8 23:55:02.406392 sshd-session[4382]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:02.417836 systemd[1]: sshd@28-10.0.0.55:22-10.0.0.1:52014.service: Deactivated successfully. Sep 8 23:55:02.420065 systemd[1]: session-29.scope: Deactivated successfully. Sep 8 23:55:02.420943 systemd-logind[1496]: Session 29 logged out. Waiting for processes to exit. 
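The burst of "ContainerStatus ... not found" errors above is the kubelet re-querying containerd over CRI for container IDs it has just removed; containerd answers each lookup with gRPC code NotFound, which the kubelet logs and then treats as "already deleted". A minimal Go sketch of the same lookup against the CRI runtime service follows; the socket path, the cri-api package version, and the truncated container ID are assumptions for illustration, not values taken from this log.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/status"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed containerd CRI socket; adjust for the node being inspected.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Hypothetical, truncated placeholder for one of the IDs the kubelet queried after removal.
        id := "e9dd2676ad36..."
        _, err = client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
        if status.Code(err) == codes.NotFound {
            // Corresponds to the "DeleteContainer returned error ... not found" lines in the log.
            fmt.Println("container already removed:", id)
        }
    }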
Sep 8 23:55:02.428921 systemd[1]: Started sshd@29-10.0.0.55:22-10.0.0.1:52900.service - OpenSSH per-connection server daemon (10.0.0.1:52900). Sep 8 23:55:02.429518 systemd-logind[1496]: Removed session 29. Sep 8 23:55:02.470092 sshd[4542]: Accepted publickey for core from 10.0.0.1 port 52900 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:55:02.471816 sshd-session[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:02.476772 systemd-logind[1496]: New session 30 of user core. Sep 8 23:55:02.485765 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 8 23:55:03.105998 sshd[4545]: Connection closed by 10.0.0.1 port 52900 Sep 8 23:55:03.107855 sshd-session[4542]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:03.120669 systemd[1]: sshd@29-10.0.0.55:22-10.0.0.1:52900.service: Deactivated successfully. Sep 8 23:55:03.124887 systemd[1]: session-30.scope: Deactivated successfully. Sep 8 23:55:03.126108 kubelet[2680]: I0908 23:55:03.126056 2680 memory_manager.go:355] "RemoveStaleState removing state" podUID="19fc6eef-6e12-410d-81f2-cf8c13c72547" containerName="cilium-operator" Sep 8 23:55:03.126108 kubelet[2680]: I0908 23:55:03.126093 2680 memory_manager.go:355] "RemoveStaleState removing state" podUID="5ea01d2d-ea55-40dd-85b3-e04f768a9d6f" containerName="cilium-agent" Sep 8 23:55:03.131109 systemd-logind[1496]: Session 30 logged out. Waiting for processes to exit. Sep 8 23:55:03.144746 systemd[1]: Started sshd@30-10.0.0.55:22-10.0.0.1:52914.service - OpenSSH per-connection server daemon (10.0.0.1:52914). Sep 8 23:55:03.149585 systemd-logind[1496]: Removed session 30. Sep 8 23:55:03.159022 systemd[1]: Created slice kubepods-burstable-pod5802cf06_d334_4680_9d83_06a56fa4de74.slice - libcontainer container kubepods-burstable-pod5802cf06_d334_4680_9d83_06a56fa4de74.slice. Sep 8 23:55:03.189756 sshd[4556]: Accepted publickey for core from 10.0.0.1 port 52914 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:55:03.191791 sshd-session[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:03.196194 systemd-logind[1496]: New session 31 of user core. Sep 8 23:55:03.207773 systemd[1]: Started session-31.scope - Session 31 of User core. 
Sep 8 23:55:03.222844 kubelet[2680]: I0908 23:55:03.222779 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5802cf06-d334-4680-9d83-06a56fa4de74-cni-path\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.222844 kubelet[2680]: I0908 23:55:03.222824 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5802cf06-d334-4680-9d83-06a56fa4de74-xtables-lock\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.222844 kubelet[2680]: I0908 23:55:03.222848 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5802cf06-d334-4680-9d83-06a56fa4de74-cilium-ipsec-secrets\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223093 kubelet[2680]: I0908 23:55:03.222870 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5802cf06-d334-4680-9d83-06a56fa4de74-cilium-run\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223093 kubelet[2680]: I0908 23:55:03.222889 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5802cf06-d334-4680-9d83-06a56fa4de74-host-proc-sys-kernel\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223093 kubelet[2680]: I0908 23:55:03.223034 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5802cf06-d334-4680-9d83-06a56fa4de74-cilium-cgroup\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223093 kubelet[2680]: I0908 23:55:03.223089 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5802cf06-d334-4680-9d83-06a56fa4de74-lib-modules\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223265 kubelet[2680]: I0908 23:55:03.223110 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5802cf06-d334-4680-9d83-06a56fa4de74-clustermesh-secrets\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223265 kubelet[2680]: I0908 23:55:03.223141 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5802cf06-d334-4680-9d83-06a56fa4de74-host-proc-sys-net\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223265 kubelet[2680]: I0908 23:55:03.223164 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/5802cf06-d334-4680-9d83-06a56fa4de74-bpf-maps\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223265 kubelet[2680]: I0908 23:55:03.223200 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585g4\" (UniqueName: \"kubernetes.io/projected/5802cf06-d334-4680-9d83-06a56fa4de74-kube-api-access-585g4\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223265 kubelet[2680]: I0908 23:55:03.223232 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5802cf06-d334-4680-9d83-06a56fa4de74-hubble-tls\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223265 kubelet[2680]: I0908 23:55:03.223260 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5802cf06-d334-4680-9d83-06a56fa4de74-hostproc\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223469 kubelet[2680]: I0908 23:55:03.223282 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5802cf06-d334-4680-9d83-06a56fa4de74-etc-cni-netd\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.223469 kubelet[2680]: I0908 23:55:03.223304 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5802cf06-d334-4680-9d83-06a56fa4de74-cilium-config-path\") pod \"cilium-rm42r\" (UID: \"5802cf06-d334-4680-9d83-06a56fa4de74\") " pod="kube-system/cilium-rm42r" Sep 8 23:55:03.260258 sshd[4559]: Connection closed by 10.0.0.1 port 52914 Sep 8 23:55:03.260672 sshd-session[4556]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:03.274003 systemd[1]: sshd@30-10.0.0.55:22-10.0.0.1:52914.service: Deactivated successfully. Sep 8 23:55:03.276251 systemd[1]: session-31.scope: Deactivated successfully. Sep 8 23:55:03.277892 systemd-logind[1496]: Session 31 logged out. Waiting for processes to exit. Sep 8 23:55:03.283868 systemd[1]: Started sshd@31-10.0.0.55:22-10.0.0.1:52924.service - OpenSSH per-connection server daemon (10.0.0.1:52924). Sep 8 23:55:03.284777 systemd-logind[1496]: Removed session 31. Sep 8 23:55:03.324596 sshd[4565]: Accepted publickey for core from 10.0.0.1 port 52924 ssh2: RSA SHA256:YG8Fe1PH14ztyHrLZmAsO4qvauXx/FmdEEIObGbnTog Sep 8 23:55:03.326555 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:03.345385 systemd-logind[1496]: New session 32 of user core. Sep 8 23:55:03.351793 systemd[1]: Started session-32.scope - Session 32 of User core. 
Sep 8 23:55:03.468362 kubelet[2680]: E0908 23:55:03.468213 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:03.468995 containerd[1509]: time="2025-09-08T23:55:03.468930751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rm42r,Uid:5802cf06-d334-4680-9d83-06a56fa4de74,Namespace:kube-system,Attempt:0,}" Sep 8 23:55:03.495015 containerd[1509]: time="2025-09-08T23:55:03.494866248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:55:03.495015 containerd[1509]: time="2025-09-08T23:55:03.494968257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:55:03.495338 containerd[1509]: time="2025-09-08T23:55:03.495003183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:03.495338 containerd[1509]: time="2025-09-08T23:55:03.495151178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:03.517846 systemd[1]: Started cri-containerd-78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581.scope - libcontainer container 78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581. Sep 8 23:55:03.548647 containerd[1509]: time="2025-09-08T23:55:03.548562902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rm42r,Uid:5802cf06-d334-4680-9d83-06a56fa4de74,Namespace:kube-system,Attempt:0,} returns sandbox id \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\"" Sep 8 23:55:03.551639 kubelet[2680]: E0908 23:55:03.549785 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:03.553873 containerd[1509]: time="2025-09-08T23:55:03.553836971Z" level=info msg="CreateContainer within sandbox \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:55:03.569363 containerd[1509]: time="2025-09-08T23:55:03.569297045Z" level=info msg="CreateContainer within sandbox \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"565ebf8020c5bb2cafb26a2b6aa2258c9597aace48458c758b669bc43f717ec9\"" Sep 8 23:55:03.569886 containerd[1509]: time="2025-09-08T23:55:03.569856657Z" level=info msg="StartContainer for \"565ebf8020c5bb2cafb26a2b6aa2258c9597aace48458c758b669bc43f717ec9\"" Sep 8 23:55:03.603864 systemd[1]: Started cri-containerd-565ebf8020c5bb2cafb26a2b6aa2258c9597aace48458c758b669bc43f717ec9.scope - libcontainer container 565ebf8020c5bb2cafb26a2b6aa2258c9597aace48458c758b669bc43f717ec9. Sep 8 23:55:03.635175 containerd[1509]: time="2025-09-08T23:55:03.635124407Z" level=info msg="StartContainer for \"565ebf8020c5bb2cafb26a2b6aa2258c9597aace48458c758b669bc43f717ec9\" returns successfully" Sep 8 23:55:03.648302 systemd[1]: cri-containerd-565ebf8020c5bb2cafb26a2b6aa2258c9597aace48458c758b669bc43f717ec9.scope: Deactivated successfully. 
Sep 8 23:55:03.684249 containerd[1509]: time="2025-09-08T23:55:03.684170275Z" level=info msg="shim disconnected" id=565ebf8020c5bb2cafb26a2b6aa2258c9597aace48458c758b669bc43f717ec9 namespace=k8s.io Sep 8 23:55:03.684249 containerd[1509]: time="2025-09-08T23:55:03.684232370Z" level=warning msg="cleaning up after shim disconnected" id=565ebf8020c5bb2cafb26a2b6aa2258c9597aace48458c758b669bc43f717ec9 namespace=k8s.io Sep 8 23:55:03.684249 containerd[1509]: time="2025-09-08T23:55:03.684241748Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:04.022593 kubelet[2680]: E0908 23:55:04.022545 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:04.024338 containerd[1509]: time="2025-09-08T23:55:04.024282774Z" level=info msg="CreateContainer within sandbox \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:55:04.043391 containerd[1509]: time="2025-09-08T23:55:04.043322361Z" level=info msg="CreateContainer within sandbox \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d4fbbd979e6c61e8657d44ef06544164df096de25be8f81e918dbab28ab23b5\"" Sep 8 23:55:04.043861 containerd[1509]: time="2025-09-08T23:55:04.043841649Z" level=info msg="StartContainer for \"1d4fbbd979e6c61e8657d44ef06544164df096de25be8f81e918dbab28ab23b5\"" Sep 8 23:55:04.072863 systemd[1]: Started cri-containerd-1d4fbbd979e6c61e8657d44ef06544164df096de25be8f81e918dbab28ab23b5.scope - libcontainer container 1d4fbbd979e6c61e8657d44ef06544164df096de25be8f81e918dbab28ab23b5. Sep 8 23:55:04.106133 containerd[1509]: time="2025-09-08T23:55:04.106082994Z" level=info msg="StartContainer for \"1d4fbbd979e6c61e8657d44ef06544164df096de25be8f81e918dbab28ab23b5\" returns successfully" Sep 8 23:55:04.112258 systemd[1]: cri-containerd-1d4fbbd979e6c61e8657d44ef06544164df096de25be8f81e918dbab28ab23b5.scope: Deactivated successfully. 
Sep 8 23:55:04.140593 containerd[1509]: time="2025-09-08T23:55:04.140520968Z" level=info msg="shim disconnected" id=1d4fbbd979e6c61e8657d44ef06544164df096de25be8f81e918dbab28ab23b5 namespace=k8s.io Sep 8 23:55:04.140593 containerd[1509]: time="2025-09-08T23:55:04.140576632Z" level=warning msg="cleaning up after shim disconnected" id=1d4fbbd979e6c61e8657d44ef06544164df096de25be8f81e918dbab28ab23b5 namespace=k8s.io Sep 8 23:55:04.140593 containerd[1509]: time="2025-09-08T23:55:04.140584668Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:04.851444 kubelet[2680]: E0908 23:55:04.851379 2680 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:55:05.026189 kubelet[2680]: E0908 23:55:05.026145 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:05.028154 containerd[1509]: time="2025-09-08T23:55:05.028094439Z" level=info msg="CreateContainer within sandbox \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:55:05.062916 containerd[1509]: time="2025-09-08T23:55:05.062836865Z" level=info msg="CreateContainer within sandbox \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cc49cec84cbd712c60dde6bc8b4034d969f8b04fef62460797c95626652fa310\"" Sep 8 23:55:05.063728 containerd[1509]: time="2025-09-08T23:55:05.063562687Z" level=info msg="StartContainer for \"cc49cec84cbd712c60dde6bc8b4034d969f8b04fef62460797c95626652fa310\"" Sep 8 23:55:05.103861 systemd[1]: Started cri-containerd-cc49cec84cbd712c60dde6bc8b4034d969f8b04fef62460797c95626652fa310.scope - libcontainer container cc49cec84cbd712c60dde6bc8b4034d969f8b04fef62460797c95626652fa310. Sep 8 23:55:05.145035 containerd[1509]: time="2025-09-08T23:55:05.144856138Z" level=info msg="StartContainer for \"cc49cec84cbd712c60dde6bc8b4034d969f8b04fef62460797c95626652fa310\" returns successfully" Sep 8 23:55:05.148819 systemd[1]: cri-containerd-cc49cec84cbd712c60dde6bc8b4034d969f8b04fef62460797c95626652fa310.scope: Deactivated successfully. Sep 8 23:55:05.178344 containerd[1509]: time="2025-09-08T23:55:05.178245180Z" level=info msg="shim disconnected" id=cc49cec84cbd712c60dde6bc8b4034d969f8b04fef62460797c95626652fa310 namespace=k8s.io Sep 8 23:55:05.178344 containerd[1509]: time="2025-09-08T23:55:05.178317715Z" level=warning msg="cleaning up after shim disconnected" id=cc49cec84cbd712c60dde6bc8b4034d969f8b04fef62460797c95626652fa310 namespace=k8s.io Sep 8 23:55:05.178344 containerd[1509]: time="2025-09-08T23:55:05.178341940Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:05.330975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc49cec84cbd712c60dde6bc8b4034d969f8b04fef62460797c95626652fa310-rootfs.mount: Deactivated successfully. 
Sep 8 23:55:06.030127 kubelet[2680]: E0908 23:55:06.030087 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:06.031962 containerd[1509]: time="2025-09-08T23:55:06.031919348Z" level=info msg="CreateContainer within sandbox \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:55:06.046523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1969726986.mount: Deactivated successfully. Sep 8 23:55:06.050467 containerd[1509]: time="2025-09-08T23:55:06.050312222Z" level=info msg="CreateContainer within sandbox \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8de90d89d298757da2a116bbdf1548e7c845f24f82c427f46d62d9deef0341ca\"" Sep 8 23:55:06.052486 containerd[1509]: time="2025-09-08T23:55:06.052446680Z" level=info msg="StartContainer for \"8de90d89d298757da2a116bbdf1548e7c845f24f82c427f46d62d9deef0341ca\"" Sep 8 23:55:06.109750 systemd[1]: Started cri-containerd-8de90d89d298757da2a116bbdf1548e7c845f24f82c427f46d62d9deef0341ca.scope - libcontainer container 8de90d89d298757da2a116bbdf1548e7c845f24f82c427f46d62d9deef0341ca. Sep 8 23:55:06.137936 systemd[1]: cri-containerd-8de90d89d298757da2a116bbdf1548e7c845f24f82c427f46d62d9deef0341ca.scope: Deactivated successfully. Sep 8 23:55:06.140518 containerd[1509]: time="2025-09-08T23:55:06.140480771Z" level=info msg="StartContainer for \"8de90d89d298757da2a116bbdf1548e7c845f24f82c427f46d62d9deef0341ca\" returns successfully" Sep 8 23:55:06.164911 containerd[1509]: time="2025-09-08T23:55:06.164842359Z" level=info msg="shim disconnected" id=8de90d89d298757da2a116bbdf1548e7c845f24f82c427f46d62d9deef0341ca namespace=k8s.io Sep 8 23:55:06.164911 containerd[1509]: time="2025-09-08T23:55:06.164906348Z" level=warning msg="cleaning up after shim disconnected" id=8de90d89d298757da2a116bbdf1548e7c845f24f82c427f46d62d9deef0341ca namespace=k8s.io Sep 8 23:55:06.164911 containerd[1509]: time="2025-09-08T23:55:06.164917459Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:06.330978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8de90d89d298757da2a116bbdf1548e7c845f24f82c427f46d62d9deef0341ca-rootfs.mount: Deactivated successfully. 
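The repeating create/start/deactivate cycles in the entries above are cilium's short-lived setup containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) running to completion one after another inside sandbox 78be4282..., each followed by a "shim disconnected" cleanup before the long-running cilium-agent container starts. A hedged Go sketch of listing every container belonging to that sandbox via CRI is below; it reuses the assumed containerd socket from the earlier sketch and truncates the sandbox ID for readability.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed socket path
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        // Sandbox ID taken from the RunPodSandbox line for cilium-rm42r (truncated here).
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{PodSandboxId: "78be4282cf7d..."},
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Exited setup containers show up here alongside the running cilium-agent.
            fmt.Println(c.Metadata.Name, c.State)
        }
    }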
Sep 8 23:55:07.035151 kubelet[2680]: E0908 23:55:07.034739 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:07.037684 containerd[1509]: time="2025-09-08T23:55:07.037624218Z" level=info msg="CreateContainer within sandbox \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:55:07.058950 containerd[1509]: time="2025-09-08T23:55:07.058558000Z" level=info msg="CreateContainer within sandbox \"78be4282cf7d3190ce18bb7f824a74c52a2ad0f0d72b02f5093e8c448561c581\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d470912988b781d8160a60497b6d0e01939f7b893bd07e255db601c6954f3e13\"" Sep 8 23:55:07.060680 containerd[1509]: time="2025-09-08T23:55:07.059406772Z" level=info msg="StartContainer for \"d470912988b781d8160a60497b6d0e01939f7b893bd07e255db601c6954f3e13\"" Sep 8 23:55:07.092767 systemd[1]: Started cri-containerd-d470912988b781d8160a60497b6d0e01939f7b893bd07e255db601c6954f3e13.scope - libcontainer container d470912988b781d8160a60497b6d0e01939f7b893bd07e255db601c6954f3e13. Sep 8 23:55:07.127577 containerd[1509]: time="2025-09-08T23:55:07.127526558Z" level=info msg="StartContainer for \"d470912988b781d8160a60497b6d0e01939f7b893bd07e255db601c6954f3e13\" returns successfully" Sep 8 23:55:07.617026 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 8 23:55:08.045369 kubelet[2680]: E0908 23:55:08.045314 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:08.075295 kubelet[2680]: I0908 23:55:08.075110 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rm42r" podStartSLOduration=5.07507808 podStartE2EDuration="5.07507808s" podCreationTimestamp="2025-09-08 23:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:55:08.074777359 +0000 UTC m=+108.650576094" watchObservedRunningTime="2025-09-08 23:55:08.07507808 +0000 UTC m=+108.650876795" Sep 8 23:55:09.468961 kubelet[2680]: E0908 23:55:09.468884 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:11.030403 systemd-networkd[1432]: lxc_health: Link UP Sep 8 23:55:11.040019 systemd-networkd[1432]: lxc_health: Gained carrier Sep 8 23:55:11.473865 kubelet[2680]: E0908 23:55:11.470274 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:12.055426 kubelet[2680]: E0908 23:55:12.055388 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:12.326978 systemd-networkd[1432]: lxc_health: Gained IPv6LL Sep 8 23:55:13.058009 kubelet[2680]: E0908 23:55:13.057968 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:13.732463 kubelet[2680]: E0908 23:55:13.732420 2680 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:16.164081 sshd[4572]: Connection closed by 10.0.0.1 port 52924 Sep 8 23:55:16.164970 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:16.170028 systemd[1]: sshd@31-10.0.0.55:22-10.0.0.1:52924.service: Deactivated successfully. Sep 8 23:55:16.172584 systemd[1]: session-32.scope: Deactivated successfully. Sep 8 23:55:16.173404 systemd-logind[1496]: Session 32 logged out. Waiting for processes to exit. Sep 8 23:55:16.174350 systemd-logind[1496]: Removed session 32.