Sep 8 23:56:55.904775 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:08:00 -00 2025 Sep 8 23:56:55.904798 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:56:55.904810 kernel: BIOS-provided physical RAM map: Sep 8 23:56:55.904817 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 8 23:56:55.904823 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 8 23:56:55.904830 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 8 23:56:55.904838 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 8 23:56:55.904845 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 8 23:56:55.904851 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 8 23:56:55.904858 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 8 23:56:55.904865 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 8 23:56:55.904874 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 8 23:56:55.904885 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 8 23:56:55.904892 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 8 23:56:55.904903 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 8 23:56:55.904910 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 8 23:56:55.904920 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 8 23:56:55.904927 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 8 23:56:55.904934 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 8 23:56:55.904941 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 8 23:56:55.904948 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 8 23:56:55.904956 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 8 23:56:55.904963 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 8 23:56:55.904970 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 8 23:56:55.904977 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 8 23:56:55.904984 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 8 23:56:55.904992 kernel: NX (Execute Disable) protection: active Sep 8 23:56:55.905001 kernel: APIC: Static calls initialized Sep 8 23:56:55.905008 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 8 23:56:55.905016 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 8 23:56:55.905023 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 8 23:56:55.905030 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 8 23:56:55.905037 kernel: extended physical RAM map: Sep 8 23:56:55.905044 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 8 23:56:55.905051 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Sep 8 23:56:55.905060 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 8 23:56:55.905069 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 8 23:56:55.905080 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 8 23:56:55.905089 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 8 23:56:55.905101 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 8 23:56:55.905112 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Sep 8 23:56:55.905119 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Sep 8 23:56:55.905127 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Sep 8 23:56:55.905134 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Sep 8 23:56:55.905141 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Sep 8 23:56:55.905154 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 8 23:56:55.905161 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 8 23:56:55.905169 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 8 23:56:55.905176 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 8 23:56:55.905184 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 8 23:56:55.905191 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 8 23:56:55.905199 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 8 23:56:55.905206 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 8 23:56:55.905213 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 8 23:56:55.905223 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 8 23:56:55.905231 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 8 23:56:55.905238 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 8 23:56:55.905245 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 8 23:56:55.905255 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 8 23:56:55.905262 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 8 23:56:55.905270 kernel: efi: EFI v2.7 by EDK II Sep 8 23:56:55.905277 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Sep 8 23:56:55.905285 kernel: random: crng init done Sep 8 23:56:55.905292 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 8 23:56:55.905300 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 8 23:56:55.905311 kernel: secureboot: Secure boot disabled Sep 8 23:56:55.905324 kernel: SMBIOS 2.8 present. 
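
The BIOS-e820 ranges above are inclusive start/end addresses, so each entry covers end - start + 1 bytes. A minimal Python sketch, purely illustrative, that tallies a handful of the "usable" entries copied from this log:

import re

# A few of the "usable" BIOS-e820 entries copied from the log above; the full
# map has more rows, and the end address of each range is inclusive.
E820_SAMPLE = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
"""

RANGE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

def usable_bytes(text):
    # Sum the sizes of all ranges whose type column says "usable".
    total = 0
    for line in text.splitlines():
        m = RANGE.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

print(f"usable RAM in the sampled ranges: {usable_bytes(E820_SAMPLE) / 2**20:.1f} MiB")
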
Sep 8 23:56:55.905334 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 8 23:56:55.905344 kernel: Hypervisor detected: KVM Sep 8 23:56:55.905353 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 8 23:56:55.905360 kernel: kvm-clock: using sched offset of 4010379375 cycles Sep 8 23:56:55.905368 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 8 23:56:55.905376 kernel: tsc: Detected 2794.748 MHz processor Sep 8 23:56:55.905384 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 8 23:56:55.905392 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 8 23:56:55.905400 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 8 23:56:55.905410 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 8 23:56:55.905418 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 8 23:56:55.905425 kernel: Using GB pages for direct mapping Sep 8 23:56:55.905433 kernel: ACPI: Early table checksum verification disabled Sep 8 23:56:55.905441 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 8 23:56:55.905449 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 8 23:56:55.905458 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:55.905469 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:55.905480 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 8 23:56:55.905494 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:55.905504 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:55.905563 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:55.905575 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:55.905585 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 8 23:56:55.905593 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 8 23:56:55.905601 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 8 23:56:55.905608 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 8 23:56:55.905616 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 8 23:56:55.905627 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 8 23:56:55.905635 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 8 23:56:55.905643 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 8 23:56:55.905653 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 8 23:56:55.905664 kernel: No NUMA configuration found Sep 8 23:56:55.905674 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 8 23:56:55.905685 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Sep 8 23:56:55.905695 kernel: Zone ranges: Sep 8 23:56:55.905706 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 8 23:56:55.905720 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 8 23:56:55.905729 kernel: Normal empty Sep 8 23:56:55.905747 kernel: Movable zone start for each node Sep 8 23:56:55.905755 kernel: Early memory node ranges Sep 8 23:56:55.905763 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 8 23:56:55.905771 kernel: node 
0: [mem 0x0000000000100000-0x00000000007fffff] Sep 8 23:56:55.905779 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 8 23:56:55.905786 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 8 23:56:55.905794 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 8 23:56:55.905801 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 8 23:56:55.905812 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Sep 8 23:56:55.905819 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Sep 8 23:56:55.905827 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 8 23:56:55.905834 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 8 23:56:55.905842 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 8 23:56:55.905858 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 8 23:56:55.905871 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 8 23:56:55.905882 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 8 23:56:55.905894 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 8 23:56:55.905905 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 8 23:56:55.905919 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 8 23:56:55.905932 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 8 23:56:55.905940 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 8 23:56:55.905948 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 8 23:56:55.905956 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 8 23:56:55.905964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 8 23:56:55.905975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 8 23:56:55.905983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 8 23:56:55.905991 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 8 23:56:55.905999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 8 23:56:55.906006 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 8 23:56:55.906014 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 8 23:56:55.906022 kernel: TSC deadline timer available Sep 8 23:56:55.906030 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 8 23:56:55.906038 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 8 23:56:55.906048 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 8 23:56:55.906056 kernel: kvm-guest: setup PV sched yield Sep 8 23:56:55.906064 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 8 23:56:55.906072 kernel: Booting paravirtualized kernel on KVM Sep 8 23:56:55.906081 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 8 23:56:55.906089 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 8 23:56:55.906097 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 8 23:56:55.906105 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 8 23:56:55.906113 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 8 23:56:55.906125 kernel: kvm-guest: PV spinlocks enabled Sep 8 23:56:55.906134 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 8 23:56:55.906145 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:56:55.906154 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 8 23:56:55.906162 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 8 23:56:55.906172 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 8 23:56:55.906180 kernel: Fallback order for Node 0: 0 Sep 8 23:56:55.906188 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Sep 8 23:56:55.906196 kernel: Policy zone: DMA32 Sep 8 23:56:55.906207 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 8 23:56:55.906215 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43504K init, 1572K bss, 177824K reserved, 0K cma-reserved) Sep 8 23:56:55.906223 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 8 23:56:55.906231 kernel: ftrace: allocating 37943 entries in 149 pages Sep 8 23:56:55.906239 kernel: ftrace: allocated 149 pages with 4 groups Sep 8 23:56:55.906247 kernel: Dynamic Preempt: voluntary Sep 8 23:56:55.906255 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 8 23:56:55.906263 kernel: rcu: RCU event tracing is enabled. Sep 8 23:56:55.906274 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 8 23:56:55.906282 kernel: Trampoline variant of Tasks RCU enabled. Sep 8 23:56:55.906290 kernel: Rude variant of Tasks RCU enabled. Sep 8 23:56:55.906298 kernel: Tracing variant of Tasks RCU enabled. Sep 8 23:56:55.906306 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 8 23:56:55.906314 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 8 23:56:55.906322 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 8 23:56:55.906330 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 8 23:56:55.906338 kernel: Console: colour dummy device 80x25 Sep 8 23:56:55.906346 kernel: printk: console [ttyS0] enabled Sep 8 23:56:55.906356 kernel: ACPI: Core revision 20230628 Sep 8 23:56:55.906364 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 8 23:56:55.906372 kernel: APIC: Switch to symmetric I/O mode setup Sep 8 23:56:55.906380 kernel: x2apic enabled Sep 8 23:56:55.906388 kernel: APIC: Switched APIC routing to: physical x2apic Sep 8 23:56:55.906399 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 8 23:56:55.906407 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 8 23:56:55.906415 kernel: kvm-guest: setup PV IPIs Sep 8 23:56:55.906423 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 8 23:56:55.906436 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 8 23:56:55.906447 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 8 23:56:55.906458 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 8 23:56:55.906469 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 8 23:56:55.906480 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 8 23:56:55.906489 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 8 23:56:55.906497 kernel: Spectre V2 : Mitigation: Retpolines Sep 8 23:56:55.906505 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 8 23:56:55.906513 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 8 23:56:55.906535 kernel: active return thunk: retbleed_return_thunk Sep 8 23:56:55.906543 kernel: RETBleed: Mitigation: untrained return thunk Sep 8 23:56:55.906551 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 8 23:56:55.906560 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 8 23:56:55.906567 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 8 23:56:55.906576 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 8 23:56:55.906587 kernel: active return thunk: srso_return_thunk Sep 8 23:56:55.906595 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 8 23:56:55.906606 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 8 23:56:55.906614 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 8 23:56:55.906622 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 8 23:56:55.906629 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 8 23:56:55.906638 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 8 23:56:55.906646 kernel: Freeing SMP alternatives memory: 32K Sep 8 23:56:55.906654 kernel: pid_max: default: 32768 minimum: 301 Sep 8 23:56:55.906662 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 8 23:56:55.906670 kernel: landlock: Up and running. Sep 8 23:56:55.906683 kernel: SELinux: Initializing. Sep 8 23:56:55.906694 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:56:55.906705 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:56:55.906716 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 8 23:56:55.906727 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:56:55.906739 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:56:55.906760 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:56:55.906771 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 8 23:56:55.906785 kernel: ... version: 0 Sep 8 23:56:55.906796 kernel: ... bit width: 48 Sep 8 23:56:55.906807 kernel: ... generic registers: 6 Sep 8 23:56:55.906817 kernel: ... value mask: 0000ffffffffffff Sep 8 23:56:55.906825 kernel: ... max period: 00007fffffffffff Sep 8 23:56:55.906833 kernel: ... fixed-purpose events: 0 Sep 8 23:56:55.906841 kernel: ... 
event mask: 000000000000003f Sep 8 23:56:55.906849 kernel: signal: max sigframe size: 1776 Sep 8 23:56:55.906856 kernel: rcu: Hierarchical SRCU implementation. Sep 8 23:56:55.906865 kernel: rcu: Max phase no-delay instances is 400. Sep 8 23:56:55.906875 kernel: smp: Bringing up secondary CPUs ... Sep 8 23:56:55.906883 kernel: smpboot: x86: Booting SMP configuration: Sep 8 23:56:55.906891 kernel: .... node #0, CPUs: #1 #2 #3 Sep 8 23:56:55.906899 kernel: smp: Brought up 1 node, 4 CPUs Sep 8 23:56:55.906907 kernel: smpboot: Max logical packages: 1 Sep 8 23:56:55.906915 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 8 23:56:55.906922 kernel: devtmpfs: initialized Sep 8 23:56:55.906930 kernel: x86/mm: Memory block size: 128MB Sep 8 23:56:55.906938 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 8 23:56:55.906949 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 8 23:56:55.906957 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 8 23:56:55.906965 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 8 23:56:55.906973 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Sep 8 23:56:55.906981 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 8 23:56:55.906989 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 8 23:56:55.906997 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 8 23:56:55.907005 kernel: pinctrl core: initialized pinctrl subsystem Sep 8 23:56:55.907013 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 8 23:56:55.907023 kernel: audit: initializing netlink subsys (disabled) Sep 8 23:56:55.907031 kernel: audit: type=2000 audit(1757375816.037:1): state=initialized audit_enabled=0 res=1 Sep 8 23:56:55.907039 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 8 23:56:55.907047 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 8 23:56:55.907055 kernel: cpuidle: using governor menu Sep 8 23:56:55.907063 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 8 23:56:55.907073 kernel: dca service started, version 1.12.1 Sep 8 23:56:55.907085 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 8 23:56:55.907096 kernel: PCI: Using configuration type 1 for base access Sep 8 23:56:55.907110 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
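
The per-CPU figure of 5589.49 BogoMIPS follows from the preset lpj value, which here equals the TSC rate in kHz (2794.748 MHz), and the 4-CPU total of 22357.98 is the same quantity summed over the CPUs. A small check, assuming the usual formula bogomips = lpj / (500000 / HZ) and a CONFIG_HZ of 1000 for this build (an assumption; the log does not state HZ):

HZ = 1000                     # assumed CONFIG_HZ for this build (not stated in the log)
lpj = 2794748                 # loops-per-jiffy preset from the TSC, per the log line above
cpus = 4

per_cpu = lpj / (500000 / HZ)            # classic BogoMIPS formula
total = cpus * lpj / (500000 / HZ)

print(f"per CPU: {per_cpu:.2f} BogoMIPS")    # ~5589.50; the kernel truncates to 5589.49
print(f"{cpus} CPUs: {total:.2f} BogoMIPS")  # 22357.98, matching the smpboot line
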
Sep 8 23:56:55.907121 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 8 23:56:55.907132 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 8 23:56:55.907142 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 8 23:56:55.907150 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 8 23:56:55.907158 kernel: ACPI: Added _OSI(Module Device) Sep 8 23:56:55.907166 kernel: ACPI: Added _OSI(Processor Device) Sep 8 23:56:55.907174 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 8 23:56:55.907182 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 8 23:56:55.907193 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 8 23:56:55.907201 kernel: ACPI: Interpreter enabled Sep 8 23:56:55.907209 kernel: ACPI: PM: (supports S0 S3 S5) Sep 8 23:56:55.907217 kernel: ACPI: Using IOAPIC for interrupt routing Sep 8 23:56:55.907225 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 8 23:56:55.907233 kernel: PCI: Using E820 reservations for host bridge windows Sep 8 23:56:55.907241 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 8 23:56:55.907249 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 8 23:56:55.907488 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 8 23:56:55.907653 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 8 23:56:55.907798 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 8 23:56:55.907810 kernel: PCI host bridge to bus 0000:00 Sep 8 23:56:55.907975 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 8 23:56:55.908100 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 8 23:56:55.908219 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 8 23:56:55.908343 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 8 23:56:55.908474 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 8 23:56:55.908646 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 8 23:56:55.908796 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 8 23:56:55.909008 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 8 23:56:55.909175 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 8 23:56:55.909327 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 8 23:56:55.909468 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 8 23:56:55.909633 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 8 23:56:55.909792 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 8 23:56:55.909937 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 8 23:56:55.910172 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 8 23:56:55.910324 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 8 23:56:55.910487 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 8 23:56:55.910653 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Sep 8 23:56:55.910825 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 8 23:56:55.910961 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 8 23:56:55.911093 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 8 23:56:55.911233 
kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Sep 8 23:56:55.911475 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 8 23:56:55.911647 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 8 23:56:55.911791 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 8 23:56:55.911931 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 8 23:56:55.912083 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 8 23:56:55.912254 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 8 23:56:55.912389 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 8 23:56:55.912650 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 8 23:56:55.912869 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 8 23:56:55.913093 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 8 23:56:55.913268 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 8 23:56:55.913423 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 8 23:56:55.913440 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 8 23:56:55.913451 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 8 23:56:55.913462 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 8 23:56:55.913479 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 8 23:56:55.913490 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 8 23:56:55.913501 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 8 23:56:55.913528 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 8 23:56:55.913540 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 8 23:56:55.913551 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 8 23:56:55.913562 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 8 23:56:55.913573 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 8 23:56:55.913584 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 8 23:56:55.913600 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 8 23:56:55.913611 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 8 23:56:55.913620 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 8 23:56:55.913630 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 8 23:56:55.913641 kernel: iommu: Default domain type: Translated Sep 8 23:56:55.913652 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 8 23:56:55.913663 kernel: efivars: Registered efivars operations Sep 8 23:56:55.913675 kernel: PCI: Using ACPI for IRQ routing Sep 8 23:56:55.913686 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 8 23:56:55.913700 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 8 23:56:55.913711 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 8 23:56:55.913719 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Sep 8 23:56:55.913727 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Sep 8 23:56:55.913735 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 8 23:56:55.913751 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 8 23:56:55.913760 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Sep 8 23:56:55.913768 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 8 23:56:55.913916 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 8 
23:56:55.914073 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 8 23:56:55.914215 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 8 23:56:55.914228 kernel: vgaarb: loaded Sep 8 23:56:55.914238 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 8 23:56:55.914249 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 8 23:56:55.914260 kernel: clocksource: Switched to clocksource kvm-clock Sep 8 23:56:55.914271 kernel: VFS: Disk quotas dquot_6.6.0 Sep 8 23:56:55.914282 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 8 23:56:55.914298 kernel: pnp: PnP ACPI init Sep 8 23:56:55.914490 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 8 23:56:55.914505 kernel: pnp: PnP ACPI: found 6 devices Sep 8 23:56:55.914529 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 8 23:56:55.914540 kernel: NET: Registered PF_INET protocol family Sep 8 23:56:55.914579 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 8 23:56:55.914593 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 8 23:56:55.914605 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 8 23:56:55.914620 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 8 23:56:55.914632 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 8 23:56:55.914643 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 8 23:56:55.914655 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:56:55.914666 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:56:55.914678 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 8 23:56:55.914688 kernel: NET: Registered PF_XDP protocol family Sep 8 23:56:55.914843 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 8 23:56:55.914978 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 8 23:56:55.915112 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 8 23:56:55.915250 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 8 23:56:55.915454 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 8 23:56:55.915635 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 8 23:56:55.915773 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 8 23:56:55.915903 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 8 23:56:55.915919 kernel: PCI: CLS 0 bytes, default 64 Sep 8 23:56:55.915930 kernel: Initialise system trusted keyrings Sep 8 23:56:55.915948 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 8 23:56:55.915959 kernel: Key type asymmetric registered Sep 8 23:56:55.915970 kernel: Asymmetric key parser 'x509' registered Sep 8 23:56:55.915982 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 8 23:56:55.915994 kernel: io scheduler mq-deadline registered Sep 8 23:56:55.916005 kernel: io scheduler kyber registered Sep 8 23:56:55.916017 kernel: io scheduler bfq registered Sep 8 23:56:55.916029 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 8 23:56:55.916041 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 8 23:56:55.916056 kernel: ACPI: \_SB_.GSIH: Enabled at 
IRQ 23 Sep 8 23:56:55.916073 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 8 23:56:55.916084 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 8 23:56:55.916096 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 8 23:56:55.916108 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 8 23:56:55.916123 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 8 23:56:55.916135 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 8 23:56:55.916147 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 8 23:56:55.916379 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 8 23:56:55.916562 kernel: rtc_cmos 00:04: registered as rtc0 Sep 8 23:56:55.916723 kernel: rtc_cmos 00:04: setting system clock to 2025-09-08T23:56:55 UTC (1757375815) Sep 8 23:56:55.916882 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 8 23:56:55.916895 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 8 23:56:55.916910 kernel: efifb: probing for efifb Sep 8 23:56:55.916918 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 8 23:56:55.916927 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 8 23:56:55.916939 kernel: efifb: scrolling: redraw Sep 8 23:56:55.916951 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 8 23:56:55.916962 kernel: Console: switching to colour frame buffer device 160x50 Sep 8 23:56:55.916974 kernel: fb0: EFI VGA frame buffer device Sep 8 23:56:55.916985 kernel: pstore: Using crash dump compression: deflate Sep 8 23:56:55.916997 kernel: pstore: Registered efi_pstore as persistent store backend Sep 8 23:56:55.917012 kernel: NET: Registered PF_INET6 protocol family Sep 8 23:56:55.917023 kernel: Segment Routing with IPv6 Sep 8 23:56:55.917035 kernel: In-situ OAM (IOAM) with IPv6 Sep 8 23:56:55.917046 kernel: NET: Registered PF_PACKET protocol family Sep 8 23:56:55.917058 kernel: Key type dns_resolver registered Sep 8 23:56:55.917069 kernel: IPI shorthand broadcast: enabled Sep 8 23:56:55.917081 kernel: sched_clock: Marking stable (1064002317, 323230143)->(1414139822, -26907362) Sep 8 23:56:55.917093 kernel: registered taskstats version 1 Sep 8 23:56:55.917104 kernel: Loading compiled-in X.509 certificates Sep 8 23:56:55.917113 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: c16a276a56169aed770943c7e14b6e7e5f4f7133' Sep 8 23:56:55.917125 kernel: Key type .fscrypt registered Sep 8 23:56:55.917133 kernel: Key type fscrypt-provisioning registered Sep 8 23:56:55.917141 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 8 23:56:55.917150 kernel: ima: Allocated hash algorithm: sha1 Sep 8 23:56:55.917158 kernel: ima: No architecture policies found Sep 8 23:56:55.917166 kernel: clk: Disabling unused clocks Sep 8 23:56:55.917175 kernel: Freeing unused kernel image (initmem) memory: 43504K Sep 8 23:56:55.917183 kernel: Write protecting the kernel read-only data: 38912k Sep 8 23:56:55.917194 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 8 23:56:55.917202 kernel: Run /init as init process Sep 8 23:56:55.917210 kernel: with arguments: Sep 8 23:56:55.917219 kernel: /init Sep 8 23:56:55.917227 kernel: with environment: Sep 8 23:56:55.917237 kernel: HOME=/ Sep 8 23:56:55.917249 kernel: TERM=linux Sep 8 23:56:55.917260 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 8 23:56:55.917272 systemd[1]: Successfully made /usr/ read-only. 
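
The rtc_cmos line above pairs the wall-clock time with the raw epoch value 1757375815, and the earlier audit record carries 1757375816.037; converting them back confirms they line up with the journal timestamps:

from datetime import datetime, timezone

# Raw epoch values carried by the log: rtc_cmos reports 1757375815 next to the
# human-readable 2025-09-08T23:56:55 UTC, and the audit record shows 1757375816.037.
for epoch in (1757375815, 1757375816.037):
    print(epoch, "->", datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
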
Sep 8 23:56:55.917291 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:56:55.917302 systemd[1]: Detected virtualization kvm. Sep 8 23:56:55.917311 systemd[1]: Detected architecture x86-64. Sep 8 23:56:55.917323 systemd[1]: Running in initrd. Sep 8 23:56:55.917335 systemd[1]: No hostname configured, using default hostname. Sep 8 23:56:55.917348 systemd[1]: Hostname set to . Sep 8 23:56:55.917360 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:56:55.917374 systemd[1]: Queued start job for default target initrd.target. Sep 8 23:56:55.917383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:56:55.917392 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:56:55.917401 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 8 23:56:55.917410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:56:55.917420 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 8 23:56:55.917429 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 8 23:56:55.917442 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 8 23:56:55.917451 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 8 23:56:55.917460 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:56:55.917469 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:56:55.917478 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:56:55.917487 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:56:55.917495 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:56:55.917504 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:56:55.917526 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:56:55.917539 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:56:55.917547 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 8 23:56:55.917558 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 8 23:56:55.917571 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:56:55.917583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:56:55.917595 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:56:55.917608 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:56:55.917620 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 8 23:56:55.917636 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:56:55.917648 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 8 23:56:55.917660 systemd[1]: Starting systemd-fsck-usr.service... 
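
Unit names such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device are the systemd-escaped forms of paths like /dev/disk/by-label/EFI-SYSTEM ('/' becomes '-', a literal '-' becomes \x2d). A rough sketch of that mapping, simplified from what systemd-escape --path actually implements (corner cases such as leading dots and non-ASCII bytes are only approximated here):

def systemd_escape_path(path):
    # Rough equivalent of `systemd-escape --path`: strip the leading '/',
    # turn '/' separators into '-', and write any other character that is
    # not an ASCII letter, digit, '_' or (non-leading) '.' as \xXX, so a
    # literal '-' inside a component becomes '\x2d'.
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isascii() and (ch.isalnum() or ch == "_" or (ch == "." and i > 0)):
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device -- the unit name that appears above
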
Sep 8 23:56:55.917672 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:56:55.917684 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:56:55.917696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:56:55.917708 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 8 23:56:55.917721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:56:55.917736 systemd[1]: Finished systemd-fsck-usr.service. Sep 8 23:56:55.917784 systemd-journald[194]: Collecting audit messages is disabled. Sep 8 23:56:55.917809 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 8 23:56:55.917819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:56:55.917831 systemd-journald[194]: Journal started Sep 8 23:56:55.917858 systemd-journald[194]: Runtime Journal (/run/log/journal/9d32e83acd68400fbe890514b4f12c77) is 6M, max 48.2M, 42.2M free. Sep 8 23:56:55.917913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:56:55.913055 systemd-modules-load[195]: Inserted module 'overlay' Sep 8 23:56:55.922632 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:56:55.923262 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:56:55.927134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:56:55.928369 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:56:55.942984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:56:55.947474 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:56:55.949914 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:56:55.958542 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 8 23:56:55.960755 systemd-modules-load[195]: Inserted module 'br_netfilter' Sep 8 23:56:55.961669 kernel: Bridge firewalling registered Sep 8 23:56:55.964756 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 8 23:56:55.966965 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:56:55.968771 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:56:55.982087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:56:55.984537 dracut-cmdline[225]: dracut-dracut-053 Sep 8 23:56:55.987925 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:56:55.994660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:56:56.029699 systemd-resolved[241]: Positive Trust Anchors: Sep 8 23:56:56.029714 systemd-resolved[241]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:56:56.029754 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:56:56.032432 systemd-resolved[241]: Defaulting to hostname 'linux'. Sep 8 23:56:56.033798 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:56:56.039771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:56:56.107570 kernel: SCSI subsystem initialized Sep 8 23:56:56.116554 kernel: Loading iSCSI transport class v2.0-870. Sep 8 23:56:56.127550 kernel: iscsi: registered transport (tcp) Sep 8 23:56:56.148581 kernel: iscsi: registered transport (qla4xxx) Sep 8 23:56:56.148609 kernel: QLogic iSCSI HBA Driver Sep 8 23:56:56.202196 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 8 23:56:56.214664 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 8 23:56:56.241174 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 8 23:56:56.241235 kernel: device-mapper: uevent: version 1.0.3 Sep 8 23:56:56.241253 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 8 23:56:56.284558 kernel: raid6: avx2x4 gen() 25779 MB/s Sep 8 23:56:56.301536 kernel: raid6: avx2x2 gen() 28309 MB/s Sep 8 23:56:56.318695 kernel: raid6: avx2x1 gen() 24726 MB/s Sep 8 23:56:56.318763 kernel: raid6: using algorithm avx2x2 gen() 28309 MB/s Sep 8 23:56:56.336675 kernel: raid6: .... xor() 18878 MB/s, rmw enabled Sep 8 23:56:56.336697 kernel: raid6: using avx2x2 recovery algorithm Sep 8 23:56:56.359544 kernel: xor: automatically using best checksumming function avx Sep 8 23:56:56.526578 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 8 23:56:56.540959 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:56:56.551699 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:56:56.568495 systemd-udevd[415]: Using default interface naming scheme 'v255'. Sep 8 23:56:56.574769 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:56:56.582741 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 8 23:56:56.597359 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Sep 8 23:56:56.634193 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:56:56.654800 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:56:56.729667 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:56:56.739682 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 8 23:56:56.752507 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 8 23:56:56.755955 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 8 23:56:56.758351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:56:56.759510 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:56:56.771054 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 8 23:56:56.772651 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 8 23:56:56.780698 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 8 23:56:56.783154 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:56:56.795598 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 8 23:56:56.795674 kernel: GPT:9289727 != 19775487 Sep 8 23:56:56.795702 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 8 23:56:56.795725 kernel: GPT:9289727 != 19775487 Sep 8 23:56:56.795741 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 8 23:56:56.795758 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:56:56.796542 kernel: cryptd: max_cpu_qlen set to 1000 Sep 8 23:56:56.807997 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:56:56.809113 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:56:56.810911 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:56:56.818540 kernel: libata version 3.00 loaded. Sep 8 23:56:56.815577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:56:56.815757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:56:56.818670 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:56:56.827693 kernel: AVX2 version of gcm_enc/dec engaged. Sep 8 23:56:56.827731 kernel: AES CTR mode by8 optimization enabled Sep 8 23:56:56.827835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:56:56.838579 kernel: BTRFS: device fsid 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (471) Sep 8 23:56:56.842859 kernel: ahci 0000:00:1f.2: version 3.0 Sep 8 23:56:56.843120 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 8 23:56:56.847904 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 8 23:56:56.848100 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (476) Sep 8 23:56:56.848113 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 8 23:56:56.852545 kernel: scsi host0: ahci Sep 8 23:56:56.852810 kernel: scsi host1: ahci Sep 8 23:56:56.854836 kernel: scsi host2: ahci Sep 8 23:56:56.855028 kernel: scsi host3: ahci Sep 8 23:56:56.855285 kernel: scsi host4: ahci Sep 8 23:56:56.856542 kernel: scsi host5: ahci Sep 8 23:56:56.856829 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 8 23:56:56.856845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
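
The virtio_blk line above reports 19775488 512-byte blocks, quoted both in decimal GB and binary GiB, and the GPT warning compares where the backup header actually sits (sector 9289727) with the last sector of the disk (19775487), the usual sign that the image was written for a smaller disk than the device now backing it. Checking the arithmetic:

blocks, block_size = 19775488, 512           # from the virtio_blk line above
size = blocks * block_size
print(f"{size} bytes = {size / 1e9:.1f} GB = {size / 2**30:.2f} GiB")
# 10125049856 bytes = 10.1 GB = 9.43 GiB -- the same capacity in decimal vs binary units

# The GPT complaint compares the sector holding the backup header (9289727)
# with the disk's actual last sector (blocks - 1):
print("last sector:", blocks - 1, " backup header at:", 9289727)
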
Sep 8 23:56:56.865823 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 8 23:56:56.865854 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 8 23:56:56.865869 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 8 23:56:56.865883 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 8 23:56:56.865907 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 8 23:56:56.888439 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 8 23:56:56.900595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:56:56.913063 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 8 23:56:56.922618 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 8 23:56:56.925938 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 8 23:56:56.938643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 8 23:56:56.940965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:56:56.941023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:56:56.944705 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:56:56.948241 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:56:56.950530 disk-uuid[555]: Primary Header is updated. Sep 8 23:56:56.950530 disk-uuid[555]: Secondary Entries is updated. Sep 8 23:56:56.950530 disk-uuid[555]: Secondary Header is updated. Sep 8 23:56:56.951313 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:56:56.954883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:56:56.959546 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:56:56.968806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:56:56.981684 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:56:57.009253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 8 23:56:57.174850 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 8 23:56:57.174927 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 8 23:56:57.174939 kernel: ata3.00: applying bridge limits Sep 8 23:56:57.174964 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 8 23:56:57.174975 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 8 23:56:57.174986 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 8 23:56:57.176552 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 8 23:56:57.176634 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 8 23:56:57.177543 kernel: ata3.00: configured for UDMA/100 Sep 8 23:56:57.179536 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 8 23:56:57.228077 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 8 23:56:57.228332 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 8 23:56:57.242551 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 8 23:56:57.961558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:56:57.962019 disk-uuid[557]: The operation has completed successfully. Sep 8 23:56:58.000744 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 8 23:56:58.000870 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 8 23:56:58.042733 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 8 23:56:58.048077 sh[598]: Success Sep 8 23:56:58.061555 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 8 23:56:58.100237 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 8 23:56:58.118331 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 8 23:56:58.122335 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 8 23:56:58.133434 kernel: BTRFS info (device dm-0): first mount of filesystem 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf Sep 8 23:56:58.133469 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:56:58.133489 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 8 23:56:58.133501 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 8 23:56:58.134759 kernel: BTRFS info (device dm-0): using free space tree Sep 8 23:56:58.140678 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 8 23:56:58.143123 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 8 23:56:58.149717 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 8 23:56:58.151701 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 8 23:56:58.170757 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:56:58.170814 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:56:58.170831 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:56:58.173609 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:56:58.178538 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:56:58.257838 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:56:58.296854 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 8 23:56:58.331137 systemd-networkd[774]: lo: Link UP Sep 8 23:56:58.331150 systemd-networkd[774]: lo: Gained carrier Sep 8 23:56:58.333023 systemd-networkd[774]: Enumeration completed Sep 8 23:56:58.333185 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:56:58.333457 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:56:58.333463 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:56:58.333631 systemd[1]: Reached target network.target - Network. Sep 8 23:56:58.334712 systemd-networkd[774]: eth0: Link UP Sep 8 23:56:58.334717 systemd-networkd[774]: eth0: Gained carrier Sep 8 23:56:58.334725 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:56:58.350644 systemd-networkd[774]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:56:58.453269 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 8 23:56:58.467669 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 8 23:56:58.518076 ignition[779]: Ignition 2.20.0 Sep 8 23:56:58.518091 ignition[779]: Stage: fetch-offline Sep 8 23:56:58.518141 ignition[779]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:58.518156 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:58.518294 ignition[779]: parsed url from cmdline: "" Sep 8 23:56:58.518300 ignition[779]: no config URL provided Sep 8 23:56:58.518307 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Sep 8 23:56:58.518321 ignition[779]: no config at "/usr/lib/ignition/user.ign" Sep 8 23:56:58.518355 ignition[779]: op(1): [started] loading QEMU firmware config module Sep 8 23:56:58.518362 ignition[779]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 8 23:56:58.527798 ignition[779]: op(1): [finished] loading QEMU firmware config module Sep 8 23:56:58.564638 ignition[779]: parsing config with SHA512: 9d171f494e7dc5f90cb4d51645f2c10242c938addbdacbfd77a5f3667b128972f1bb3f391182f88d97be746d81816d40899022383138e1fc85e0d75c3e243eec Sep 8 23:56:58.568462 unknown[779]: fetched base config from "system" Sep 8 23:56:58.568477 unknown[779]: fetched user config from "qemu" Sep 8 23:56:58.570288 ignition[779]: fetch-offline: fetch-offline passed Sep 8 23:56:58.570417 ignition[779]: Ignition finished successfully Sep 8 23:56:58.572841 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:56:58.575188 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 8 23:56:58.581713 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 8 23:56:58.600082 ignition[788]: Ignition 2.20.0 Sep 8 23:56:58.600094 ignition[788]: Stage: kargs Sep 8 23:56:58.600280 ignition[788]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:58.600295 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:58.601379 ignition[788]: kargs: kargs passed Sep 8 23:56:58.601448 ignition[788]: Ignition finished successfully Sep 8 23:56:58.607583 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 8 23:56:58.618663 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
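
The DHCPv4 lease above (10.0.0.98/16, gateway 10.0.0.1) can be sanity-checked with Python's ipaddress module, for example to confirm the gateway is on-link for that prefix:

import ipaddress

# The lease reported above: 10.0.0.98/16 with gateway 10.0.0.1.
iface = ipaddress.ip_interface("10.0.0.98/16")
print(iface.network)                                        # 10.0.0.0/16
print(iface.network.netmask)                                # 255.255.0.0
print(ipaddress.ip_address("10.0.0.1") in iface.network)    # True: gateway is on-link
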
Sep 8 23:56:58.641986 ignition[797]: Ignition 2.20.0 Sep 8 23:56:58.641998 ignition[797]: Stage: disks Sep 8 23:56:58.642206 ignition[797]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:58.642223 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:58.643433 ignition[797]: disks: disks passed Sep 8 23:56:58.643482 ignition[797]: Ignition finished successfully Sep 8 23:56:58.649423 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 8 23:56:58.650855 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 8 23:56:58.652816 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 8 23:56:58.654002 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:56:58.655923 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:56:58.656990 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:56:58.671665 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 8 23:56:58.694812 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 8 23:56:58.723575 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 8 23:56:58.738683 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 8 23:56:58.838542 kernel: EXT4-fs (vda9): mounted filesystem 4436772e-5166-41e3-9cb5-50bbb91cbcf6 r/w with ordered data mode. Quota mode: none. Sep 8 23:56:58.838928 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 8 23:56:58.840097 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 8 23:56:58.848602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:56:58.850642 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 8 23:56:58.851395 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 8 23:56:58.851436 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 8 23:56:58.863399 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (816) Sep 8 23:56:58.863428 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:56:58.863443 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:56:58.863457 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:56:58.863471 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:56:58.851462 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:56:58.865145 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:56:58.889486 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 8 23:56:58.892211 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 8 23:56:58.929396 initrd-setup-root[847]: cut: /sysroot/etc/passwd: No such file or directory Sep 8 23:56:58.934278 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory Sep 8 23:56:58.939013 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory Sep 8 23:56:58.943361 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory Sep 8 23:56:59.145903 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 8 23:56:59.156617 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 8 23:56:59.159303 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 8 23:56:59.167262 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 8 23:56:59.168857 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:56:59.215004 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 8 23:56:59.226375 ignition[937]: INFO : Ignition 2.20.0 Sep 8 23:56:59.226375 ignition[937]: INFO : Stage: mount Sep 8 23:56:59.228140 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:59.228140 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:59.230696 ignition[937]: INFO : mount: mount passed Sep 8 23:56:59.231446 ignition[937]: INFO : Ignition finished successfully Sep 8 23:56:59.234227 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 8 23:56:59.246673 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 8 23:56:59.255472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:56:59.268225 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (951) Sep 8 23:56:59.268275 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:56:59.268292 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:56:59.269084 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:56:59.272557 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:56:59.273863 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:56:59.306144 ignition[968]: INFO : Ignition 2.20.0 Sep 8 23:56:59.306144 ignition[968]: INFO : Stage: files Sep 8 23:56:59.308288 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:59.308288 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:59.308288 ignition[968]: DEBUG : files: compiled without relabeling support, skipping Sep 8 23:56:59.308288 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 8 23:56:59.308288 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 8 23:56:59.315675 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 8 23:56:59.315675 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 8 23:56:59.315675 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 8 23:56:59.315137 unknown[968]: wrote ssh authorized keys file for user: core Sep 8 23:56:59.320949 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 8 23:56:59.320949 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 8 23:56:59.366275 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 8 23:56:59.413715 systemd-networkd[774]: eth0: Gained IPv6LL Sep 8 23:57:00.026618 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 8 23:57:00.026618 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing 
file "/sysroot/opt/bin/cilium.tar.gz" Sep 8 23:57:00.030586 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 8 23:57:00.257415 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 8 23:57:00.552965 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 8 23:57:00.555334 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 8 23:57:00.555334 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 8 23:57:00.555334 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:57:00.555334 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:57:00.555334 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:57:00.555334 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:57:00.555334 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:57:00.567400 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:57:00.567400 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:57:00.570920 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:57:00.572652 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 8 23:57:00.575111 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 8 23:57:00.577464 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 8 23:57:00.579567 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 8 23:57:00.956551 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 8 23:57:01.812296 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 8 23:57:01.812296 ignition[968]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 8 23:57:01.816096 ignition[968]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:57:01.816096 ignition[968]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:57:01.816096 ignition[968]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 8 23:57:01.816096 ignition[968]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 8 23:57:01.816096 ignition[968]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:57:01.816096 ignition[968]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:57:01.816096 ignition[968]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 8 23:57:01.816096 ignition[968]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 8 23:57:01.853179 ignition[968]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:57:01.859100 ignition[968]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:57:01.861204 ignition[968]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 8 23:57:01.861204 ignition[968]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 8 23:57:01.861204 ignition[968]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 8 23:57:01.861204 ignition[968]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:57:01.861204 ignition[968]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:57:01.861204 ignition[968]: INFO : files: files passed Sep 8 23:57:01.861204 ignition[968]: INFO : Ignition finished successfully Sep 8 23:57:01.872991 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 8 23:57:01.885759 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 8 23:57:01.887831 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 8 23:57:01.889592 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 8 23:57:01.889743 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 8 23:57:01.897528 initrd-setup-root-after-ignition[997]: grep: /sysroot/oem/oem-release: No such file or directory Sep 8 23:57:01.900643 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:57:01.900643 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:57:01.903762 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:57:01.903954 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:57:01.906757 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 8 23:57:01.915668 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 8 23:57:01.942711 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 8 23:57:01.942836 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 8 23:57:01.944996 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 8 23:57:01.945317 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 8 23:57:01.945866 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 8 23:57:01.946717 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 8 23:57:01.970220 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:57:01.983711 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 8 23:57:01.995061 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:57:01.995618 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:57:01.995933 systemd[1]: Stopped target timers.target - Timer Units. Sep 8 23:57:01.996242 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 8 23:57:01.996368 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:57:01.997208 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 8 23:57:01.997541 systemd[1]: Stopped target basic.target - Basic System. Sep 8 23:57:01.997867 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 8 23:57:01.998174 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:57:01.998495 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 8 23:57:01.998848 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 8 23:57:01.999148 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:57:01.999474 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 8 23:57:01.999806 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 8 23:57:02.000114 systemd[1]: Stopped target swap.target - Swaps. Sep 8 23:57:02.000411 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 8 23:57:02.000551 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:57:02.001088 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:57:02.001406 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:57:02.001870 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 8 23:57:02.001997 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:57:02.002357 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 8 23:57:02.002478 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 8 23:57:02.003164 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 8 23:57:02.003286 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:57:02.003899 systemd[1]: Stopped target paths.target - Path Units. Sep 8 23:57:02.004126 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 8 23:57:02.007568 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:57:02.008001 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:57:02.008300 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:57:02.008796 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 8 23:57:02.060528 ignition[1023]: INFO : Ignition 2.20.0 Sep 8 23:57:02.060528 ignition[1023]: INFO : Stage: umount Sep 8 23:57:02.060528 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:57:02.060528 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:57:02.060528 ignition[1023]: INFO : umount: umount passed Sep 8 23:57:02.060528 ignition[1023]: INFO : Ignition finished successfully Sep 8 23:57:02.008900 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:57:02.009300 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:57:02.009387 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:57:02.009998 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:57:02.010114 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:57:02.010463 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:57:02.010612 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:57:02.011889 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 8 23:57:02.012149 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 8 23:57:02.012310 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:57:02.013813 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:57:02.014065 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:57:02.014223 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:57:02.014685 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:57:02.014828 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:57:02.021215 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:57:02.021361 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 8 23:57:02.040948 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:57:02.062165 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:57:02.062293 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:57:02.064949 systemd[1]: Stopped target network.target - Network. Sep 8 23:57:02.066267 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:57:02.066336 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:57:02.068106 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:57:02.068160 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:57:02.070005 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 8 23:57:02.070064 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 8 23:57:02.072009 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 8 23:57:02.072063 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 8 23:57:02.074052 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 8 23:57:02.075919 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 8 23:57:02.082696 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 8 23:57:02.082913 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 8 23:57:02.087419 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Sep 8 23:57:02.087698 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 8 23:57:02.087831 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 8 23:57:02.091476 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 8 23:57:02.092267 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 8 23:57:02.092342 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:57:02.105686 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 8 23:57:02.107184 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 8 23:57:02.107259 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:57:02.109384 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:57:02.109437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:57:02.111593 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 8 23:57:02.111644 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 8 23:57:02.113689 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 8 23:57:02.113741 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:57:02.116039 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:57:02.118991 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 8 23:57:02.119077 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:57:02.127881 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 8 23:57:02.128007 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 8 23:57:02.134420 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 8 23:57:02.134677 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:57:02.136189 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 8 23:57:02.136238 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 8 23:57:02.138178 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 8 23:57:02.138218 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:57:02.140463 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 8 23:57:02.140529 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:57:02.142940 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 8 23:57:02.142989 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 8 23:57:02.145230 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:57:02.145288 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:57:02.157738 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 8 23:57:02.159036 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 8 23:57:02.159127 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:57:02.162485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:57:02.162591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 8 23:57:02.166958 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 8 23:57:02.167048 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:57:02.167508 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 8 23:57:02.167694 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 8 23:57:02.311993 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 8 23:57:02.312166 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 8 23:57:02.314316 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 8 23:57:02.315977 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 8 23:57:02.316061 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 8 23:57:02.328754 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 8 23:57:02.337706 systemd[1]: Switching root. Sep 8 23:57:02.369728 systemd-journald[194]: Journal stopped Sep 8 23:57:05.029073 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Sep 8 23:57:05.029195 kernel: SELinux: policy capability network_peer_controls=1 Sep 8 23:57:05.029216 kernel: SELinux: policy capability open_perms=1 Sep 8 23:57:05.029233 kernel: SELinux: policy capability extended_socket_class=1 Sep 8 23:57:05.029252 kernel: SELinux: policy capability always_check_network=0 Sep 8 23:57:05.029268 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 8 23:57:05.029285 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 8 23:57:05.029302 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 8 23:57:05.029319 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 8 23:57:05.029336 kernel: audit: type=1403 audit(1757375823.004:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 8 23:57:05.029354 systemd[1]: Successfully loaded SELinux policy in 56.185ms. Sep 8 23:57:05.029416 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.104ms. Sep 8 23:57:05.029439 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:57:05.029458 systemd[1]: Detected virtualization kvm. Sep 8 23:57:05.029482 systemd[1]: Detected architecture x86-64. Sep 8 23:57:05.029507 systemd[1]: Detected first boot. Sep 8 23:57:05.029542 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:57:05.029560 zram_generator::config[1069]: No configuration found. Sep 8 23:57:05.029586 kernel: Guest personality initialized and is inactive Sep 8 23:57:05.029603 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 8 23:57:05.029620 kernel: Initialized host personality Sep 8 23:57:05.029641 kernel: NET: Registered PF_VSOCK protocol family Sep 8 23:57:05.029659 systemd[1]: Populated /etc with preset unit settings. Sep 8 23:57:05.029678 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 8 23:57:05.029696 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 8 23:57:05.029713 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 8 23:57:05.029732 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Sep 8 23:57:05.029750 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 8 23:57:05.029768 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 8 23:57:05.029791 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 8 23:57:05.029809 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 8 23:57:05.029827 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 8 23:57:05.029859 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 8 23:57:05.029877 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 8 23:57:05.029894 systemd[1]: Created slice user.slice - User and Session Slice. Sep 8 23:57:05.029913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:57:05.029931 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:57:05.029949 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 8 23:57:05.029972 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 8 23:57:05.029989 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 8 23:57:05.030008 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:57:05.030026 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 8 23:57:05.030044 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:57:05.030062 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 8 23:57:05.030080 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 8 23:57:05.030097 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 8 23:57:05.030118 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 8 23:57:05.030136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:57:05.030165 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:57:05.030183 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:57:05.030200 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:57:05.030218 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 8 23:57:05.030236 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 8 23:57:05.030268 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 8 23:57:05.030285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:57:05.030305 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:57:05.030321 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:57:05.030338 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 8 23:57:05.030352 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 8 23:57:05.030367 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 8 23:57:05.030381 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 8 23:57:05.030395 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:57:05.030409 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 8 23:57:05.030424 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 8 23:57:05.030445 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 8 23:57:05.030462 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 8 23:57:05.030479 systemd[1]: Reached target machines.target - Containers. Sep 8 23:57:05.030506 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 8 23:57:05.030539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:57:05.030557 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:57:05.030574 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 8 23:57:05.030590 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:57:05.030611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:57:05.030629 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:57:05.030657 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 8 23:57:05.030675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:57:05.030690 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 8 23:57:05.030704 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 8 23:57:05.030719 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 8 23:57:05.030733 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 8 23:57:05.030751 systemd[1]: Stopped systemd-fsck-usr.service. Sep 8 23:57:05.030765 kernel: loop: module loaded Sep 8 23:57:05.030780 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:57:05.030794 kernel: fuse: init (API version 7.39) Sep 8 23:57:05.030808 kernel: ACPI: bus type drm_connector registered Sep 8 23:57:05.030821 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:57:05.030835 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:57:05.030850 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:57:05.030864 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 8 23:57:05.030882 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 8 23:57:05.030895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:57:05.030952 systemd-journald[1144]: Collecting audit messages is disabled. Sep 8 23:57:05.030979 systemd[1]: verity-setup.service: Deactivated successfully. Sep 8 23:57:05.030996 systemd[1]: Stopped verity-setup.service. 
Sep 8 23:57:05.031010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:57:05.031035 systemd-journald[1144]: Journal started Sep 8 23:57:05.031066 systemd-journald[1144]: Runtime Journal (/run/log/journal/9d32e83acd68400fbe890514b4f12c77) is 6M, max 48.2M, 42.2M free. Sep 8 23:57:04.412853 systemd[1]: Queued start job for default target multi-user.target. Sep 8 23:57:04.433241 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 8 23:57:04.435644 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 8 23:57:05.037180 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:57:05.038762 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 8 23:57:05.040292 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 8 23:57:05.041941 systemd[1]: Mounted media.mount - External Media Directory. Sep 8 23:57:05.043435 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 8 23:57:05.045136 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 8 23:57:05.046761 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 8 23:57:05.048839 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 8 23:57:05.061196 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:57:05.064737 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 8 23:57:05.065125 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 8 23:57:05.069869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:57:05.070323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:57:05.074110 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:57:05.074829 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:57:05.077055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:57:05.077409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:57:05.086182 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 8 23:57:05.086491 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 8 23:57:05.088433 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:57:05.088815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:57:05.091051 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:57:05.093096 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:57:05.095359 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 8 23:57:05.097967 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 8 23:57:05.128382 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:57:05.140068 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 8 23:57:05.157652 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 8 23:57:05.161733 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Sep 8 23:57:05.161808 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:57:05.169359 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 8 23:57:05.182797 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 8 23:57:05.188510 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 8 23:57:05.192690 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:57:05.196852 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 8 23:57:05.201715 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 8 23:57:05.206752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:57:05.211686 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 8 23:57:05.213249 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:57:05.217459 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:57:05.231377 systemd-journald[1144]: Time spent on flushing to /var/log/journal/9d32e83acd68400fbe890514b4f12c77 is 42.270ms for 1060 entries. Sep 8 23:57:05.231377 systemd-journald[1144]: System Journal (/var/log/journal/9d32e83acd68400fbe890514b4f12c77) is 8M, max 195.6M, 187.6M free. Sep 8 23:57:06.592356 systemd-journald[1144]: Received client request to flush runtime journal. Sep 8 23:57:06.592475 kernel: loop0: detected capacity change from 0 to 229808 Sep 8 23:57:06.592508 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 8 23:57:05.227981 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 8 23:57:05.241281 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 8 23:57:05.246898 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 8 23:57:05.248417 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 8 23:57:05.250956 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 8 23:57:05.274264 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:57:05.298678 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 8 23:57:05.347081 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 8 23:57:05.351005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:57:05.353079 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 8 23:57:05.359852 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 8 23:57:05.952771 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 8 23:57:05.961089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:57:06.601358 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 8 23:57:06.607466 udevadm[1197]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Sep 8 23:57:06.623216 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 8 23:57:06.624369 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 8 23:57:06.627892 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Sep 8 23:57:06.627919 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Sep 8 23:57:06.637622 kernel: loop1: detected capacity change from 0 to 147912 Sep 8 23:57:06.639226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:57:06.692549 kernel: loop2: detected capacity change from 0 to 138176 Sep 8 23:57:06.729586 kernel: loop3: detected capacity change from 0 to 229808 Sep 8 23:57:06.744586 kernel: loop4: detected capacity change from 0 to 147912 Sep 8 23:57:06.768570 kernel: loop5: detected capacity change from 0 to 138176 Sep 8 23:57:06.787487 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 8 23:57:06.788471 (sd-merge)[1213]: Merged extensions into '/usr'. Sep 8 23:57:06.794744 systemd[1]: Reload requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)... Sep 8 23:57:06.794773 systemd[1]: Reloading... Sep 8 23:57:06.862000 ldconfig[1184]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 8 23:57:06.876575 zram_generator::config[1247]: No configuration found. Sep 8 23:57:07.036689 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:57:07.132932 systemd[1]: Reloading finished in 337 ms. Sep 8 23:57:07.157753 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 8 23:57:07.159929 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 8 23:57:07.161979 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 8 23:57:07.184464 systemd[1]: Starting ensure-sysext.service... Sep 8 23:57:07.187032 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:57:07.190027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:57:07.201910 systemd[1]: Reload requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)... Sep 8 23:57:07.201928 systemd[1]: Reloading... Sep 8 23:57:07.225129 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 8 23:57:07.225649 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 8 23:57:07.227253 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 8 23:57:07.227742 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Sep 8 23:57:07.227882 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Sep 8 23:57:07.233802 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:57:07.233822 systemd-tmpfiles[1280]: Skipping /boot Sep 8 23:57:07.234931 systemd-udevd[1281]: Using default interface naming scheme 'v255'. Sep 8 23:57:07.254027 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 8 23:57:07.254047 systemd-tmpfiles[1280]: Skipping /boot Sep 8 23:57:07.276552 zram_generator::config[1313]: No configuration found. Sep 8 23:57:07.364542 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1351) Sep 8 23:57:07.403540 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 8 23:57:07.409579 kernel: ACPI: button: Power Button [PWRF] Sep 8 23:57:07.423159 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 8 23:57:07.423539 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 8 23:57:07.423729 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 8 23:57:07.425317 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 8 23:57:07.430546 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 8 23:57:07.447192 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:57:07.521556 kernel: mousedev: PS/2 mouse device common for all mice Sep 8 23:57:07.549915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:57:07.551561 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 8 23:57:07.551848 systemd[1]: Reloading finished in 349 ms. Sep 8 23:57:07.559030 kernel: kvm_amd: TSC scaling supported Sep 8 23:57:07.559075 kernel: kvm_amd: Nested Virtualization enabled Sep 8 23:57:07.559098 kernel: kvm_amd: Nested Paging enabled Sep 8 23:57:07.559124 kernel: kvm_amd: LBR virtualization supported Sep 8 23:57:07.560866 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 8 23:57:07.560885 kernel: kvm_amd: Virtual GIF supported Sep 8 23:57:07.572511 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:57:07.589398 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:57:07.590844 kernel: EDAC MC: Ver: 3.0.0 Sep 8 23:57:07.617743 systemd[1]: Finished ensure-sysext.service. Sep 8 23:57:07.623084 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 8 23:57:07.649271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:57:07.663675 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:57:07.667009 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 8 23:57:07.668562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:57:07.669975 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 8 23:57:07.674668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:57:07.681783 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:57:07.686775 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:57:07.690765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:57:07.692608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 8 23:57:07.695391 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 8 23:57:07.697850 lvm[1383]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:57:07.696948 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:57:07.698589 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 8 23:57:07.704894 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:57:07.708610 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:57:07.713805 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 8 23:57:07.719648 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 8 23:57:07.723153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:57:07.724337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:57:07.726223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:57:07.726760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:57:07.729125 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:57:07.729447 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:57:07.735056 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 8 23:57:07.736989 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:57:07.737211 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:57:07.738974 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:57:07.739200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:57:07.746025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 8 23:57:07.748077 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 8 23:57:07.760810 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:57:07.770102 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 8 23:57:07.770319 augenrules[1425]: No rules Sep 8 23:57:07.771587 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:57:07.771943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:57:07.774155 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 8 23:57:07.779389 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:57:07.780643 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 8 23:57:07.783846 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:57:07.784344 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:57:07.800148 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 8 23:57:07.801498 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 8 23:57:07.813085 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:57:07.815082 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 8 23:57:07.817593 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 8 23:57:07.822487 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 8 23:57:07.864037 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 8 23:57:07.956227 systemd-resolved[1398]: Positive Trust Anchors: Sep 8 23:57:07.956245 systemd-resolved[1398]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:57:07.956276 systemd-resolved[1398]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:57:07.959974 systemd-networkd[1396]: lo: Link UP Sep 8 23:57:07.959991 systemd-networkd[1396]: lo: Gained carrier Sep 8 23:57:07.960662 systemd-resolved[1398]: Defaulting to hostname 'linux'. Sep 8 23:57:07.962558 systemd-networkd[1396]: Enumeration completed Sep 8 23:57:07.962799 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:57:07.963688 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:57:07.963700 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:57:07.964482 systemd-networkd[1396]: eth0: Link UP Sep 8 23:57:07.964493 systemd-networkd[1396]: eth0: Gained carrier Sep 8 23:57:07.964507 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:57:07.964570 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:57:07.966647 systemd[1]: Reached target network.target - Network. Sep 8 23:57:07.967999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:57:07.985870 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 8 23:57:07.989182 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 8 23:57:07.990583 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:57:07.990859 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 8 23:57:07.992248 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection. Sep 8 23:57:07.992495 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:57:09.578347 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Sep 8 23:57:09.578396 systemd-timesyncd[1399]: Initial clock synchronization to Mon 2025-09-08 23:57:09.578232 UTC. Sep 8 23:57:09.578640 systemd-resolved[1398]: Clock change detected. Flushing caches. Sep 8 23:57:09.579061 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 8 23:57:09.580745 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 8 23:57:09.582357 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 8 23:57:09.584120 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 8 23:57:09.584159 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:57:09.585212 systemd[1]: Reached target time-set.target - System Time Set. Sep 8 23:57:09.586717 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 8 23:57:09.588206 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 8 23:57:09.589671 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:57:09.591901 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 8 23:57:09.595856 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 8 23:57:09.601022 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 8 23:57:09.602988 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 8 23:57:09.604599 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 8 23:57:09.610038 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 8 23:57:09.611957 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 8 23:57:09.614803 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 8 23:57:09.616576 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 8 23:57:09.620107 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:57:09.621418 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:57:09.622644 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:57:09.622681 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:57:09.632764 systemd[1]: Starting containerd.service - containerd container runtime... Sep 8 23:57:09.636718 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 8 23:57:09.639874 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 8 23:57:09.642925 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 8 23:57:09.644232 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 8 23:57:09.648013 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 8 23:57:09.654598 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 8 23:57:09.658780 jq[1457]: false Sep 8 23:57:09.658491 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 8 23:57:09.663146 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 8 23:57:09.673565 extend-filesystems[1458]: Found loop3 Sep 8 23:57:09.673565 extend-filesystems[1458]: Found loop4 Sep 8 23:57:09.673565 extend-filesystems[1458]: Found loop5 Sep 8 23:57:09.673565 extend-filesystems[1458]: Found sr0 Sep 8 23:57:09.673565 extend-filesystems[1458]: Found vda Sep 8 23:57:09.673565 extend-filesystems[1458]: Found vda1 Sep 8 23:57:09.673565 extend-filesystems[1458]: Found vda2 Sep 8 23:57:09.673565 extend-filesystems[1458]: Found vda3 Sep 8 23:57:09.673565 extend-filesystems[1458]: Found usr Sep 8 23:57:09.673565 extend-filesystems[1458]: Found vda4 Sep 8 23:57:09.673565 extend-filesystems[1458]: Found vda6 Sep 8 23:57:09.673565 extend-filesystems[1458]: Found vda7 Sep 8 23:57:09.673565 extend-filesystems[1458]: Found vda9 Sep 8 23:57:09.673565 extend-filesystems[1458]: Checking size of /dev/vda9 Sep 8 23:57:09.675219 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 8 23:57:09.699321 dbus-daemon[1456]: [system] SELinux support is enabled Sep 8 23:57:09.677857 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 8 23:57:09.678596 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 8 23:57:09.704917 jq[1473]: true Sep 8 23:57:09.680797 systemd[1]: Starting update-engine.service - Update Engine... Sep 8 23:57:09.685672 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 8 23:57:09.692691 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 8 23:57:09.693019 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 8 23:57:09.693756 systemd[1]: motdgen.service: Deactivated successfully. Sep 8 23:57:09.694080 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 8 23:57:09.695997 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 8 23:57:09.696993 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 8 23:57:09.701055 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 8 23:57:09.714086 jq[1478]: true Sep 8 23:57:09.719962 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 8 23:57:09.727813 update_engine[1471]: I20250908 23:57:09.724934 1471 main.cc:92] Flatcar Update Engine starting Sep 8 23:57:09.727813 update_engine[1471]: I20250908 23:57:09.726485 1471 update_check_scheduler.cc:74] Next update check in 11m16s Sep 8 23:57:09.727713 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 8 23:57:09.727776 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 8 23:57:09.729755 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 8 23:57:09.729787 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 8 23:57:09.740062 systemd[1]: Started update-engine.service - Update Engine. Sep 8 23:57:09.746140 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 8 23:57:09.774272 extend-filesystems[1458]: Resized partition /dev/vda9 Sep 8 23:57:09.778813 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024) Sep 8 23:57:09.784227 systemd-logind[1467]: Watching system buttons on /dev/input/event1 (Power Button) Sep 8 23:57:09.784263 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 8 23:57:09.785408 systemd-logind[1467]: New seat seat0. Sep 8 23:57:09.790269 systemd[1]: Started systemd-logind.service - User Login Management. Sep 8 23:57:09.798520 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1352) Sep 8 23:57:09.853759 tar[1477]: linux-amd64/LICENSE Sep 8 23:57:09.853759 tar[1477]: linux-amd64/helm Sep 8 23:57:09.883834 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 8 23:57:10.100579 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 8 23:57:10.176120 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 8 23:57:10.203503 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 8 23:57:10.205637 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 8 23:57:10.219004 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 8 23:57:10.227897 systemd[1]: issuegen.service: Deactivated successfully. Sep 8 23:57:10.228173 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 8 23:57:10.231224 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 8 23:57:10.237023 containerd[1481]: time="2025-09-08T23:57:10.235082710Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 8 23:57:10.237306 extend-filesystems[1507]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 8 23:57:10.237306 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 8 23:57:10.237306 extend-filesystems[1507]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 8 23:57:10.240281 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 8 23:57:10.241178 extend-filesystems[1458]: Resized filesystem in /dev/vda9 Sep 8 23:57:10.242066 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Sep 8 23:57:10.244388 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 8 23:57:10.244700 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 8 23:57:10.247993 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 8 23:57:10.255442 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 8 23:57:10.265810 containerd[1481]: time="2025-09-08T23:57:10.265750614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:57:10.265816 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 8 23:57:10.268072 containerd[1481]: time="2025-09-08T23:57:10.267986257Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:57:10.268072 containerd[1481]: time="2025-09-08T23:57:10.268029739Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 8 23:57:10.268072 containerd[1481]: time="2025-09-08T23:57:10.268052131Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 8 23:57:10.268251 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 8 23:57:10.269695 containerd[1481]: time="2025-09-08T23:57:10.268279748Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 8 23:57:10.269793 systemd[1]: Reached target getty.target - Login Prompts. Sep 8 23:57:10.271597 containerd[1481]: time="2025-09-08T23:57:10.271572714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 8 23:57:10.271712 containerd[1481]: time="2025-09-08T23:57:10.271691197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:57:10.271712 containerd[1481]: time="2025-09-08T23:57:10.271708429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:57:10.272013 containerd[1481]: time="2025-09-08T23:57:10.271984246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:57:10.272013 containerd[1481]: time="2025-09-08T23:57:10.272004124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 8 23:57:10.272071 containerd[1481]: time="2025-09-08T23:57:10.272018721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:57:10.272071 containerd[1481]: time="2025-09-08T23:57:10.272029301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 8 23:57:10.272158 containerd[1481]: time="2025-09-08T23:57:10.272139137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:57:10.272509 containerd[1481]: time="2025-09-08T23:57:10.272475969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:57:10.273024 containerd[1481]: time="2025-09-08T23:57:10.273000302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:57:10.273024 containerd[1481]: time="2025-09-08T23:57:10.273020790Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 8 23:57:10.273171 containerd[1481]: time="2025-09-08T23:57:10.273152337Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 8 23:57:10.273242 containerd[1481]: time="2025-09-08T23:57:10.273225525Z" level=info msg="metadata content store policy set" policy=shared Sep 8 23:57:10.280589 containerd[1481]: time="2025-09-08T23:57:10.280553700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 8 23:57:10.280625 containerd[1481]: time="2025-09-08T23:57:10.280598103Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 8 23:57:10.280625 containerd[1481]: time="2025-09-08T23:57:10.280613412Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 8 23:57:10.280663 containerd[1481]: time="2025-09-08T23:57:10.280628070Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 8 23:57:10.280663 containerd[1481]: time="2025-09-08T23:57:10.280642376Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 8 23:57:10.280823 containerd[1481]: time="2025-09-08T23:57:10.280788040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 8 23:57:10.281041 containerd[1481]: time="2025-09-08T23:57:10.281016538Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 8 23:57:10.281162 containerd[1481]: time="2025-09-08T23:57:10.281139168Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 8 23:57:10.281191 containerd[1481]: time="2025-09-08T23:57:10.281169886Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 8 23:57:10.281191 containerd[1481]: time="2025-09-08T23:57:10.281185756Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 8 23:57:10.281228 containerd[1481]: time="2025-09-08T23:57:10.281199852Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 8 23:57:10.281228 containerd[1481]: time="2025-09-08T23:57:10.281213808Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 8 23:57:10.281228 containerd[1481]: time="2025-09-08T23:57:10.281225691Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 8 23:57:10.281289 containerd[1481]: time="2025-09-08T23:57:10.281239987Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 8 23:57:10.281289 containerd[1481]: time="2025-09-08T23:57:10.281253793Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 8 23:57:10.281289 containerd[1481]: time="2025-09-08T23:57:10.281265615Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 8 23:57:10.281289 containerd[1481]: time="2025-09-08T23:57:10.281278049Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 8 23:57:10.281289 containerd[1481]: time="2025-09-08T23:57:10.281289661Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 8 23:57:10.281386 containerd[1481]: time="2025-09-08T23:57:10.281322192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281386 containerd[1481]: time="2025-09-08T23:57:10.281337250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281386 containerd[1481]: time="2025-09-08T23:57:10.281349383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281386 containerd[1481]: time="2025-09-08T23:57:10.281361054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281386 containerd[1481]: time="2025-09-08T23:57:10.281373678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281386 containerd[1481]: time="2025-09-08T23:57:10.281386192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281551 containerd[1481]: time="2025-09-08T23:57:10.281399777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281551 containerd[1481]: time="2025-09-08T23:57:10.281431627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281551 containerd[1481]: time="2025-09-08T23:57:10.281444942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281551 containerd[1481]: time="2025-09-08T23:57:10.281458006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281551 containerd[1481]: time="2025-09-08T23:57:10.281470229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281551 containerd[1481]: time="2025-09-08T23:57:10.281483244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281551 containerd[1481]: time="2025-09-08T23:57:10.281494555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281551 containerd[1481]: time="2025-09-08T23:57:10.281508651Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 8 23:57:10.281551 containerd[1481]: time="2025-09-08T23:57:10.281527437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281736 containerd[1481]: time="2025-09-08T23:57:10.281570167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281736 containerd[1481]: time="2025-09-08T23:57:10.281582530Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 8 23:57:10.281736 containerd[1481]: time="2025-09-08T23:57:10.281627755Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 8 23:57:10.281736 containerd[1481]: time="2025-09-08T23:57:10.281641661Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 8 23:57:10.281736 containerd[1481]: time="2025-09-08T23:57:10.281651509Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 8 23:57:10.281736 containerd[1481]: time="2025-09-08T23:57:10.281662540Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 8 23:57:10.281736 containerd[1481]: time="2025-09-08T23:57:10.281673360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.281736 containerd[1481]: time="2025-09-08T23:57:10.281684732Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 8 23:57:10.281736 containerd[1481]: time="2025-09-08T23:57:10.281694881Z" level=info msg="NRI interface is disabled by configuration." Sep 8 23:57:10.281736 containerd[1481]: time="2025-09-08T23:57:10.281704649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 8 23:57:10.282025 containerd[1481]: time="2025-09-08T23:57:10.281976479Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 8 23:57:10.282025 containerd[1481]: time="2025-09-08T23:57:10.282023056Z" level=info msg="Connect containerd service" Sep 8 23:57:10.282260 containerd[1481]: time="2025-09-08T23:57:10.282079171Z" level=info msg="using legacy CRI server" Sep 8 23:57:10.282260 containerd[1481]: time="2025-09-08T23:57:10.282088168Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 8 23:57:10.282260 containerd[1481]: time="2025-09-08T23:57:10.282189338Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 8 23:57:10.282953 containerd[1481]: time="2025-09-08T23:57:10.282918325Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:57:10.283140 containerd[1481]: time="2025-09-08T23:57:10.283084617Z" level=info msg="Start subscribing containerd event" Sep 8 23:57:10.283187 containerd[1481]: time="2025-09-08T23:57:10.283171250Z" level=info msg="Start recovering state" Sep 8 23:57:10.283231 containerd[1481]: time="2025-09-08T23:57:10.283211666Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 8 23:57:10.283483 containerd[1481]: time="2025-09-08T23:57:10.283272139Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 8 23:57:10.283483 containerd[1481]: time="2025-09-08T23:57:10.283285985Z" level=info msg="Start event monitor" Sep 8 23:57:10.283483 containerd[1481]: time="2025-09-08T23:57:10.283336219Z" level=info msg="Start snapshots syncer" Sep 8 23:57:10.283483 containerd[1481]: time="2025-09-08T23:57:10.283350566Z" level=info msg="Start cni network conf syncer for default" Sep 8 23:57:10.283483 containerd[1481]: time="2025-09-08T23:57:10.283359683Z" level=info msg="Start streaming server" Sep 8 23:57:10.283483 containerd[1481]: time="2025-09-08T23:57:10.283464760Z" level=info msg="containerd successfully booted in 0.074643s" Sep 8 23:57:10.283529 systemd[1]: Started containerd.service - containerd container runtime. Sep 8 23:57:10.481351 tar[1477]: linux-amd64/README.md Sep 8 23:57:10.505944 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 8 23:57:10.866746 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 8 23:57:10.878869 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:36264.service - OpenSSH per-connection server daemon (10.0.0.1:36264). Sep 8 23:57:10.925089 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 36264 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:57:10.927099 sshd-session[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:10.933872 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 8 23:57:10.951740 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 8 23:57:10.958934 systemd-logind[1467]: New session 1 of user core. Sep 8 23:57:10.965033 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 8 23:57:10.979797 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 8 23:57:10.984790 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 8 23:57:10.987829 systemd-logind[1467]: New session c1 of user core. Sep 8 23:57:11.135368 systemd[1551]: Queued start job for default target default.target. Sep 8 23:57:11.144843 systemd[1551]: Created slice app.slice - User Application Slice. Sep 8 23:57:11.144868 systemd[1551]: Reached target paths.target - Paths. Sep 8 23:57:11.144910 systemd[1551]: Reached target timers.target - Timers. Sep 8 23:57:11.146502 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 8 23:57:11.176496 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 8 23:57:11.176661 systemd[1551]: Reached target sockets.target - Sockets. Sep 8 23:57:11.176711 systemd[1551]: Reached target basic.target - Basic System. Sep 8 23:57:11.176757 systemd[1551]: Reached target default.target - Main User Target. Sep 8 23:57:11.176794 systemd[1551]: Startup finished in 181ms. Sep 8 23:57:11.177215 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 8 23:57:11.179999 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 8 23:57:11.251855 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:36268.service - OpenSSH per-connection server daemon (10.0.0.1:36268). Sep 8 23:57:11.289034 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 36268 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:57:11.291021 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:11.295515 systemd-logind[1467]: New session 2 of user core. Sep 8 23:57:11.304703 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 8 23:57:11.359384 sshd[1564]: Connection closed by 10.0.0.1 port 36268 Sep 8 23:57:11.359778 sshd-session[1562]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:11.377599 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:36268.service: Deactivated successfully. Sep 8 23:57:11.380320 systemd[1]: session-2.scope: Deactivated successfully. Sep 8 23:57:11.381964 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit. Sep 8 23:57:11.393908 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:36278.service - OpenSSH per-connection server daemon (10.0.0.1:36278). Sep 8 23:57:11.396790 systemd-logind[1467]: Removed session 2. Sep 8 23:57:11.430142 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 36278 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:57:11.432270 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:11.437187 systemd-logind[1467]: New session 3 of user core. Sep 8 23:57:11.454857 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 8 23:57:11.494755 systemd-networkd[1396]: eth0: Gained IPv6LL Sep 8 23:57:11.498197 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 8 23:57:11.499995 systemd[1]: Reached target network-online.target - Network is Online. Sep 8 23:57:11.513863 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:57:11.516746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:11.520242 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Sep 8 23:57:11.526977 sshd[1572]: Connection closed by 10.0.0.1 port 36278 Sep 8 23:57:11.527810 sshd-session[1569]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:11.533911 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:36278.service: Deactivated successfully. Sep 8 23:57:11.536887 systemd[1]: session-3.scope: Deactivated successfully. Sep 8 23:57:11.539375 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:57:11.539962 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:57:11.541992 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit. Sep 8 23:57:11.544991 systemd-logind[1467]: Removed session 3. Sep 8 23:57:11.545006 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 8 23:57:11.546760 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 8 23:57:12.889074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:12.890903 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 8 23:57:12.892251 systemd[1]: Startup finished in 1.202s (kernel) + 7.276s (initrd) + 8.356s (userspace) = 16.835s. Sep 8 23:57:12.893260 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:57:13.631832 kubelet[1600]: E0908 23:57:13.631689 1600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:57:13.636317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:57:13.636608 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:57:13.637059 systemd[1]: kubelet.service: Consumed 1.968s CPU time, 271.5M memory peak. Sep 8 23:57:21.538370 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:45080.service - OpenSSH per-connection server daemon (10.0.0.1:45080). Sep 8 23:57:21.576872 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 45080 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:57:21.578522 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:21.583632 systemd-logind[1467]: New session 4 of user core. Sep 8 23:57:21.597761 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 8 23:57:21.652210 sshd[1615]: Connection closed by 10.0.0.1 port 45080 Sep 8 23:57:21.652598 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:21.664212 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:45080.service: Deactivated successfully. Sep 8 23:57:21.666013 systemd[1]: session-4.scope: Deactivated successfully. Sep 8 23:57:21.667376 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Sep 8 23:57:21.668663 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:45084.service - OpenSSH per-connection server daemon (10.0.0.1:45084). Sep 8 23:57:21.669398 systemd-logind[1467]: Removed session 4. 
Sep 8 23:57:21.711374 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 45084 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:57:21.712771 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:21.717097 systemd-logind[1467]: New session 5 of user core. Sep 8 23:57:21.727674 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 8 23:57:21.776755 sshd[1623]: Connection closed by 10.0.0.1 port 45084 Sep 8 23:57:21.777124 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:21.789735 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:45084.service: Deactivated successfully. Sep 8 23:57:21.791977 systemd[1]: session-5.scope: Deactivated successfully. Sep 8 23:57:21.793501 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Sep 8 23:57:21.808841 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:45088.service - OpenSSH per-connection server daemon (10.0.0.1:45088). Sep 8 23:57:21.809990 systemd-logind[1467]: Removed session 5. Sep 8 23:57:21.843033 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 45088 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:57:21.844562 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:21.848799 systemd-logind[1467]: New session 6 of user core. Sep 8 23:57:21.863670 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 8 23:57:21.916739 sshd[1631]: Connection closed by 10.0.0.1 port 45088 Sep 8 23:57:21.917158 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:21.934290 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:45088.service: Deactivated successfully. Sep 8 23:57:21.936435 systemd[1]: session-6.scope: Deactivated successfully. Sep 8 23:57:21.938028 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Sep 8 23:57:21.939644 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:45092.service - OpenSSH per-connection server daemon (10.0.0.1:45092). Sep 8 23:57:21.940599 systemd-logind[1467]: Removed session 6. Sep 8 23:57:21.977628 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 45092 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:57:21.979029 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:21.983460 systemd-logind[1467]: New session 7 of user core. Sep 8 23:57:21.993672 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 8 23:57:22.097687 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 8 23:57:22.098107 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:57:22.122330 sudo[1640]: pam_unix(sudo:session): session closed for user root Sep 8 23:57:22.123974 sshd[1639]: Connection closed by 10.0.0.1 port 45092 Sep 8 23:57:22.124453 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:22.138264 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:45092.service: Deactivated successfully. Sep 8 23:57:22.140351 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:57:22.141899 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Sep 8 23:57:22.143456 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:45106.service - OpenSSH per-connection server daemon (10.0.0.1:45106). Sep 8 23:57:22.144432 systemd-logind[1467]: Removed session 7. 
Sep 8 23:57:22.192914 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 45106 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:57:22.194649 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:22.199087 systemd-logind[1467]: New session 8 of user core. Sep 8 23:57:22.208685 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 8 23:57:22.263225 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 8 23:57:22.263585 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:57:22.267260 sudo[1650]: pam_unix(sudo:session): session closed for user root Sep 8 23:57:22.273642 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 8 23:57:22.274060 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:57:22.291862 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:57:22.321153 augenrules[1672]: No rules Sep 8 23:57:22.323019 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:57:22.323336 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:57:22.324545 sudo[1649]: pam_unix(sudo:session): session closed for user root Sep 8 23:57:22.326049 sshd[1648]: Connection closed by 10.0.0.1 port 45106 Sep 8 23:57:22.326437 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:22.338433 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:45106.service: Deactivated successfully. Sep 8 23:57:22.340503 systemd[1]: session-8.scope: Deactivated successfully. Sep 8 23:57:22.342004 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. Sep 8 23:57:22.351838 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:45122.service - OpenSSH per-connection server daemon (10.0.0.1:45122). Sep 8 23:57:22.353432 systemd-logind[1467]: Removed session 8. Sep 8 23:57:22.386067 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 45122 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:57:22.387641 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:22.392583 systemd-logind[1467]: New session 9 of user core. Sep 8 23:57:22.402686 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 8 23:57:22.455707 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 8 23:57:22.456044 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:57:23.280793 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 8 23:57:23.280952 (dockerd)[1704]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 8 23:57:23.887009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 8 23:57:23.890161 dockerd[1704]: time="2025-09-08T23:57:23.889510032Z" level=info msg="Starting up" Sep 8 23:57:23.953857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:24.576751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 8 23:57:24.576972 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:57:24.591093 dockerd[1704]: time="2025-09-08T23:57:24.590988656Z" level=info msg="Loading containers: start." Sep 8 23:57:24.642150 kubelet[1734]: E0908 23:57:24.642054 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:57:24.649344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:57:24.649578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:57:24.649974 systemd[1]: kubelet.service: Consumed 603ms CPU time, 111.4M memory peak. Sep 8 23:57:24.803570 kernel: Initializing XFRM netlink socket Sep 8 23:57:24.906490 systemd-networkd[1396]: docker0: Link UP Sep 8 23:57:24.948701 dockerd[1704]: time="2025-09-08T23:57:24.948633355Z" level=info msg="Loading containers: done." Sep 8 23:57:24.968889 dockerd[1704]: time="2025-09-08T23:57:24.968823556Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 8 23:57:24.969071 dockerd[1704]: time="2025-09-08T23:57:24.968955924Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 8 23:57:24.969126 dockerd[1704]: time="2025-09-08T23:57:24.969107448Z" level=info msg="Daemon has completed initialization" Sep 8 23:57:25.009690 dockerd[1704]: time="2025-09-08T23:57:25.009608166Z" level=info msg="API listen on /run/docker.sock" Sep 8 23:57:25.009826 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 8 23:57:26.046983 containerd[1481]: time="2025-09-08T23:57:26.046923704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 8 23:57:27.021293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394266702.mount: Deactivated successfully. 
Sep 8 23:57:28.266254 containerd[1481]: time="2025-09-08T23:57:28.266162612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:28.266753 containerd[1481]: time="2025-09-08T23:57:28.266622825Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 8 23:57:28.267882 containerd[1481]: time="2025-09-08T23:57:28.267845759Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:28.270666 containerd[1481]: time="2025-09-08T23:57:28.270617518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:28.271594 containerd[1481]: time="2025-09-08T23:57:28.271554485Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 2.22458176s" Sep 8 23:57:28.271594 containerd[1481]: time="2025-09-08T23:57:28.271591645Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 8 23:57:28.272221 containerd[1481]: time="2025-09-08T23:57:28.272194736Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 8 23:57:29.882730 containerd[1481]: time="2025-09-08T23:57:29.882652434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:29.883441 containerd[1481]: time="2025-09-08T23:57:29.883351816Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 8 23:57:29.884624 containerd[1481]: time="2025-09-08T23:57:29.884581503Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:29.887457 containerd[1481]: time="2025-09-08T23:57:29.887409287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:29.888464 containerd[1481]: time="2025-09-08T23:57:29.888415144Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.616183529s" Sep 8 23:57:29.888506 containerd[1481]: time="2025-09-08T23:57:29.888464857Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 8 23:57:29.889246 containerd[1481]: 
time="2025-09-08T23:57:29.888984191Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 8 23:57:31.756835 containerd[1481]: time="2025-09-08T23:57:31.756751899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:31.758637 containerd[1481]: time="2025-09-08T23:57:31.758565150Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 8 23:57:31.760028 containerd[1481]: time="2025-09-08T23:57:31.759997397Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:31.767000 containerd[1481]: time="2025-09-08T23:57:31.766957372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:31.768274 containerd[1481]: time="2025-09-08T23:57:31.768212276Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.879188771s" Sep 8 23:57:31.768322 containerd[1481]: time="2025-09-08T23:57:31.768279502Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 8 23:57:31.768941 containerd[1481]: time="2025-09-08T23:57:31.768905646Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 8 23:57:33.131666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838495673.mount: Deactivated successfully. 
Sep 8 23:57:33.867240 containerd[1481]: time="2025-09-08T23:57:33.867174529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:33.867970 containerd[1481]: time="2025-09-08T23:57:33.867915960Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 8 23:57:33.869205 containerd[1481]: time="2025-09-08T23:57:33.869155685Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:33.871414 containerd[1481]: time="2025-09-08T23:57:33.871363176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:33.872161 containerd[1481]: time="2025-09-08T23:57:33.872127259Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.103182249s" Sep 8 23:57:33.872161 containerd[1481]: time="2025-09-08T23:57:33.872156654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 8 23:57:33.872814 containerd[1481]: time="2025-09-08T23:57:33.872631785Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 8 23:57:34.775996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 8 23:57:34.783779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:34.791078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992984233.mount: Deactivated successfully. Sep 8 23:57:34.962880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:34.967763 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:57:35.189601 kubelet[2000]: E0908 23:57:35.188070 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:57:35.193250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:57:35.193459 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:57:35.193903 systemd[1]: kubelet.service: Consumed 342ms CPU time, 110.9M memory peak. 
Sep 8 23:57:36.534080 containerd[1481]: time="2025-09-08T23:57:36.534003818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:36.534831 containerd[1481]: time="2025-09-08T23:57:36.534755999Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 8 23:57:36.536043 containerd[1481]: time="2025-09-08T23:57:36.536003409Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:36.539365 containerd[1481]: time="2025-09-08T23:57:36.539324188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:36.540748 containerd[1481]: time="2025-09-08T23:57:36.540696662Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.668036965s" Sep 8 23:57:36.540793 containerd[1481]: time="2025-09-08T23:57:36.540747748Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 8 23:57:36.541295 containerd[1481]: time="2025-09-08T23:57:36.541267854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 8 23:57:37.125618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200046101.mount: Deactivated successfully. 
Sep 8 23:57:37.131336 containerd[1481]: time="2025-09-08T23:57:37.131284747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:37.132143 containerd[1481]: time="2025-09-08T23:57:37.132094586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 8 23:57:37.133368 containerd[1481]: time="2025-09-08T23:57:37.133319824Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:37.135558 containerd[1481]: time="2025-09-08T23:57:37.135501967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:37.136455 containerd[1481]: time="2025-09-08T23:57:37.136436229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 595.137749ms" Sep 8 23:57:37.136507 containerd[1481]: time="2025-09-08T23:57:37.136460595Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 8 23:57:37.136920 containerd[1481]: time="2025-09-08T23:57:37.136894880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 8 23:57:37.741496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388582668.mount: Deactivated successfully. Sep 8 23:57:40.843410 containerd[1481]: time="2025-09-08T23:57:40.843326496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:40.844360 containerd[1481]: time="2025-09-08T23:57:40.844314860Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 8 23:57:40.845824 containerd[1481]: time="2025-09-08T23:57:40.845780860Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:40.850328 containerd[1481]: time="2025-09-08T23:57:40.850266824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:40.851388 containerd[1481]: time="2025-09-08T23:57:40.851350447Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.71443058s" Sep 8 23:57:40.851388 containerd[1481]: time="2025-09-08T23:57:40.851382457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 8 23:57:44.578309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 8 23:57:44.578477 systemd[1]: kubelet.service: Consumed 342ms CPU time, 110.9M memory peak. Sep 8 23:57:44.592750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:44.619110 systemd[1]: Reload requested from client PID 2145 ('systemctl') (unit session-9.scope)... Sep 8 23:57:44.619127 systemd[1]: Reloading... Sep 8 23:57:44.747660 zram_generator::config[2200]: No configuration found. Sep 8 23:57:45.102354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:57:45.208560 systemd[1]: Reloading finished in 589 ms. Sep 8 23:57:45.260851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:45.264869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:45.265939 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:57:45.266258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:45.266307 systemd[1]: kubelet.service: Consumed 169ms CPU time, 98.2M memory peak. Sep 8 23:57:45.284869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:45.449730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:45.454777 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:57:45.494954 kubelet[2239]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:57:45.495456 kubelet[2239]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:57:45.495456 kubelet[2239]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 8 23:57:45.495657 kubelet[2239]: I0908 23:57:45.495566 2239 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:57:45.720267 kubelet[2239]: I0908 23:57:45.720140 2239 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 8 23:57:45.720429 kubelet[2239]: I0908 23:57:45.720397 2239 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:57:45.720979 kubelet[2239]: I0908 23:57:45.720952 2239 server.go:956] "Client rotation is on, will bootstrap in background" Sep 8 23:57:45.746609 kubelet[2239]: E0908 23:57:45.746529 2239 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 8 23:57:45.746922 kubelet[2239]: I0908 23:57:45.746875 2239 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:57:45.752158 kubelet[2239]: E0908 23:57:45.752112 2239 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:57:45.752233 kubelet[2239]: I0908 23:57:45.752163 2239 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:57:45.758379 kubelet[2239]: I0908 23:57:45.758362 2239 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:57:45.758713 kubelet[2239]: I0908 23:57:45.758675 2239 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:57:45.758887 kubelet[2239]: I0908 23:57:45.758701 2239 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:57:45.759003 kubelet[2239]: I0908 23:57:45.758895 2239 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:57:45.759003 kubelet[2239]: I0908 23:57:45.758905 2239 container_manager_linux.go:303] "Creating device plugin manager" Sep 8 23:57:45.759055 kubelet[2239]: I0908 23:57:45.759046 2239 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:57:45.760780 kubelet[2239]: I0908 23:57:45.760748 2239 kubelet.go:480] "Attempting to sync node with API server" Sep 8 23:57:45.760780 kubelet[2239]: I0908 23:57:45.760772 2239 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:57:45.760858 kubelet[2239]: I0908 23:57:45.760799 2239 kubelet.go:386] "Adding apiserver pod source" Sep 8 23:57:45.760858 kubelet[2239]: I0908 23:57:45.760818 2239 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:57:45.767698 kubelet[2239]: I0908 23:57:45.767098 2239 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:57:45.767698 kubelet[2239]: I0908 23:57:45.767602 2239 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 8 23:57:45.768730 kubelet[2239]: W0908 23:57:45.768704 2239 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
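
The nodeConfig dump above is hard to read inline; the interesting part is the set of hard eviction thresholds. A small sketch that decodes the logged HardEvictionThresholds fragment (copied from the entry above) and prints each signal, making the defaults visible: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%.

```go
// Decode the HardEvictionThresholds fragment from the nodeConfig log entry
// above and print each signal with its threshold.
package main

import (
	"encoding/json"
	"fmt"
)

type value struct {
	Quantity   *string `json:"Quantity"` // absolute quantity, e.g. "100Mi", or null
	Percentage float64 `json:"Percentage"`
}

type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    value  `json:"Value"`
}

const logged = `[
  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
  {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
  {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
]`

func main() {
	var ts []threshold
	if err := json.Unmarshal([]byte(logged), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```
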
Sep 8 23:57:45.769719 kubelet[2239]: E0908 23:57:45.769030 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 8 23:57:45.770020 kubelet[2239]: E0908 23:57:45.769764 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 8 23:57:45.772309 kubelet[2239]: I0908 23:57:45.772278 2239 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:57:45.772380 kubelet[2239]: I0908 23:57:45.772366 2239 server.go:1289] "Started kubelet" Sep 8 23:57:45.773008 kubelet[2239]: I0908 23:57:45.772570 2239 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:57:45.774985 kubelet[2239]: I0908 23:57:45.773578 2239 server.go:317] "Adding debug handlers to kubelet server" Sep 8 23:57:45.774985 kubelet[2239]: I0908 23:57:45.774164 2239 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:57:45.775649 kubelet[2239]: I0908 23:57:45.775171 2239 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:57:45.775649 kubelet[2239]: I0908 23:57:45.775512 2239 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:57:45.776158 kubelet[2239]: I0908 23:57:45.776136 2239 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:57:45.777339 kubelet[2239]: E0908 23:57:45.776278 2239 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18637409f833fca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:57:45.772309671 +0000 UTC m=+0.312878417,LastTimestamp:2025-09-08 23:57:45.772309671 +0000 UTC m=+0.312878417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:57:45.777523 kubelet[2239]: E0908 23:57:45.777355 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:45.777523 kubelet[2239]: I0908 23:57:45.777387 2239 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:57:45.777605 kubelet[2239]: I0908 23:57:45.777570 2239 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:57:45.777684 kubelet[2239]: I0908 23:57:45.777667 2239 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:57:45.778058 kubelet[2239]: E0908 23:57:45.778029 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 8 23:57:45.778333 kubelet[2239]: I0908 23:57:45.778314 2239 factory.go:223] Registration of the systemd container factory successfully Sep 8 23:57:45.778411 kubelet[2239]: I0908 23:57:45.778394 2239 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:57:45.779441 kubelet[2239]: E0908 23:57:45.779216 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms" Sep 8 23:57:45.779441 kubelet[2239]: E0908 23:57:45.779349 2239 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:57:45.780180 kubelet[2239]: I0908 23:57:45.780148 2239 factory.go:223] Registration of the containerd container factory successfully Sep 8 23:57:45.798691 kubelet[2239]: I0908 23:57:45.798634 2239 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 8 23:57:45.798740 kubelet[2239]: I0908 23:57:45.798697 2239 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:57:45.798740 kubelet[2239]: I0908 23:57:45.798712 2239 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:57:45.798740 kubelet[2239]: I0908 23:57:45.798733 2239 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:57:45.800379 kubelet[2239]: I0908 23:57:45.800345 2239 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 8 23:57:45.800379 kubelet[2239]: I0908 23:57:45.800374 2239 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 8 23:57:45.800522 kubelet[2239]: I0908 23:57:45.800402 2239 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 8 23:57:45.800522 kubelet[2239]: I0908 23:57:45.800415 2239 kubelet.go:2436] "Starting kubelet main sync loop" Sep 8 23:57:45.800522 kubelet[2239]: E0908 23:57:45.800483 2239 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:57:45.801394 kubelet[2239]: E0908 23:57:45.801359 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 8 23:57:45.878315 kubelet[2239]: E0908 23:57:45.878241 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:45.901633 kubelet[2239]: E0908 23:57:45.901583 2239 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:57:45.978527 kubelet[2239]: E0908 23:57:45.978427 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:45.979960 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:46.079400 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:46.102528 2239 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:46.179969 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:46.280899 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:46.380955 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:46.380979 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:46.481523 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:46.502756 2239 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:46.582310 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:46.707727 kubelet[2239]: E0908 23:57:46.603999 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 8 23:57:46.708385 kubelet[2239]: E0908 23:57:46.682731 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:46.744619 kubelet[2239]: I0908 23:57:46.744588 2239 policy_none.go:49] "None policy: Start" Sep 8 23:57:46.744686 kubelet[2239]: I0908 23:57:46.744641 2239 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:57:46.744686 kubelet[2239]: I0908 23:57:46.744670 2239 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:57:46.771602 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 8 23:57:46.785057 kubelet[2239]: E0908 23:57:46.783737 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:46.788668 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 8 23:57:46.801808 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 8 23:57:46.813133 kubelet[2239]: E0908 23:57:46.812822 2239 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 8 23:57:46.813133 kubelet[2239]: I0908 23:57:46.813139 2239 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:57:46.813391 kubelet[2239]: I0908 23:57:46.813160 2239 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:57:46.813608 kubelet[2239]: I0908 23:57:46.813472 2239 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:57:46.814518 kubelet[2239]: E0908 23:57:46.814481 2239 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 8 23:57:46.814608 kubelet[2239]: E0908 23:57:46.814586 2239 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 8 23:57:46.871919 kubelet[2239]: E0908 23:57:46.871853 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 8 23:57:46.915031 kubelet[2239]: I0908 23:57:46.914967 2239 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:57:46.915513 kubelet[2239]: E0908 23:57:46.915450 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Sep 8 23:57:47.035738 kubelet[2239]: E0908 23:57:47.035470 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 8 23:57:47.117677 kubelet[2239]: I0908 23:57:47.117610 2239 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:57:47.118162 kubelet[2239]: E0908 23:57:47.118125 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Sep 8 23:57:47.182172 kubelet[2239]: E0908 23:57:47.182120 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="1.6s" Sep 8 23:57:47.315577 kubelet[2239]: E0908 23:57:47.315283 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 8 23:57:47.317248 systemd[1]: Created slice kubepods-burstable-pod138f9e05645100ef0e19ea5fab0d522b.slice - libcontainer container kubepods-burstable-pod138f9e05645100ef0e19ea5fab0d522b.slice. Sep 8 23:57:47.342419 kubelet[2239]: E0908 23:57:47.342376 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:47.345760 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 8 23:57:47.356960 kubelet[2239]: E0908 23:57:47.356917 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:47.358639 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 8 23:57:47.360480 kubelet[2239]: E0908 23:57:47.360433 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:47.387964 kubelet[2239]: I0908 23:57:47.387892 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/138f9e05645100ef0e19ea5fab0d522b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"138f9e05645100ef0e19ea5fab0d522b\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:47.387964 kubelet[2239]: I0908 23:57:47.387946 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:47.387964 kubelet[2239]: I0908 23:57:47.387962 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:47.387964 kubelet[2239]: I0908 23:57:47.387976 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:47.388198 kubelet[2239]: I0908 23:57:47.387997 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:47.388198 kubelet[2239]: I0908 23:57:47.388012 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:47.388198 kubelet[2239]: I0908 23:57:47.388026 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/138f9e05645100ef0e19ea5fab0d522b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"138f9e05645100ef0e19ea5fab0d522b\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:47.388198 kubelet[2239]: I0908 23:57:47.388039 2239 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/138f9e05645100ef0e19ea5fab0d522b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"138f9e05645100ef0e19ea5fab0d522b\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:47.388198 kubelet[2239]: I0908 23:57:47.388054 2239 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:57:47.519979 kubelet[2239]: I0908 23:57:47.519921 2239 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:57:47.520421 kubelet[2239]: E0908 23:57:47.520378 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Sep 8 23:57:47.643159 kubelet[2239]: E0908 23:57:47.643023 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:47.643882 containerd[1481]: time="2025-09-08T23:57:47.643845614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:138f9e05645100ef0e19ea5fab0d522b,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:47.658116 kubelet[2239]: E0908 23:57:47.658084 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:47.658639 containerd[1481]: time="2025-09-08T23:57:47.658601363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:47.661855 kubelet[2239]: E0908 23:57:47.661834 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:47.662148 containerd[1481]: time="2025-09-08T23:57:47.662119701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:47.860471 kubelet[2239]: E0908 23:57:47.860414 2239 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 8 23:57:48.252322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301420210.mount: Deactivated successfully. 
Sep 8 23:57:48.258321 containerd[1481]: time="2025-09-08T23:57:48.258277257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:57:48.260388 containerd[1481]: time="2025-09-08T23:57:48.260317524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 8 23:57:48.263240 containerd[1481]: time="2025-09-08T23:57:48.263181273Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:57:48.264773 containerd[1481]: time="2025-09-08T23:57:48.264747295Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:57:48.265884 containerd[1481]: time="2025-09-08T23:57:48.265859810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:57:48.266744 containerd[1481]: time="2025-09-08T23:57:48.266708221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:57:48.267732 containerd[1481]: time="2025-09-08T23:57:48.267694775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:57:48.268574 containerd[1481]: time="2025-09-08T23:57:48.268546331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:57:48.269683 containerd[1481]: time="2025-09-08T23:57:48.269633448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 625.684346ms" Sep 8 23:57:48.273363 containerd[1481]: time="2025-09-08T23:57:48.273201333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 611.01744ms" Sep 8 23:57:48.276161 containerd[1481]: time="2025-09-08T23:57:48.276129396Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 617.428082ms" Sep 8 23:57:48.342208 kubelet[2239]: I0908 23:57:48.341803 2239 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:57:48.342208 kubelet[2239]: E0908 23:57:48.342160 2239 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: 
connect: connection refused" node="localhost" Sep 8 23:57:48.572238 containerd[1481]: time="2025-09-08T23:57:48.569888218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:57:48.572238 containerd[1481]: time="2025-09-08T23:57:48.571767829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:57:48.572238 containerd[1481]: time="2025-09-08T23:57:48.571795161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:48.572238 containerd[1481]: time="2025-09-08T23:57:48.571944025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:48.582586 containerd[1481]: time="2025-09-08T23:57:48.582413422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:57:48.583322 containerd[1481]: time="2025-09-08T23:57:48.583239279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:57:48.583546 containerd[1481]: time="2025-09-08T23:57:48.583482224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:57:48.583604 containerd[1481]: time="2025-09-08T23:57:48.583523793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:48.583712 containerd[1481]: time="2025-09-08T23:57:48.583665444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:48.584333 containerd[1481]: time="2025-09-08T23:57:48.584039759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:57:48.584333 containerd[1481]: time="2025-09-08T23:57:48.584060438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:48.584333 containerd[1481]: time="2025-09-08T23:57:48.584281750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:48.607560 systemd[1]: Started cri-containerd-af87f5ab6009d8345210dd834b9c0c8071ffcc2f7888bf53a99ce3d69898b991.scope - libcontainer container af87f5ab6009d8345210dd834b9c0c8071ffcc2f7888bf53a99ce3d69898b991. Sep 8 23:57:48.608902 kubelet[2239]: E0908 23:57:48.608869 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 8 23:57:48.637465 systemd[1]: Started cri-containerd-a82c0590219c357f9a530938e3942ab7231d7632bf7d0f389d299c416c2ad96d.scope - libcontainer container a82c0590219c357f9a530938e3942ab7231d7632bf7d0f389d299c416c2ad96d. 
Sep 8 23:57:48.705943 systemd[1]: Started cri-containerd-dd23ebb94a563d1caf99c16ad20d618ed33c61e08903c4dc40fc74aa329b6159.scope - libcontainer container dd23ebb94a563d1caf99c16ad20d618ed33c61e08903c4dc40fc74aa329b6159. Sep 8 23:57:48.747952 containerd[1481]: time="2025-09-08T23:57:48.747901983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"af87f5ab6009d8345210dd834b9c0c8071ffcc2f7888bf53a99ce3d69898b991\"" Sep 8 23:57:48.749492 kubelet[2239]: E0908 23:57:48.749314 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:48.755698 containerd[1481]: time="2025-09-08T23:57:48.755660552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a82c0590219c357f9a530938e3942ab7231d7632bf7d0f389d299c416c2ad96d\"" Sep 8 23:57:48.756239 kubelet[2239]: E0908 23:57:48.756207 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:48.765757 containerd[1481]: time="2025-09-08T23:57:48.765720527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:138f9e05645100ef0e19ea5fab0d522b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd23ebb94a563d1caf99c16ad20d618ed33c61e08903c4dc40fc74aa329b6159\"" Sep 8 23:57:48.766525 kubelet[2239]: E0908 23:57:48.766481 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:48.782894 kubelet[2239]: E0908 23:57:48.782850 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="3.2s" Sep 8 23:57:48.880806 containerd[1481]: time="2025-09-08T23:57:48.880316220Z" level=info msg="CreateContainer within sandbox \"af87f5ab6009d8345210dd834b9c0c8071ffcc2f7888bf53a99ce3d69898b991\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 8 23:57:49.031033 containerd[1481]: time="2025-09-08T23:57:49.030932091Z" level=info msg="CreateContainer within sandbox \"a82c0590219c357f9a530938e3942ab7231d7632bf7d0f389d299c416c2ad96d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 8 23:57:49.103324 containerd[1481]: time="2025-09-08T23:57:49.103190012Z" level=info msg="CreateContainer within sandbox \"dd23ebb94a563d1caf99c16ad20d618ed33c61e08903c4dc40fc74aa329b6159\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 8 23:57:49.129193 containerd[1481]: time="2025-09-08T23:57:49.129116307Z" level=info msg="CreateContainer within sandbox \"a82c0590219c357f9a530938e3942ab7231d7632bf7d0f389d299c416c2ad96d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2750083cbfdaaa781b418b6cc1404563339b45aaa9899224ec0493a98e89fc4c\"" Sep 8 23:57:49.130033 containerd[1481]: time="2025-09-08T23:57:49.130004201Z" level=info msg="StartContainer for \"2750083cbfdaaa781b418b6cc1404563339b45aaa9899224ec0493a98e89fc4c\"" Sep 8 23:57:49.133480 
containerd[1481]: time="2025-09-08T23:57:49.133350626Z" level=info msg="CreateContainer within sandbox \"af87f5ab6009d8345210dd834b9c0c8071ffcc2f7888bf53a99ce3d69898b991\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c90f8b024c41e2f49dc6dde2b78958a7a1066960b5ed44ae15f0afff2720f708\"" Sep 8 23:57:49.133914 containerd[1481]: time="2025-09-08T23:57:49.133882521Z" level=info msg="StartContainer for \"c90f8b024c41e2f49dc6dde2b78958a7a1066960b5ed44ae15f0afff2720f708\"" Sep 8 23:57:49.134966 containerd[1481]: time="2025-09-08T23:57:49.134923918Z" level=info msg="CreateContainer within sandbox \"dd23ebb94a563d1caf99c16ad20d618ed33c61e08903c4dc40fc74aa329b6159\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f40385aa84ccda8167918878321bda92071b678b718eff010406089969fe2f8a\"" Sep 8 23:57:49.135522 containerd[1481]: time="2025-09-08T23:57:49.135492182Z" level=info msg="StartContainer for \"f40385aa84ccda8167918878321bda92071b678b718eff010406089969fe2f8a\"" Sep 8 23:57:49.163713 systemd[1]: Started cri-containerd-2750083cbfdaaa781b418b6cc1404563339b45aaa9899224ec0493a98e89fc4c.scope - libcontainer container 2750083cbfdaaa781b418b6cc1404563339b45aaa9899224ec0493a98e89fc4c. Sep 8 23:57:49.168707 systemd[1]: Started cri-containerd-c90f8b024c41e2f49dc6dde2b78958a7a1066960b5ed44ae15f0afff2720f708.scope - libcontainer container c90f8b024c41e2f49dc6dde2b78958a7a1066960b5ed44ae15f0afff2720f708. Sep 8 23:57:49.170749 systemd[1]: Started cri-containerd-f40385aa84ccda8167918878321bda92071b678b718eff010406089969fe2f8a.scope - libcontainer container f40385aa84ccda8167918878321bda92071b678b718eff010406089969fe2f8a. Sep 8 23:57:49.241418 kubelet[2239]: E0908 23:57:49.241337 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 8 23:57:49.250141 kubelet[2239]: E0908 23:57:49.250059 2239 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 8 23:57:49.273886 containerd[1481]: time="2025-09-08T23:57:49.273824356Z" level=info msg="StartContainer for \"c90f8b024c41e2f49dc6dde2b78958a7a1066960b5ed44ae15f0afff2720f708\" returns successfully" Sep 8 23:57:49.274043 containerd[1481]: time="2025-09-08T23:57:49.274012665Z" level=info msg="StartContainer for \"2750083cbfdaaa781b418b6cc1404563339b45aaa9899224ec0493a98e89fc4c\" returns successfully" Sep 8 23:57:49.274043 containerd[1481]: time="2025-09-08T23:57:49.274038104Z" level=info msg="StartContainer for \"f40385aa84ccda8167918878321bda92071b678b718eff010406089969fe2f8a\" returns successfully" Sep 8 23:57:49.815667 kubelet[2239]: E0908 23:57:49.815437 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:49.815667 kubelet[2239]: E0908 23:57:49.815584 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:49.819935 
kubelet[2239]: E0908 23:57:49.819798 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:49.819935 kubelet[2239]: E0908 23:57:49.819888 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:49.820385 kubelet[2239]: E0908 23:57:49.820309 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:49.820517 kubelet[2239]: E0908 23:57:49.820475 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:49.943949 kubelet[2239]: I0908 23:57:49.943828 2239 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:57:50.584887 kubelet[2239]: I0908 23:57:50.584834 2239 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:57:50.584887 kubelet[2239]: E0908 23:57:50.584871 2239 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 8 23:57:50.591829 kubelet[2239]: E0908 23:57:50.591795 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:50.692670 kubelet[2239]: E0908 23:57:50.692174 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:50.792569 kubelet[2239]: E0908 23:57:50.792484 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:50.821671 kubelet[2239]: E0908 23:57:50.821638 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:50.821815 kubelet[2239]: E0908 23:57:50.821752 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:50.821890 kubelet[2239]: E0908 23:57:50.821871 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:50.822060 kubelet[2239]: E0908 23:57:50.822034 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:50.822147 kubelet[2239]: E0908 23:57:50.822119 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:50.823283 kubelet[2239]: E0908 23:57:50.823248 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:50.893601 kubelet[2239]: E0908 23:57:50.893499 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:50.994382 kubelet[2239]: E0908 23:57:50.994316 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Sep 8 23:57:51.095163 kubelet[2239]: E0908 23:57:51.095114 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:51.196009 kubelet[2239]: E0908 23:57:51.195949 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:51.296308 kubelet[2239]: E0908 23:57:51.296242 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:51.396972 kubelet[2239]: E0908 23:57:51.396911 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:51.497775 kubelet[2239]: E0908 23:57:51.497623 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:51.598387 kubelet[2239]: E0908 23:57:51.598342 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:51.699521 kubelet[2239]: E0908 23:57:51.699466 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:51.799913 kubelet[2239]: E0908 23:57:51.799781 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:51.822952 kubelet[2239]: E0908 23:57:51.822915 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:51.823147 kubelet[2239]: E0908 23:57:51.823017 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:51.823147 kubelet[2239]: E0908 23:57:51.823038 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:51.823147 kubelet[2239]: E0908 23:57:51.823112 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:51.900519 kubelet[2239]: E0908 23:57:51.900460 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:52.001555 kubelet[2239]: E0908 23:57:52.001474 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:52.102564 kubelet[2239]: E0908 23:57:52.102414 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:52.203318 kubelet[2239]: E0908 23:57:52.203269 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:52.304316 kubelet[2239]: E0908 23:57:52.304248 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:52.405089 kubelet[2239]: E0908 23:57:52.404941 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:52.505738 kubelet[2239]: E0908 23:57:52.505675 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:52.606435 kubelet[2239]: E0908 
23:57:52.606390 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:52.707363 kubelet[2239]: E0908 23:57:52.707335 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:52.808015 kubelet[2239]: E0908 23:57:52.807952 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:52.908304 kubelet[2239]: E0908 23:57:52.908233 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:53.009173 kubelet[2239]: E0908 23:57:53.009007 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:53.110084 kubelet[2239]: E0908 23:57:53.110013 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:53.210473 kubelet[2239]: E0908 23:57:53.210388 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:53.311212 kubelet[2239]: E0908 23:57:53.310998 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:53.412043 kubelet[2239]: E0908 23:57:53.411953 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:53.513101 kubelet[2239]: E0908 23:57:53.513010 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:53.614406 kubelet[2239]: E0908 23:57:53.614140 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:53.714513 kubelet[2239]: E0908 23:57:53.714438 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:53.815114 kubelet[2239]: E0908 23:57:53.815053 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:53.916153 kubelet[2239]: E0908 23:57:53.916106 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:54.016974 kubelet[2239]: E0908 23:57:54.016921 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:54.117806 kubelet[2239]: E0908 23:57:54.117743 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:54.219041 kubelet[2239]: E0908 23:57:54.218887 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:54.224792 kubelet[2239]: E0908 23:57:54.224766 2239 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:57:54.224959 kubelet[2239]: E0908 23:57:54.224941 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:54.320073 kubelet[2239]: E0908 23:57:54.320017 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:54.420763 kubelet[2239]: E0908 
23:57:54.420698 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:54.521518 kubelet[2239]: E0908 23:57:54.521331 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:54.622262 kubelet[2239]: E0908 23:57:54.622179 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:54.723230 kubelet[2239]: E0908 23:57:54.723186 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:54.823906 kubelet[2239]: E0908 23:57:54.823771 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:54.924572 kubelet[2239]: E0908 23:57:54.924521 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:55.025238 kubelet[2239]: E0908 23:57:55.025180 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:55.058318 update_engine[1471]: I20250908 23:57:55.058230 1471 update_attempter.cc:509] Updating boot flags... Sep 8 23:57:55.125882 kubelet[2239]: E0908 23:57:55.125766 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:55.226609 kubelet[2239]: E0908 23:57:55.226551 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:55.286798 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2536) Sep 8 23:57:55.326972 kubelet[2239]: E0908 23:57:55.326927 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:55.338667 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2538) Sep 8 23:57:55.401509 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2538) Sep 8 23:57:55.429698 kubelet[2239]: E0908 23:57:55.429647 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:55.530800 kubelet[2239]: E0908 23:57:55.530731 2239 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:55.679476 kubelet[2239]: I0908 23:57:55.679410 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:55.770248 kubelet[2239]: I0908 23:57:55.770125 2239 apiserver.go:52] "Watching apiserver" Sep 8 23:57:55.778263 kubelet[2239]: I0908 23:57:55.778218 2239 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:57:56.173201 kubelet[2239]: I0908 23:57:56.173167 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:56.197837 kubelet[2239]: I0908 23:57:56.197773 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:56.417250 kubelet[2239]: E0908 23:57:56.417030 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:56.417250 kubelet[2239]: 
E0908 23:57:56.417032 2239 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:56.417250 kubelet[2239]: E0908 23:57:56.417192 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:56.417571 kubelet[2239]: I0908 23:57:56.417525 2239 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:57:56.830937 kubelet[2239]: E0908 23:57:56.830898 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:56.831423 kubelet[2239]: E0908 23:57:56.830957 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:56.866526 kubelet[2239]: E0908 23:57:56.866480 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:57.888904 systemd[1]: Reload requested from client PID 2546 ('systemctl') (unit session-9.scope)... Sep 8 23:57:57.888919 systemd[1]: Reloading... Sep 8 23:57:57.982589 zram_generator::config[2593]: No configuration found. Sep 8 23:57:58.101564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:57:58.239683 systemd[1]: Reloading finished in 350 ms. Sep 8 23:57:58.270054 kubelet[2239]: I0908 23:57:58.269961 2239 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:57:58.270275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:58.298513 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:57:58.298985 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:58.299061 systemd[1]: kubelet.service: Consumed 980ms CPU time, 132.8M memory peak. Sep 8 23:57:58.309925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:58.533062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:58.537902 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:57:58.580489 kubelet[2635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:57:58.580489 kubelet[2635]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:57:58.580489 kubelet[2635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
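
Once the apiserver is reachable, the kubelet tries to create mirror pods for its static pods, and "pods \"kube-apiserver-localhost\" already exists" is simply the benign AlreadyExists case. A rough client-go sketch of that tolerate-AlreadyExists pattern; the kubeconfig path and the pod spec are assumptions for illustration, and the kubelet's own mirror-pod manager is more involved than this.

```go
// Create a pod and treat an AlreadyExists error as success, mirroring the
// "Failed creating a mirror pod ... already exists" message above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the node in question.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	mirror := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "kube-apiserver-localhost", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			NodeName:   "localhost",
			Containers: []corev1.Container{{Name: "kube-apiserver", Image: "registry.k8s.io/pause:3.8"}}, // placeholder spec
		},
	}
	_, err = client.CoreV1().Pods("kube-system").Create(context.Background(), mirror, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		fmt.Println("mirror pod already exists; nothing to do")
	} else if err != nil {
		panic(err)
	}
}
```
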
Sep 8 23:57:58.580975 kubelet[2635]: I0908 23:57:58.580566 2635 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:57:58.588019 kubelet[2635]: I0908 23:57:58.587977 2635 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 8 23:57:58.588019 kubelet[2635]: I0908 23:57:58.588014 2635 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:57:58.588843 kubelet[2635]: I0908 23:57:58.588808 2635 server.go:956] "Client rotation is on, will bootstrap in background" Sep 8 23:57:58.590126 kubelet[2635]: I0908 23:57:58.590097 2635 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 8 23:57:58.593815 kubelet[2635]: I0908 23:57:58.593410 2635 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:57:58.596838 kubelet[2635]: E0908 23:57:58.596787 2635 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:57:58.596963 kubelet[2635]: I0908 23:57:58.596936 2635 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:57:58.604636 kubelet[2635]: I0908 23:57:58.604615 2635 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 8 23:57:58.605028 kubelet[2635]: I0908 23:57:58.604991 2635 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:57:58.605190 kubelet[2635]: I0908 23:57:58.605019 2635 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:57:58.605277 kubelet[2635]: I0908 23:57:58.605197 2635 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 8 23:57:58.605277 kubelet[2635]: I0908 23:57:58.605207 2635 container_manager_linux.go:303] "Creating device plugin manager" Sep 8 23:57:58.605277 kubelet[2635]: I0908 23:57:58.605272 2635 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:57:58.605478 kubelet[2635]: I0908 23:57:58.605456 2635 kubelet.go:480] "Attempting to sync node with API server" Sep 8 23:57:58.605516 kubelet[2635]: I0908 23:57:58.605488 2635 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:57:58.605516 kubelet[2635]: I0908 23:57:58.605512 2635 kubelet.go:386] "Adding apiserver pod source" Sep 8 23:57:58.605600 kubelet[2635]: I0908 23:57:58.605527 2635 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:57:58.609685 kubelet[2635]: I0908 23:57:58.609638 2635 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:57:58.610903 kubelet[2635]: I0908 23:57:58.610483 2635 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 8 23:57:58.616997 kubelet[2635]: I0908 23:57:58.615796 2635 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:57:58.616997 kubelet[2635]: I0908 23:57:58.615902 2635 server.go:1289] "Started kubelet" Sep 8 23:57:58.616997 kubelet[2635]: I0908 23:57:58.616115 2635 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:57:58.616997 kubelet[2635]: I0908 23:57:58.616306 2635 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:57:58.618688 kubelet[2635]: I0908 23:57:58.617221 2635 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:57:58.618688 kubelet[2635]: I0908 23:57:58.618369 2635 server.go:317] "Adding debug handlers to kubelet server" Sep 8 23:57:58.621622 kubelet[2635]: I0908 23:57:58.621601 2635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:57:58.622286 kubelet[2635]: I0908 23:57:58.622257 2635 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:57:58.622449 kubelet[2635]: I0908 23:57:58.622393 2635 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:57:58.622479 kubelet[2635]: I0908 23:57:58.622461 2635 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:57:58.622569 kubelet[2635]: I0908 23:57:58.622517 2635 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:57:58.624245 kubelet[2635]: I0908 23:57:58.624207 2635 factory.go:223] Registration of the systemd container factory successfully Sep 8 23:57:58.624344 kubelet[2635]: I0908 23:57:58.624321 2635 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:57:58.625849 kubelet[2635]: E0908 23:57:58.625825 2635 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:57:58.626469 kubelet[2635]: I0908 23:57:58.626434 2635 factory.go:223] Registration of the containerd container factory successfully Sep 8 23:57:58.637670 kubelet[2635]: I0908 23:57:58.637615 2635 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 8 23:57:58.639009 kubelet[2635]: I0908 23:57:58.638985 2635 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 8 23:57:58.639009 kubelet[2635]: I0908 23:57:58.639005 2635 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 8 23:57:58.639101 kubelet[2635]: I0908 23:57:58.639028 2635 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 8 23:57:58.639101 kubelet[2635]: I0908 23:57:58.639036 2635 kubelet.go:2436] "Starting kubelet main sync loop" Sep 8 23:57:58.639101 kubelet[2635]: E0908 23:57:58.639078 2635 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:57:58.665007 kubelet[2635]: I0908 23:57:58.664976 2635 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:57:58.665214 kubelet[2635]: I0908 23:57:58.665172 2635 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:57:58.665214 kubelet[2635]: I0908 23:57:58.665200 2635 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:57:58.665392 kubelet[2635]: I0908 23:57:58.665327 2635 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 8 23:57:58.665392 kubelet[2635]: I0908 23:57:58.665337 2635 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 8 23:57:58.665392 kubelet[2635]: I0908 23:57:58.665352 2635 policy_none.go:49] "None policy: Start" Sep 8 23:57:58.665392 kubelet[2635]: I0908 23:57:58.665362 2635 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:57:58.665392 kubelet[2635]: I0908 23:57:58.665372 2635 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:57:58.665623 kubelet[2635]: I0908 23:57:58.665451 2635 state_mem.go:75] "Updated machine memory state" Sep 8 23:57:58.669603 kubelet[2635]: E0908 23:57:58.669565 2635 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 8 23:57:58.669759 kubelet[2635]: I0908 23:57:58.669741 2635 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:57:58.669826 kubelet[2635]: I0908 23:57:58.669759 2635 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:57:58.669965 kubelet[2635]: I0908 23:57:58.669950 2635 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:57:58.673516 kubelet[2635]: E0908 23:57:58.670863 2635 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 8 23:57:58.741468 kubelet[2635]: I0908 23:57:58.740942 2635 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:57:58.741468 kubelet[2635]: I0908 23:57:58.741290 2635 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:58.741680 kubelet[2635]: I0908 23:57:58.741647 2635 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:58.768766 kubelet[2635]: E0908 23:57:58.768131 2635 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 8 23:57:58.769039 kubelet[2635]: E0908 23:57:58.768415 2635 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:58.769039 kubelet[2635]: E0908 23:57:58.768720 2635 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:58.777871 kubelet[2635]: I0908 23:57:58.777820 2635 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:57:58.788482 kubelet[2635]: I0908 23:57:58.788318 2635 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 8 23:57:58.788482 kubelet[2635]: I0908 23:57:58.788430 2635 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:57:58.832801 sudo[2674]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 8 23:57:58.833172 sudo[2674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 8 23:57:58.923054 kubelet[2635]: I0908 23:57:58.923002 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:57:58.923054 kubelet[2635]: I0908 23:57:58.923042 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/138f9e05645100ef0e19ea5fab0d522b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"138f9e05645100ef0e19ea5fab0d522b\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:58.923054 kubelet[2635]: I0908 23:57:58.923058 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/138f9e05645100ef0e19ea5fab0d522b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"138f9e05645100ef0e19ea5fab0d522b\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:58.923288 kubelet[2635]: I0908 23:57:58.923081 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:58.923288 kubelet[2635]: I0908 23:57:58.923096 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:58.923288 kubelet[2635]: I0908 23:57:58.923121 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:58.923288 kubelet[2635]: I0908 23:57:58.923222 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/138f9e05645100ef0e19ea5fab0d522b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"138f9e05645100ef0e19ea5fab0d522b\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:58.923288 kubelet[2635]: I0908 23:57:58.923236 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:58.923452 kubelet[2635]: I0908 23:57:58.923305 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:59.069863 kubelet[2635]: E0908 23:57:59.069524 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:59.069863 kubelet[2635]: E0908 23:57:59.069521 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:59.069863 kubelet[2635]: E0908 23:57:59.069702 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:59.342275 sudo[2674]: pam_unix(sudo:session): session closed for user root Sep 8 23:57:59.607250 kubelet[2635]: I0908 23:57:59.607013 2635 apiserver.go:52] "Watching apiserver" Sep 8 23:57:59.623521 kubelet[2635]: I0908 23:57:59.623492 2635 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:57:59.650781 kubelet[2635]: E0908 23:57:59.650748 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:59.651517 kubelet[2635]: I0908 23:57:59.651490 2635 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:57:59.651627 kubelet[2635]: E0908 23:57:59.651609 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:00.271580 kubelet[2635]: E0908 23:58:00.271465 2635 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 8 23:58:00.273748 kubelet[2635]: E0908 23:58:00.271687 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:00.286553 kubelet[2635]: I0908 23:58:00.286216 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.286199027 podStartE2EDuration="4.286199027s" podCreationTimestamp="2025-09-08 23:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:58:00.276462519 +0000 UTC m=+1.734287493" watchObservedRunningTime="2025-09-08 23:58:00.286199027 +0000 UTC m=+1.744024001" Sep 8 23:58:00.286553 kubelet[2635]: I0908 23:58:00.286451 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.286444812 podStartE2EDuration="4.286444812s" podCreationTimestamp="2025-09-08 23:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:58:00.285923225 +0000 UTC m=+1.743748199" watchObservedRunningTime="2025-09-08 23:58:00.286444812 +0000 UTC m=+1.744269786" Sep 8 23:58:00.307710 kubelet[2635]: I0908 23:58:00.307408 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.307390832 podStartE2EDuration="5.307390832s" podCreationTimestamp="2025-09-08 23:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:58:00.307240407 +0000 UTC m=+1.765065381" watchObservedRunningTime="2025-09-08 23:58:00.307390832 +0000 UTC m=+1.765215806" Sep 8 23:58:00.652307 kubelet[2635]: E0908 23:58:00.652174 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:00.652307 kubelet[2635]: E0908 23:58:00.652222 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:01.100313 sudo[1684]: pam_unix(sudo:session): session closed for user root Sep 8 23:58:01.101668 sshd[1683]: Connection closed by 10.0.0.1 port 45122 Sep 8 23:58:01.102116 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:01.105839 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:45122.service: Deactivated successfully. Sep 8 23:58:01.108110 systemd[1]: session-9.scope: Deactivated successfully. Sep 8 23:58:01.108318 systemd[1]: session-9.scope: Consumed 6.601s CPU time, 249.8M memory peak. Sep 8 23:58:01.109638 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Sep 8 23:58:01.110595 systemd-logind[1467]: Removed session 9. 
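
The recurring dns.go "Nameserver limits exceeded" entries above reflect the host resolv.conf carrying more nameserver entries than a pod's resolver line can hold; the three that survive here are 1.1.1.1, 1.0.0.1 and 8.8.8.8, and the rest are omitted. Below is a minimal Go sketch of that trimming logic, assuming the conventional limit of three entries and the host file at /etc/resolv.conf; both the limit and the path are assumptions for illustration, not read from this log.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // assumed limit; matches the three servers kept in the log above

    func main() {
    	f, err := os.Open("/etc/resolv.conf") // assumed host path
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		// Keep only the first entries and report the applied line, as the warning above does.
    		fmt.Printf("nameserver limits exceeded, applied line is: %s\n",
    			strings.Join(servers[:maxNameservers], " "))
    	}
    }
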
Sep 8 23:58:01.919814 kubelet[2635]: E0908 23:58:01.919775 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:02.516441 kubelet[2635]: I0908 23:58:02.516409 2635 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 8 23:58:02.516801 containerd[1481]: time="2025-09-08T23:58:02.516752278Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 8 23:58:02.517271 kubelet[2635]: I0908 23:58:02.516937 2635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 8 23:58:02.654792 kubelet[2635]: E0908 23:58:02.654755 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:03.723744 kubelet[2635]: E0908 23:58:03.723667 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:03.724183 kubelet[2635]: E0908 23:58:03.723718 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:03.733853 systemd[1]: Created slice kubepods-besteffort-pod273418a4_2f3c_48a3_8658_e3ed0200cad0.slice - libcontainer container kubepods-besteffort-pod273418a4_2f3c_48a3_8658_e3ed0200cad0.slice. Sep 8 23:58:03.748431 kubelet[2635]: I0908 23:58:03.748350 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/273418a4-2f3c-48a3-8658-e3ed0200cad0-lib-modules\") pod \"kube-proxy-2wkb5\" (UID: \"273418a4-2f3c-48a3-8658-e3ed0200cad0\") " pod="kube-system/kube-proxy-2wkb5" Sep 8 23:58:03.748431 kubelet[2635]: I0908 23:58:03.748426 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-bpf-maps\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748641 kubelet[2635]: I0908 23:58:03.748475 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cni-path\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748641 kubelet[2635]: I0908 23:58:03.748504 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-host-proc-sys-net\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748641 kubelet[2635]: I0908 23:58:03.748525 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-host-proc-sys-kernel\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748641 
kubelet[2635]: I0908 23:58:03.748569 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2ee2c01-0f71-4999-820c-16f2c0d07c14-hubble-tls\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748641 kubelet[2635]: I0908 23:58:03.748603 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/273418a4-2f3c-48a3-8658-e3ed0200cad0-kube-proxy\") pod \"kube-proxy-2wkb5\" (UID: \"273418a4-2f3c-48a3-8658-e3ed0200cad0\") " pod="kube-system/kube-proxy-2wkb5" Sep 8 23:58:03.748797 kubelet[2635]: I0908 23:58:03.748641 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cft42\" (UniqueName: \"kubernetes.io/projected/273418a4-2f3c-48a3-8658-e3ed0200cad0-kube-api-access-cft42\") pod \"kube-proxy-2wkb5\" (UID: \"273418a4-2f3c-48a3-8658-e3ed0200cad0\") " pod="kube-system/kube-proxy-2wkb5" Sep 8 23:58:03.748797 kubelet[2635]: I0908 23:58:03.748735 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-run\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748797 kubelet[2635]: I0908 23:58:03.748767 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-cgroup\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748882 kubelet[2635]: I0908 23:58:03.748797 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-etc-cni-netd\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748882 kubelet[2635]: I0908 23:58:03.748827 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2ee2c01-0f71-4999-820c-16f2c0d07c14-clustermesh-secrets\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748882 kubelet[2635]: I0908 23:58:03.748864 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/273418a4-2f3c-48a3-8658-e3ed0200cad0-xtables-lock\") pod \"kube-proxy-2wkb5\" (UID: \"273418a4-2f3c-48a3-8658-e3ed0200cad0\") " pod="kube-system/kube-proxy-2wkb5" Sep 8 23:58:03.748949 kubelet[2635]: I0908 23:58:03.748886 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-hostproc\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748949 kubelet[2635]: I0908 23:58:03.748908 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-lib-modules\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.748949 kubelet[2635]: I0908 23:58:03.748930 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-xtables-lock\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.749019 kubelet[2635]: I0908 23:58:03.748960 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-config-path\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.749019 kubelet[2635]: I0908 23:58:03.748984 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhrlx\" (UniqueName: \"kubernetes.io/projected/d2ee2c01-0f71-4999-820c-16f2c0d07c14-kube-api-access-zhrlx\") pod \"cilium-gwn48\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " pod="kube-system/cilium-gwn48" Sep 8 23:58:03.754459 systemd[1]: Created slice kubepods-burstable-podd2ee2c01_0f71_4999_820c_16f2c0d07c14.slice - libcontainer container kubepods-burstable-podd2ee2c01_0f71_4999_820c_16f2c0d07c14.slice. Sep 8 23:58:03.765041 systemd[1]: Created slice kubepods-besteffort-pod536e3284_6a59_4693_959a_966e4803b7db.slice - libcontainer container kubepods-besteffort-pod536e3284_6a59_4693_959a_966e4803b7db.slice. Sep 8 23:58:03.851208 kubelet[2635]: I0908 23:58:03.850200 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6zb\" (UniqueName: \"kubernetes.io/projected/536e3284-6a59-4693-959a-966e4803b7db-kube-api-access-gr6zb\") pod \"cilium-operator-6c4d7847fc-cg6v4\" (UID: \"536e3284-6a59-4693-959a-966e4803b7db\") " pod="kube-system/cilium-operator-6c4d7847fc-cg6v4" Sep 8 23:58:03.851208 kubelet[2635]: I0908 23:58:03.850245 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/536e3284-6a59-4693-959a-966e4803b7db-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cg6v4\" (UID: \"536e3284-6a59-4693-959a-966e4803b7db\") " pod="kube-system/cilium-operator-6c4d7847fc-cg6v4" Sep 8 23:58:04.046392 kubelet[2635]: E0908 23:58:04.046238 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:04.047006 containerd[1481]: time="2025-09-08T23:58:04.046975353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2wkb5,Uid:273418a4-2f3c-48a3-8658-e3ed0200cad0,Namespace:kube-system,Attempt:0,}" Sep 8 23:58:04.059116 kubelet[2635]: E0908 23:58:04.059042 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:04.061637 containerd[1481]: time="2025-09-08T23:58:04.059521349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gwn48,Uid:d2ee2c01-0f71-4999-820c-16f2c0d07c14,Namespace:kube-system,Attempt:0,}" Sep 8 23:58:04.067894 kubelet[2635]: E0908 
23:58:04.067839 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:04.068927 containerd[1481]: time="2025-09-08T23:58:04.068854148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cg6v4,Uid:536e3284-6a59-4693-959a-966e4803b7db,Namespace:kube-system,Attempt:0,}" Sep 8 23:58:04.081083 containerd[1481]: time="2025-09-08T23:58:04.080956988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:58:04.081083 containerd[1481]: time="2025-09-08T23:58:04.081047529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:58:04.081083 containerd[1481]: time="2025-09-08T23:58:04.081059492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:58:04.081293 containerd[1481]: time="2025-09-08T23:58:04.081157567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:58:04.100355 containerd[1481]: time="2025-09-08T23:58:04.100241837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:58:04.100689 containerd[1481]: time="2025-09-08T23:58:04.100378565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:58:04.100689 containerd[1481]: time="2025-09-08T23:58:04.100458936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:58:04.100749 containerd[1481]: time="2025-09-08T23:58:04.100645940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:58:04.112752 systemd[1]: Started cri-containerd-17a6a837404e0eb82649e6fb857bdf4dfb79580d2890ff3ae4cdcdd31119cf96.scope - libcontainer container 17a6a837404e0eb82649e6fb857bdf4dfb79580d2890ff3ae4cdcdd31119cf96. Sep 8 23:58:04.113239 containerd[1481]: time="2025-09-08T23:58:04.113066309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:58:04.113239 containerd[1481]: time="2025-09-08T23:58:04.113196985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:58:04.113239 containerd[1481]: time="2025-09-08T23:58:04.113225630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:58:04.113784 containerd[1481]: time="2025-09-08T23:58:04.113393196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:58:04.127695 systemd[1]: Started cri-containerd-f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c.scope - libcontainer container f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c. 
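
The kubepods-…-pod….slice units created a few lines above follow a naming scheme that can be read straight off the log: the pod's QoS class plus its UID with dashes escaped to underscores, which is how the kube-proxy-2wkb5 UID 273418a4-2f3c-48a3-8658-e3ed0200cad0 becomes kubepods-besteffort-pod273418a4_2f3c_48a3_8658_e3ed0200cad0.slice. A small Go sketch of that mapping (illustrative only, not the kubelet's own code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName reproduces the naming pattern visible in the systemd entries above:
    // a kubepods-<qos>-pod<uid>.slice unit, with the dashes of the pod UID escaped to
    // underscores so the name is a valid systemd unit component.
    func podSliceName(qosClass, podUID string) string {
    	escaped := strings.ReplaceAll(podUID, "-", "_")
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
    	// UID taken from the kube-proxy-2wkb5 volume entries in the log.
    	fmt.Println(podSliceName("besteffort", "273418a4-2f3c-48a3-8658-e3ed0200cad0"))
    	// -> kubepods-besteffort-pod273418a4_2f3c_48a3_8658_e3ed0200cad0.slice
    }
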
Sep 8 23:58:04.135768 systemd[1]: Started cri-containerd-c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1.scope - libcontainer container c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1. Sep 8 23:58:04.153736 containerd[1481]: time="2025-09-08T23:58:04.153663955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2wkb5,Uid:273418a4-2f3c-48a3-8658-e3ed0200cad0,Namespace:kube-system,Attempt:0,} returns sandbox id \"17a6a837404e0eb82649e6fb857bdf4dfb79580d2890ff3ae4cdcdd31119cf96\"" Sep 8 23:58:04.154921 kubelet[2635]: E0908 23:58:04.154891 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:04.164200 containerd[1481]: time="2025-09-08T23:58:04.164111058Z" level=info msg="CreateContainer within sandbox \"17a6a837404e0eb82649e6fb857bdf4dfb79580d2890ff3ae4cdcdd31119cf96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 8 23:58:04.168775 containerd[1481]: time="2025-09-08T23:58:04.168702316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gwn48,Uid:d2ee2c01-0f71-4999-820c-16f2c0d07c14,Namespace:kube-system,Attempt:0,} returns sandbox id \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\"" Sep 8 23:58:04.169865 kubelet[2635]: E0908 23:58:04.169840 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:04.170890 containerd[1481]: time="2025-09-08T23:58:04.170816838Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 8 23:58:04.190612 containerd[1481]: time="2025-09-08T23:58:04.190529975Z" level=info msg="CreateContainer within sandbox \"17a6a837404e0eb82649e6fb857bdf4dfb79580d2890ff3ae4cdcdd31119cf96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8db569bd5f6772877467035bd344d27400837d377ce427e852022f2911dc220\"" Sep 8 23:58:04.193163 containerd[1481]: time="2025-09-08T23:58:04.191194199Z" level=info msg="StartContainer for \"e8db569bd5f6772877467035bd344d27400837d377ce427e852022f2911dc220\"" Sep 8 23:58:04.214272 containerd[1481]: time="2025-09-08T23:58:04.214220642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cg6v4,Uid:536e3284-6a59-4693-959a-966e4803b7db,Namespace:kube-system,Attempt:0,} returns sandbox id \"c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1\"" Sep 8 23:58:04.215043 kubelet[2635]: E0908 23:58:04.215016 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:04.232734 systemd[1]: Started cri-containerd-e8db569bd5f6772877467035bd344d27400837d377ce427e852022f2911dc220.scope - libcontainer container e8db569bd5f6772877467035bd344d27400837d377ce427e852022f2911dc220. 
Sep 8 23:58:04.268682 containerd[1481]: time="2025-09-08T23:58:04.268634392Z" level=info msg="StartContainer for \"e8db569bd5f6772877467035bd344d27400837d377ce427e852022f2911dc220\" returns successfully" Sep 8 23:58:04.659789 kubelet[2635]: E0908 23:58:04.659633 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:04.795029 kubelet[2635]: I0908 23:58:04.794955 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2wkb5" podStartSLOduration=1.794935476 podStartE2EDuration="1.794935476s" podCreationTimestamp="2025-09-08 23:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:58:04.791925403 +0000 UTC m=+6.249750387" watchObservedRunningTime="2025-09-08 23:58:04.794935476 +0000 UTC m=+6.252760450" Sep 8 23:58:06.999322 kubelet[2635]: E0908 23:58:06.999276 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:07.663941 kubelet[2635]: E0908 23:58:07.663902 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:08.665946 kubelet[2635]: E0908 23:58:08.665910 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:11.177854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4014835686.mount: Deactivated successfully. 
Sep 8 23:58:16.884653 containerd[1481]: time="2025-09-08T23:58:16.884463633Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:58:16.885633 containerd[1481]: time="2025-09-08T23:58:16.885160123Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 8 23:58:16.888918 containerd[1481]: time="2025-09-08T23:58:16.888867889Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:58:16.891479 containerd[1481]: time="2025-09-08T23:58:16.891439177Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.720584248s" Sep 8 23:58:16.891613 containerd[1481]: time="2025-09-08T23:58:16.891485774Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 8 23:58:16.892728 containerd[1481]: time="2025-09-08T23:58:16.892684339Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 8 23:58:16.897930 containerd[1481]: time="2025-09-08T23:58:16.897871438Z" level=info msg="CreateContainer within sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:58:16.911224 containerd[1481]: time="2025-09-08T23:58:16.911170057Z" level=info msg="CreateContainer within sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\"" Sep 8 23:58:16.911859 containerd[1481]: time="2025-09-08T23:58:16.911814410Z" level=info msg="StartContainer for \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\"" Sep 8 23:58:16.950787 systemd[1]: Started cri-containerd-7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2.scope - libcontainer container 7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2. Sep 8 23:58:16.981100 containerd[1481]: time="2025-09-08T23:58:16.981050758Z" level=info msg="StartContainer for \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\" returns successfully" Sep 8 23:58:16.996199 systemd[1]: cri-containerd-7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2.scope: Deactivated successfully. Sep 8 23:58:17.037104 kubelet[2635]: E0908 23:58:17.036692 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:17.909078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2-rootfs.mount: Deactivated successfully. 
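
The pull result above reports several identifiers for the same Cilium image: the repo digest it was pulled by, the resolved image id (sha256:3e35b…), and an empty repo tag. A rough Go sketch of pulling a repo:tag@sha256 reference apart into those pieces; real clients use a proper reference parser, so treat this as illustrative only.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitRef naively splits a "repo:tag@sha256:..." reference, like the Cilium
    // image pulled above, into repository, tag, and digest components.
    func splitRef(ref string) (repo, tag, digest string) {
    	if i := strings.Index(ref, "@"); i >= 0 {
    		digest = ref[i+1:]
    		ref = ref[:i]
    	}
    	// Only treat the last colon as a tag separator if it comes after the last slash,
    	// so registry ports such as host:5000/image are not mistaken for tags.
    	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
    		tag = ref[i+1:]
    		ref = ref[:i]
    	}
    	return ref, tag, digest
    }

    func main() {
    	repo, tag, digest := splitRef(
    		"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
    	fmt.Println(repo, tag, digest)
    }
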
Sep 8 23:58:18.038480 kubelet[2635]: E0908 23:58:18.038431 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:18.553906 containerd[1481]: time="2025-09-08T23:58:18.553802093Z" level=info msg="shim disconnected" id=7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2 namespace=k8s.io Sep 8 23:58:18.553906 containerd[1481]: time="2025-09-08T23:58:18.553887684Z" level=warning msg="cleaning up after shim disconnected" id=7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2 namespace=k8s.io Sep 8 23:58:18.553906 containerd[1481]: time="2025-09-08T23:58:18.553901810Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:58:19.057838 kubelet[2635]: E0908 23:58:19.056843 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:19.083416 containerd[1481]: time="2025-09-08T23:58:19.082118405Z" level=info msg="CreateContainer within sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:58:19.161713 containerd[1481]: time="2025-09-08T23:58:19.161629103Z" level=info msg="CreateContainer within sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\"" Sep 8 23:58:19.162437 containerd[1481]: time="2025-09-08T23:58:19.162237688Z" level=info msg="StartContainer for \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\"" Sep 8 23:58:19.204933 systemd[1]: Started cri-containerd-aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7.scope - libcontainer container aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7. Sep 8 23:58:19.235356 containerd[1481]: time="2025-09-08T23:58:19.235292227Z" level=info msg="StartContainer for \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\" returns successfully" Sep 8 23:58:19.251266 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:58:19.251887 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:58:19.252638 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:58:19.258938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:58:19.259241 systemd[1]: cri-containerd-aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7.scope: Deactivated successfully. Sep 8 23:58:19.286608 containerd[1481]: time="2025-09-08T23:58:19.286468737Z" level=info msg="shim disconnected" id=aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7 namespace=k8s.io Sep 8 23:58:19.286608 containerd[1481]: time="2025-09-08T23:58:19.286520394Z" level=warning msg="cleaning up after shim disconnected" id=aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7 namespace=k8s.io Sep 8 23:58:19.286814 containerd[1481]: time="2025-09-08T23:58:19.286529190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:58:19.288192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
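
The apply-sysctl-overwrites container created above is, as its name suggests, a step that rewrites kernel parameters, which is plausibly why systemd-sysctl is stopped and re-run in the same breath here. A minimal Go sketch of writing a sysctl through the /proc/sys interface; the specific key and value below are placeholders for illustration, not values taken from this log.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // writeSysctl sets a single kernel parameter via /proc/sys, the same interface an
    // init step like apply-sysctl-overwrites would use (requires root).
    func writeSysctl(key, value string) error {
    	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
    	return os.WriteFile(path, []byte(value), 0o644)
    }

    func main() {
    	// Placeholder key/value, purely illustrative.
    	if err := writeSysctl("net.ipv4.ip_forward", "1"); err != nil {
    		fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
    	}
    }
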
Sep 8 23:58:20.062318 kubelet[2635]: E0908 23:58:20.062263 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:20.074120 containerd[1481]: time="2025-09-08T23:58:20.074022268Z" level=info msg="CreateContainer within sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:58:20.097219 containerd[1481]: time="2025-09-08T23:58:20.097172181Z" level=info msg="CreateContainer within sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\"" Sep 8 23:58:20.098261 containerd[1481]: time="2025-09-08T23:58:20.098237193Z" level=info msg="StartContainer for \"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\"" Sep 8 23:58:20.140892 systemd[1]: Started cri-containerd-874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f.scope - libcontainer container 874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f. Sep 8 23:58:20.151426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7-rootfs.mount: Deactivated successfully. Sep 8 23:58:20.211435 systemd[1]: cri-containerd-874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f.scope: Deactivated successfully. Sep 8 23:58:20.216126 containerd[1481]: time="2025-09-08T23:58:20.216007851Z" level=info msg="StartContainer for \"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\" returns successfully" Sep 8 23:58:20.285933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f-rootfs.mount: Deactivated successfully. 
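
The mount-bpf-fs container started above conventionally ensures a BPF filesystem is mounted for the agent to pin its maps and programs; the usual mount point is /sys/fs/bpf, though that path and the exact behaviour are assumptions here rather than anything read from the log. A minimal Go sketch of the operation (Linux only, requires root):

    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    )

    func main() {
    	const target = "/sys/fs/bpf" // assumed mount point
    	if err := os.MkdirAll(target, 0o755); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	// Mount a bpffs instance; EBUSY simply means something is already mounted there.
    	if err := syscall.Mount("bpffs", target, "bpf", 0, ""); err != nil && err != syscall.EBUSY {
    		fmt.Fprintln(os.Stderr, "mount bpffs:", err)
    	}
    }
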
Sep 8 23:58:20.785628 containerd[1481]: time="2025-09-08T23:58:20.785560487Z" level=info msg="shim disconnected" id=874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f namespace=k8s.io Sep 8 23:58:20.788908 containerd[1481]: time="2025-09-08T23:58:20.785917659Z" level=warning msg="cleaning up after shim disconnected" id=874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f namespace=k8s.io Sep 8 23:58:20.788908 containerd[1481]: time="2025-09-08T23:58:20.785937877Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:58:20.911356 containerd[1481]: time="2025-09-08T23:58:20.908710397Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:58:20.911356 containerd[1481]: time="2025-09-08T23:58:20.909715046Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 8 23:58:20.913715 containerd[1481]: time="2025-09-08T23:58:20.913653441Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:58:20.917734 containerd[1481]: time="2025-09-08T23:58:20.917465908Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.024743997s" Sep 8 23:58:20.917734 containerd[1481]: time="2025-09-08T23:58:20.917518748Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 8 23:58:20.939505 containerd[1481]: time="2025-09-08T23:58:20.939342057Z" level=info msg="CreateContainer within sandbox \"c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 8 23:58:20.966682 containerd[1481]: time="2025-09-08T23:58:20.966596317Z" level=info msg="CreateContainer within sandbox \"c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\"" Sep 8 23:58:20.968559 containerd[1481]: time="2025-09-08T23:58:20.968467394Z" level=info msg="StartContainer for \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\"" Sep 8 23:58:21.042975 systemd[1]: Started cri-containerd-ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583.scope - libcontainer container ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583. 
Sep 8 23:58:21.097193 kubelet[2635]: E0908 23:58:21.096748 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:21.181312 containerd[1481]: time="2025-09-08T23:58:21.178828659Z" level=info msg="CreateContainer within sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:58:21.209849 containerd[1481]: time="2025-09-08T23:58:21.207755116Z" level=info msg="StartContainer for \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\" returns successfully" Sep 8 23:58:21.403713 containerd[1481]: time="2025-09-08T23:58:21.400110675Z" level=info msg="CreateContainer within sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\"" Sep 8 23:58:21.403713 containerd[1481]: time="2025-09-08T23:58:21.402171770Z" level=info msg="StartContainer for \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\"" Sep 8 23:58:21.458905 systemd[1]: Started cri-containerd-77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c.scope - libcontainer container 77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c. Sep 8 23:58:21.533731 systemd[1]: cri-containerd-77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c.scope: Deactivated successfully. Sep 8 23:58:21.537079 containerd[1481]: time="2025-09-08T23:58:21.537009647Z" level=info msg="StartContainer for \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\" returns successfully" Sep 8 23:58:21.762187 containerd[1481]: time="2025-09-08T23:58:21.762086926Z" level=info msg="shim disconnected" id=77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c namespace=k8s.io Sep 8 23:58:21.762187 containerd[1481]: time="2025-09-08T23:58:21.762176695Z" level=warning msg="cleaning up after shim disconnected" id=77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c namespace=k8s.io Sep 8 23:58:21.762187 containerd[1481]: time="2025-09-08T23:58:21.762189830Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:58:22.108334 kubelet[2635]: E0908 23:58:22.104035 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:22.117159 kubelet[2635]: E0908 23:58:22.117000 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:22.151177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c-rootfs.mount: Deactivated successfully. 
Sep 8 23:58:22.332721 containerd[1481]: time="2025-09-08T23:58:22.332642769Z" level=info msg="CreateContainer within sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:58:22.531779 containerd[1481]: time="2025-09-08T23:58:22.531632809Z" level=info msg="CreateContainer within sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\"" Sep 8 23:58:22.535151 containerd[1481]: time="2025-09-08T23:58:22.533564299Z" level=info msg="StartContainer for \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\"" Sep 8 23:58:22.589872 kubelet[2635]: I0908 23:58:22.585602 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cg6v4" podStartSLOduration=2.874763548 podStartE2EDuration="19.585580526s" podCreationTimestamp="2025-09-08 23:58:03 +0000 UTC" firstStartedPulling="2025-09-08 23:58:04.215854707 +0000 UTC m=+5.673679681" lastFinishedPulling="2025-09-08 23:58:20.926671675 +0000 UTC m=+22.384496659" observedRunningTime="2025-09-08 23:58:22.442671212 +0000 UTC m=+23.900496186" watchObservedRunningTime="2025-09-08 23:58:22.585580526 +0000 UTC m=+24.043405520" Sep 8 23:58:22.683884 systemd[1]: Started cri-containerd-9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4.scope - libcontainer container 9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4. Sep 8 23:58:22.798097 containerd[1481]: time="2025-09-08T23:58:22.794043671Z" level=info msg="StartContainer for \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\" returns successfully" Sep 8 23:58:23.123461 kubelet[2635]: E0908 23:58:23.122059 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:23.123461 kubelet[2635]: E0908 23:58:23.123129 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:23.141068 kubelet[2635]: I0908 23:58:23.141031 2635 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 8 23:58:23.192288 kubelet[2635]: I0908 23:58:23.190926 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gwn48" podStartSLOduration=7.468956257 podStartE2EDuration="20.190896527s" podCreationTimestamp="2025-09-08 23:58:03 +0000 UTC" firstStartedPulling="2025-09-08 23:58:04.170567347 +0000 UTC m=+5.628392321" lastFinishedPulling="2025-09-08 23:58:16.892507617 +0000 UTC m=+18.350332591" observedRunningTime="2025-09-08 23:58:23.169007941 +0000 UTC m=+24.626832915" watchObservedRunningTime="2025-09-08 23:58:23.190896527 +0000 UTC m=+24.648721511" Sep 8 23:58:23.288350 systemd[1]: Created slice kubepods-burstable-pod53c9bfcf_0702_40e2_93f8_0fc656e5ba6f.slice - libcontainer container kubepods-burstable-pod53c9bfcf_0702_40e2_93f8_0fc656e5ba6f.slice. Sep 8 23:58:23.306062 systemd[1]: Created slice kubepods-burstable-pode99fd630_4804_4f62_8072_c79231945bfa.slice - libcontainer container kubepods-burstable-pode99fd630_4804_4f62_8072_c79231945bfa.slice. 
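
The pod_startup_latency_tracker line above for cilium-operator-6c4d7847fc-cg6v4 is worth unpacking: podStartE2EDuration here equals watchObservedRunningTime minus podCreationTimestamp (≈19.59s), and podStartSLOduration is that figure with the image-pull window (firstStartedPulling to lastFinishedPulling, ≈16.71s) removed, which lines up with the reported ≈2.87s. A short Go sketch redoing the arithmetic with the timestamps copied from the log; only the subtraction is added here.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout of Go's default time.Time formatting, which is how these fields are printed.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2025-09-08 23:58:03 +0000 UTC")
    	firstPull := parse("2025-09-08 23:58:04.215854707 +0000 UTC")
    	lastPull := parse("2025-09-08 23:58:20.926671675 +0000 UTC")
    	observed := parse("2025-09-08 23:58:22.585580526 +0000 UTC")

    	e2e := observed.Sub(created)
    	slo := e2e - lastPull.Sub(firstPull)
    	// ~19.59s end-to-end, ~2.87s with the pull window excluded, in line with the log.
    	fmt.Println("E2E:", e2e, "SLO (pull time excluded):", slo)
    }
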
Sep 8 23:58:23.312896 kubelet[2635]: I0908 23:58:23.312172 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqfc\" (UniqueName: \"kubernetes.io/projected/53c9bfcf-0702-40e2-93f8-0fc656e5ba6f-kube-api-access-cwqfc\") pod \"coredns-674b8bbfcf-77mqr\" (UID: \"53c9bfcf-0702-40e2-93f8-0fc656e5ba6f\") " pod="kube-system/coredns-674b8bbfcf-77mqr" Sep 8 23:58:23.312896 kubelet[2635]: I0908 23:58:23.312279 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53c9bfcf-0702-40e2-93f8-0fc656e5ba6f-config-volume\") pod \"coredns-674b8bbfcf-77mqr\" (UID: \"53c9bfcf-0702-40e2-93f8-0fc656e5ba6f\") " pod="kube-system/coredns-674b8bbfcf-77mqr" Sep 8 23:58:23.312896 kubelet[2635]: I0908 23:58:23.312342 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4vvz\" (UniqueName: \"kubernetes.io/projected/e99fd630-4804-4f62-8072-c79231945bfa-kube-api-access-k4vvz\") pod \"coredns-674b8bbfcf-56p59\" (UID: \"e99fd630-4804-4f62-8072-c79231945bfa\") " pod="kube-system/coredns-674b8bbfcf-56p59" Sep 8 23:58:23.312896 kubelet[2635]: I0908 23:58:23.312390 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e99fd630-4804-4f62-8072-c79231945bfa-config-volume\") pod \"coredns-674b8bbfcf-56p59\" (UID: \"e99fd630-4804-4f62-8072-c79231945bfa\") " pod="kube-system/coredns-674b8bbfcf-56p59" Sep 8 23:58:23.596779 kubelet[2635]: E0908 23:58:23.596735 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:23.598953 containerd[1481]: time="2025-09-08T23:58:23.598880000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-77mqr,Uid:53c9bfcf-0702-40e2-93f8-0fc656e5ba6f,Namespace:kube-system,Attempt:0,}" Sep 8 23:58:23.614318 kubelet[2635]: E0908 23:58:23.613440 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:23.620405 containerd[1481]: time="2025-09-08T23:58:23.620326264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-56p59,Uid:e99fd630-4804-4f62-8072-c79231945bfa,Namespace:kube-system,Attempt:0,}" Sep 8 23:58:24.131562 kubelet[2635]: E0908 23:58:24.131281 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:25.788013 systemd-networkd[1396]: cilium_host: Link UP Sep 8 23:58:25.792176 systemd-networkd[1396]: cilium_net: Link UP Sep 8 23:58:25.792566 systemd-networkd[1396]: cilium_net: Gained carrier Sep 8 23:58:25.792866 systemd-networkd[1396]: cilium_host: Gained carrier Sep 8 23:58:26.057659 systemd-networkd[1396]: cilium_vxlan: Link UP Sep 8 23:58:26.057672 systemd-networkd[1396]: cilium_vxlan: Gained carrier Sep 8 23:58:26.063094 systemd-networkd[1396]: cilium_host: Gained IPv6LL Sep 8 23:58:26.530992 kernel: NET: Registered PF_ALG protocol family Sep 8 23:58:26.758837 systemd-networkd[1396]: cilium_net: Gained IPv6LL Sep 8 23:58:27.912232 systemd-networkd[1396]: cilium_vxlan: Gained IPv6LL Sep 8 23:58:28.027693 
systemd-networkd[1396]: lxc_health: Link UP Sep 8 23:58:28.028239 systemd-networkd[1396]: lxc_health: Gained carrier Sep 8 23:58:28.062458 kubelet[2635]: E0908 23:58:28.062142 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:28.137074 kubelet[2635]: E0908 23:58:28.137021 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:28.245784 kernel: eth0: renamed from tmp65dee Sep 8 23:58:28.251924 systemd-networkd[1396]: lxc76426e5c08e6: Link UP Sep 8 23:58:28.256397 systemd-networkd[1396]: lxc76426e5c08e6: Gained carrier Sep 8 23:58:28.276576 kernel: eth0: renamed from tmp476a0 Sep 8 23:58:28.283429 systemd-networkd[1396]: lxc3d2d3a8a9aaa: Link UP Sep 8 23:58:28.285254 systemd-networkd[1396]: lxc3d2d3a8a9aaa: Gained carrier Sep 8 23:58:29.139332 kubelet[2635]: E0908 23:58:29.139282 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:29.705630 systemd-networkd[1396]: lxc_health: Gained IPv6LL Sep 8 23:58:29.894751 systemd-networkd[1396]: lxc76426e5c08e6: Gained IPv6LL Sep 8 23:58:29.959888 systemd-networkd[1396]: lxc3d2d3a8a9aaa: Gained IPv6LL Sep 8 23:58:30.380010 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:55412.service - OpenSSH per-connection server daemon (10.0.0.1:55412). Sep 8 23:58:30.425906 sshd[3854]: Accepted publickey for core from 10.0.0.1 port 55412 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:58:30.428032 sshd-session[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:30.432879 systemd-logind[1467]: New session 10 of user core. Sep 8 23:58:30.442843 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 8 23:58:30.674315 sshd[3856]: Connection closed by 10.0.0.1 port 55412 Sep 8 23:58:30.676648 sshd-session[3854]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:30.682233 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:55412.service: Deactivated successfully. Sep 8 23:58:30.685983 systemd[1]: session-10.scope: Deactivated successfully. Sep 8 23:58:30.687247 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. Sep 8 23:58:30.688273 systemd-logind[1467]: Removed session 10. Sep 8 23:58:32.342131 containerd[1481]: time="2025-09-08T23:58:32.341986745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:58:32.342131 containerd[1481]: time="2025-09-08T23:58:32.342068308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:58:32.342131 containerd[1481]: time="2025-09-08T23:58:32.342080891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:58:32.342658 containerd[1481]: time="2025-09-08T23:58:32.342180268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:58:32.366794 systemd[1]: Started cri-containerd-65dee1ff549f239ee6885882c50f13c903bcf4705d0d6b930639b44a4d3346df.scope - libcontainer container 65dee1ff549f239ee6885882c50f13c903bcf4705d0d6b930639b44a4d3346df. Sep 8 23:58:32.380972 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:58:32.408350 containerd[1481]: time="2025-09-08T23:58:32.408303118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-77mqr,Uid:53c9bfcf-0702-40e2-93f8-0fc656e5ba6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"65dee1ff549f239ee6885882c50f13c903bcf4705d0d6b930639b44a4d3346df\"" Sep 8 23:58:32.409263 kubelet[2635]: E0908 23:58:32.409230 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:32.439431 containerd[1481]: time="2025-09-08T23:58:32.437910333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:58:32.439431 containerd[1481]: time="2025-09-08T23:58:32.439176059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:58:32.439431 containerd[1481]: time="2025-09-08T23:58:32.439191558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:58:32.439431 containerd[1481]: time="2025-09-08T23:58:32.439288049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:58:32.456396 systemd[1]: run-containerd-runc-k8s.io-476a068eba7bccc74759aaa70b6bfc0703f69838552b133c1ddfa4e58e879909-runc.piIZ5e.mount: Deactivated successfully. Sep 8 23:58:32.471738 systemd[1]: Started cri-containerd-476a068eba7bccc74759aaa70b6bfc0703f69838552b133c1ddfa4e58e879909.scope - libcontainer container 476a068eba7bccc74759aaa70b6bfc0703f69838552b133c1ddfa4e58e879909. 
Sep 8 23:58:32.484857 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:58:32.511010 containerd[1481]: time="2025-09-08T23:58:32.510950932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-56p59,Uid:e99fd630-4804-4f62-8072-c79231945bfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"476a068eba7bccc74759aaa70b6bfc0703f69838552b133c1ddfa4e58e879909\"" Sep 8 23:58:32.511667 kubelet[2635]: E0908 23:58:32.511642 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:32.745484 containerd[1481]: time="2025-09-08T23:58:32.745434267Z" level=info msg="CreateContainer within sandbox \"65dee1ff549f239ee6885882c50f13c903bcf4705d0d6b930639b44a4d3346df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:58:32.915882 containerd[1481]: time="2025-09-08T23:58:32.915824060Z" level=info msg="CreateContainer within sandbox \"476a068eba7bccc74759aaa70b6bfc0703f69838552b133c1ddfa4e58e879909\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:58:33.328825 containerd[1481]: time="2025-09-08T23:58:33.328737145Z" level=info msg="CreateContainer within sandbox \"65dee1ff549f239ee6885882c50f13c903bcf4705d0d6b930639b44a4d3346df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"876c2e619256814e22a0e683e870134baa98dd33a528fac314a1fb86397cea2b\"" Sep 8 23:58:33.329611 containerd[1481]: time="2025-09-08T23:58:33.329502482Z" level=info msg="StartContainer for \"876c2e619256814e22a0e683e870134baa98dd33a528fac314a1fb86397cea2b\"" Sep 8 23:58:33.352296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348239658.mount: Deactivated successfully. Sep 8 23:58:33.365689 systemd[1]: Started cri-containerd-876c2e619256814e22a0e683e870134baa98dd33a528fac314a1fb86397cea2b.scope - libcontainer container 876c2e619256814e22a0e683e870134baa98dd33a528fac314a1fb86397cea2b. Sep 8 23:58:33.496805 containerd[1481]: time="2025-09-08T23:58:33.496735909Z" level=info msg="StartContainer for \"876c2e619256814e22a0e683e870134baa98dd33a528fac314a1fb86397cea2b\" returns successfully" Sep 8 23:58:33.668429 containerd[1481]: time="2025-09-08T23:58:33.668387641Z" level=info msg="CreateContainer within sandbox \"476a068eba7bccc74759aaa70b6bfc0703f69838552b133c1ddfa4e58e879909\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b31726b741466c510bd11366fe0f9daf1f11422a32bd3b0bd18a7d410bc8f61\"" Sep 8 23:58:33.669283 containerd[1481]: time="2025-09-08T23:58:33.669082245Z" level=info msg="StartContainer for \"6b31726b741466c510bd11366fe0f9daf1f11422a32bd3b0bd18a7d410bc8f61\"" Sep 8 23:58:33.705730 systemd[1]: Started cri-containerd-6b31726b741466c510bd11366fe0f9daf1f11422a32bd3b0bd18a7d410bc8f61.scope - libcontainer container 6b31726b741466c510bd11366fe0f9daf1f11422a32bd3b0bd18a7d410bc8f61. 
Sep 8 23:58:33.796075 containerd[1481]: time="2025-09-08T23:58:33.796020053Z" level=info msg="StartContainer for \"6b31726b741466c510bd11366fe0f9daf1f11422a32bd3b0bd18a7d410bc8f61\" returns successfully" Sep 8 23:58:34.170589 kubelet[2635]: E0908 23:58:34.170496 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:34.173032 kubelet[2635]: E0908 23:58:34.172994 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:34.305290 kubelet[2635]: I0908 23:58:34.304960 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-56p59" podStartSLOduration=31.304931908 podStartE2EDuration="31.304931908s" podCreationTimestamp="2025-09-08 23:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:58:34.304069859 +0000 UTC m=+35.761894833" watchObservedRunningTime="2025-09-08 23:58:34.304931908 +0000 UTC m=+35.762756882" Sep 8 23:58:34.371282 kubelet[2635]: I0908 23:58:34.371208 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-77mqr" podStartSLOduration=31.371182732 podStartE2EDuration="31.371182732s" podCreationTimestamp="2025-09-08 23:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:58:34.369557571 +0000 UTC m=+35.827382575" watchObservedRunningTime="2025-09-08 23:58:34.371182732 +0000 UTC m=+35.829007726" Sep 8 23:58:35.174935 kubelet[2635]: E0908 23:58:35.174879 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:35.175404 kubelet[2635]: E0908 23:58:35.174973 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:35.688662 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:55418.service - OpenSSH per-connection server daemon (10.0.0.1:55418). Sep 8 23:58:35.731657 sshd[4053]: Accepted publickey for core from 10.0.0.1 port 55418 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:58:35.733352 sshd-session[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:35.737982 systemd-logind[1467]: New session 11 of user core. Sep 8 23:58:35.747791 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 8 23:58:35.892847 sshd[4055]: Connection closed by 10.0.0.1 port 55418 Sep 8 23:58:35.893219 sshd-session[4053]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:35.897203 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:55418.service: Deactivated successfully. Sep 8 23:58:35.899303 systemd[1]: session-11.scope: Deactivated successfully. Sep 8 23:58:35.900111 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit. Sep 8 23:58:35.901157 systemd-logind[1467]: Removed session 11. 
Sep 8 23:58:36.176724 kubelet[2635]: E0908 23:58:36.176675 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:36.177262 kubelet[2635]: E0908 23:58:36.176795 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:58:40.906597 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:54084.service - OpenSSH per-connection server daemon (10.0.0.1:54084). Sep 8 23:58:40.950340 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 54084 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:58:40.952010 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:40.956587 systemd-logind[1467]: New session 12 of user core. Sep 8 23:58:40.968734 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 8 23:58:41.193792 sshd[4072]: Connection closed by 10.0.0.1 port 54084 Sep 8 23:58:41.194204 sshd-session[4070]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:41.197458 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:54084.service: Deactivated successfully. Sep 8 23:58:41.200320 systemd[1]: session-12.scope: Deactivated successfully. Sep 8 23:58:41.202263 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit. Sep 8 23:58:41.203391 systemd-logind[1467]: Removed session 12. Sep 8 23:58:46.215900 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:54094.service - OpenSSH per-connection server daemon (10.0.0.1:54094). Sep 8 23:58:46.252562 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 54094 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:58:46.254440 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:46.259365 systemd-logind[1467]: New session 13 of user core. Sep 8 23:58:46.268787 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 8 23:58:46.401036 sshd[4088]: Connection closed by 10.0.0.1 port 54094 Sep 8 23:58:46.401456 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:46.406173 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:54094.service: Deactivated successfully. Sep 8 23:58:46.408527 systemd[1]: session-13.scope: Deactivated successfully. Sep 8 23:58:46.409631 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit. Sep 8 23:58:46.411000 systemd-logind[1467]: Removed session 13. Sep 8 23:58:51.417648 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:45380.service - OpenSSH per-connection server daemon (10.0.0.1:45380). Sep 8 23:58:51.457205 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 45380 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:58:51.458675 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:51.462659 systemd-logind[1467]: New session 14 of user core. Sep 8 23:58:51.471697 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 8 23:58:51.584915 sshd[4106]: Connection closed by 10.0.0.1 port 45380 Sep 8 23:58:51.585337 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:51.595176 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:45380.service: Deactivated successfully. 
Sep 8 23:58:51.597727 systemd[1]: session-14.scope: Deactivated successfully. Sep 8 23:58:51.599526 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit. Sep 8 23:58:51.607958 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:45384.service - OpenSSH per-connection server daemon (10.0.0.1:45384). Sep 8 23:58:51.609264 systemd-logind[1467]: Removed session 14. Sep 8 23:58:51.642181 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 45384 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:58:51.643745 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:51.648130 systemd-logind[1467]: New session 15 of user core. Sep 8 23:58:51.657688 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 8 23:58:51.839000 sshd[4122]: Connection closed by 10.0.0.1 port 45384 Sep 8 23:58:51.841469 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:51.861049 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:45396.service - OpenSSH per-connection server daemon (10.0.0.1:45396). Sep 8 23:58:51.862165 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:45384.service: Deactivated successfully. Sep 8 23:58:51.866322 systemd[1]: session-15.scope: Deactivated successfully. Sep 8 23:58:51.868290 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit. Sep 8 23:58:51.871204 systemd-logind[1467]: Removed session 15. Sep 8 23:58:51.903495 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 45396 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:58:51.905332 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:51.910147 systemd-logind[1467]: New session 16 of user core. Sep 8 23:58:51.919663 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 8 23:58:52.050125 sshd[4136]: Connection closed by 10.0.0.1 port 45396 Sep 8 23:58:52.050522 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:52.055358 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:45396.service: Deactivated successfully. Sep 8 23:58:52.058033 systemd[1]: session-16.scope: Deactivated successfully. Sep 8 23:58:52.058890 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit. Sep 8 23:58:52.059979 systemd-logind[1467]: Removed session 16. Sep 8 23:58:57.066666 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:45412.service - OpenSSH per-connection server daemon (10.0.0.1:45412). Sep 8 23:58:57.106207 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 45412 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:58:57.108727 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:57.114189 systemd-logind[1467]: New session 17 of user core. Sep 8 23:58:57.130717 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 8 23:58:57.257280 sshd[4152]: Connection closed by 10.0.0.1 port 45412 Sep 8 23:58:57.257581 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:57.262715 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:45412.service: Deactivated successfully. Sep 8 23:58:57.265766 systemd[1]: session-17.scope: Deactivated successfully. Sep 8 23:58:57.266617 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit. Sep 8 23:58:57.268074 systemd-logind[1467]: Removed session 17. 
Sep 8 23:59:02.271064 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:37076.service - OpenSSH per-connection server daemon (10.0.0.1:37076). Sep 8 23:59:02.309740 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 37076 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:02.311364 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:02.315771 systemd-logind[1467]: New session 18 of user core. Sep 8 23:59:02.330975 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 8 23:59:02.471033 sshd[4169]: Connection closed by 10.0.0.1 port 37076 Sep 8 23:59:02.471639 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:02.489998 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:37076.service: Deactivated successfully. Sep 8 23:59:02.492814 systemd[1]: session-18.scope: Deactivated successfully. Sep 8 23:59:02.495803 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit. Sep 8 23:59:02.507485 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:37078.service - OpenSSH per-connection server daemon (10.0.0.1:37078). Sep 8 23:59:02.509413 systemd-logind[1467]: Removed session 18. Sep 8 23:59:02.552414 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 37078 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:02.554853 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:02.562130 systemd-logind[1467]: New session 19 of user core. Sep 8 23:59:02.576788 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 8 23:59:02.985427 sshd[4185]: Connection closed by 10.0.0.1 port 37078 Sep 8 23:59:02.986615 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:03.000442 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:37078.service: Deactivated successfully. Sep 8 23:59:03.004971 systemd[1]: session-19.scope: Deactivated successfully. Sep 8 23:59:03.007712 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit. Sep 8 23:59:03.020248 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:37092.service - OpenSSH per-connection server daemon (10.0.0.1:37092). Sep 8 23:59:03.021794 systemd-logind[1467]: Removed session 19. Sep 8 23:59:03.092783 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 37092 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:03.095279 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:03.102354 systemd-logind[1467]: New session 20 of user core. Sep 8 23:59:03.115861 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 8 23:59:04.264959 sshd[4198]: Connection closed by 10.0.0.1 port 37092 Sep 8 23:59:04.266189 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:04.284936 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:37092.service: Deactivated successfully. Sep 8 23:59:04.288912 systemd[1]: session-20.scope: Deactivated successfully. Sep 8 23:59:04.292903 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit. Sep 8 23:59:04.303438 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:37100.service - OpenSSH per-connection server daemon (10.0.0.1:37100). Sep 8 23:59:04.306178 systemd-logind[1467]: Removed session 20. 
Sep 8 23:59:04.374440 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 37100 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:04.377689 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:04.397023 systemd-logind[1467]: New session 21 of user core. Sep 8 23:59:04.408017 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 8 23:59:04.976413 sshd[4220]: Connection closed by 10.0.0.1 port 37100 Sep 8 23:59:04.978785 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:04.995789 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:37100.service: Deactivated successfully. Sep 8 23:59:05.001469 systemd[1]: session-21.scope: Deactivated successfully. Sep 8 23:59:05.004401 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit. Sep 8 23:59:05.022491 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:37114.service - OpenSSH per-connection server daemon (10.0.0.1:37114). Sep 8 23:59:05.023520 systemd-logind[1467]: Removed session 21. Sep 8 23:59:05.093310 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 37114 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:05.096012 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:05.107965 systemd-logind[1467]: New session 22 of user core. Sep 8 23:59:05.127901 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 8 23:59:05.296015 sshd[4236]: Connection closed by 10.0.0.1 port 37114 Sep 8 23:59:05.296856 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:05.308369 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:37114.service: Deactivated successfully. Sep 8 23:59:05.314661 systemd[1]: session-22.scope: Deactivated successfully. Sep 8 23:59:05.316158 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit. Sep 8 23:59:05.317803 systemd-logind[1467]: Removed session 22. Sep 8 23:59:10.336151 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:55370.service - OpenSSH per-connection server daemon (10.0.0.1:55370). Sep 8 23:59:10.419318 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 55370 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:10.421819 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:10.432410 systemd-logind[1467]: New session 23 of user core. Sep 8 23:59:10.443934 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 8 23:59:10.668195 sshd[4251]: Connection closed by 10.0.0.1 port 55370 Sep 8 23:59:10.668868 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:10.678115 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:55370.service: Deactivated successfully. Sep 8 23:59:10.682432 systemd[1]: session-23.scope: Deactivated successfully. Sep 8 23:59:10.686347 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit. Sep 8 23:59:10.694207 systemd-logind[1467]: Removed session 23. Sep 8 23:59:15.714244 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:55376.service - OpenSSH per-connection server daemon (10.0.0.1:55376). 
Sep 8 23:59:15.777264 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 55376 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:15.779663 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:15.791611 systemd-logind[1467]: New session 24 of user core. Sep 8 23:59:15.806912 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 8 23:59:16.050276 sshd[4266]: Connection closed by 10.0.0.1 port 55376 Sep 8 23:59:16.050415 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:16.056856 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:55376.service: Deactivated successfully. Sep 8 23:59:16.061066 systemd[1]: session-24.scope: Deactivated successfully. Sep 8 23:59:16.079103 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit. Sep 8 23:59:16.093353 systemd-logind[1467]: Removed session 24. Sep 8 23:59:17.641149 kubelet[2635]: E0908 23:59:17.640188 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:21.069410 systemd[1]: Started sshd@24-10.0.0.98:22-10.0.0.1:49424.service - OpenSSH per-connection server daemon (10.0.0.1:49424). Sep 8 23:59:21.148428 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 49424 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:21.151150 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:21.174751 systemd-logind[1467]: New session 25 of user core. Sep 8 23:59:21.188958 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 8 23:59:21.376764 sshd[4284]: Connection closed by 10.0.0.1 port 49424 Sep 8 23:59:21.377204 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:21.383724 systemd[1]: sshd@24-10.0.0.98:22-10.0.0.1:49424.service: Deactivated successfully. Sep 8 23:59:21.386763 systemd[1]: session-25.scope: Deactivated successfully. Sep 8 23:59:21.387761 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit. Sep 8 23:59:21.389425 systemd-logind[1467]: Removed session 25. Sep 8 23:59:22.974017 kernel: hrtimer: interrupt took 4823343 ns Sep 8 23:59:26.417060 systemd[1]: Started sshd@25-10.0.0.98:22-10.0.0.1:49436.service - OpenSSH per-connection server daemon (10.0.0.1:49436). Sep 8 23:59:26.509612 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 49436 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:26.513512 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:26.531265 systemd-logind[1467]: New session 26 of user core. Sep 8 23:59:26.556153 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 8 23:59:26.644417 kubelet[2635]: E0908 23:59:26.644343 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:26.767171 sshd[4299]: Connection closed by 10.0.0.1 port 49436 Sep 8 23:59:26.768508 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:26.796824 systemd[1]: sshd@25-10.0.0.98:22-10.0.0.1:49436.service: Deactivated successfully. Sep 8 23:59:26.807376 systemd[1]: session-26.scope: Deactivated successfully. Sep 8 23:59:26.814189 systemd-logind[1467]: Session 26 logged out. 
Waiting for processes to exit. Sep 8 23:59:26.829936 systemd[1]: Started sshd@26-10.0.0.98:22-10.0.0.1:49438.service - OpenSSH per-connection server daemon (10.0.0.1:49438). Sep 8 23:59:26.832710 systemd-logind[1467]: Removed session 26. Sep 8 23:59:26.891421 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 49438 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:26.897148 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:26.905824 systemd-logind[1467]: New session 27 of user core. Sep 8 23:59:26.917959 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 8 23:59:28.643234 kubelet[2635]: E0908 23:59:28.643191 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:29.573045 containerd[1481]: time="2025-09-08T23:59:29.572694112Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:59:29.578767 containerd[1481]: time="2025-09-08T23:59:29.576948740Z" level=info msg="StopContainer for \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\" with timeout 2 (s)" Sep 8 23:59:29.578767 containerd[1481]: time="2025-09-08T23:59:29.577507680Z" level=info msg="Stop container \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\" with signal terminated" Sep 8 23:59:29.602639 systemd-networkd[1396]: lxc_health: Link DOWN Sep 8 23:59:29.602653 systemd-networkd[1396]: lxc_health: Lost carrier Sep 8 23:59:29.653414 systemd[1]: cri-containerd-9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4.scope: Deactivated successfully. Sep 8 23:59:29.653928 systemd[1]: cri-containerd-9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4.scope: Consumed 9.330s CPU time, 125.7M memory peak, 472K read from disk, 13.3M written to disk. Sep 8 23:59:29.692932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4-rootfs.mount: Deactivated successfully. Sep 8 23:59:29.710888 containerd[1481]: time="2025-09-08T23:59:29.709436409Z" level=info msg="StopContainer for \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\" with timeout 30 (s)" Sep 8 23:59:29.716188 containerd[1481]: time="2025-09-08T23:59:29.713129512Z" level=info msg="Stop container \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\" with signal terminated" Sep 8 23:59:29.730928 containerd[1481]: time="2025-09-08T23:59:29.730837531Z" level=info msg="shim disconnected" id=9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4 namespace=k8s.io Sep 8 23:59:29.730928 containerd[1481]: time="2025-09-08T23:59:29.730922082Z" level=warning msg="cleaning up after shim disconnected" id=9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4 namespace=k8s.io Sep 8 23:59:29.730928 containerd[1481]: time="2025-09-08T23:59:29.730937280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:59:29.749092 systemd[1]: cri-containerd-ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583.scope: Deactivated successfully. 
Sep 8 23:59:29.775681 containerd[1481]: time="2025-09-08T23:59:29.775588230Z" level=warning msg="cleanup warnings time=\"2025-09-08T23:59:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 8 23:59:29.789187 containerd[1481]: time="2025-09-08T23:59:29.789113407Z" level=info msg="StopContainer for \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\" returns successfully" Sep 8 23:59:29.790373 containerd[1481]: time="2025-09-08T23:59:29.790328761Z" level=info msg="StopPodSandbox for \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\"" Sep 8 23:59:29.799810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583-rootfs.mount: Deactivated successfully. Sep 8 23:59:29.811446 containerd[1481]: time="2025-09-08T23:59:29.811375060Z" level=info msg="shim disconnected" id=ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583 namespace=k8s.io Sep 8 23:59:29.811983 containerd[1481]: time="2025-09-08T23:59:29.811750563Z" level=warning msg="cleaning up after shim disconnected" id=ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583 namespace=k8s.io Sep 8 23:59:29.811983 containerd[1481]: time="2025-09-08T23:59:29.811773926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:59:29.831646 containerd[1481]: time="2025-09-08T23:59:29.790379537Z" level=info msg="Container to stop \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:59:29.831646 containerd[1481]: time="2025-09-08T23:59:29.829008919Z" level=info msg="Container to stop \"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:59:29.831646 containerd[1481]: time="2025-09-08T23:59:29.829027854Z" level=info msg="Container to stop \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:59:29.831646 containerd[1481]: time="2025-09-08T23:59:29.829040578Z" level=info msg="Container to stop \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:59:29.831646 containerd[1481]: time="2025-09-08T23:59:29.829059755Z" level=info msg="Container to stop \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:59:29.832668 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c-shm.mount: Deactivated successfully. Sep 8 23:59:29.844655 systemd[1]: cri-containerd-f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c.scope: Deactivated successfully. 
Sep 8 23:59:29.845651 containerd[1481]: time="2025-09-08T23:59:29.845573349Z" level=info msg="StopContainer for \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\" returns successfully" Sep 8 23:59:29.848387 containerd[1481]: time="2025-09-08T23:59:29.848351357Z" level=info msg="StopPodSandbox for \"c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1\"" Sep 8 23:59:29.848725 containerd[1481]: time="2025-09-08T23:59:29.848633701Z" level=info msg="Container to stop \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:59:29.853603 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1-shm.mount: Deactivated successfully. Sep 8 23:59:29.865912 systemd[1]: cri-containerd-c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1.scope: Deactivated successfully. Sep 8 23:59:29.911049 containerd[1481]: time="2025-09-08T23:59:29.910966801Z" level=info msg="shim disconnected" id=f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c namespace=k8s.io Sep 8 23:59:29.911692 containerd[1481]: time="2025-09-08T23:59:29.911451640Z" level=warning msg="cleaning up after shim disconnected" id=f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c namespace=k8s.io Sep 8 23:59:29.911692 containerd[1481]: time="2025-09-08T23:59:29.911476066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:59:29.949192 containerd[1481]: time="2025-09-08T23:59:29.945952884Z" level=info msg="shim disconnected" id=c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1 namespace=k8s.io Sep 8 23:59:29.949192 containerd[1481]: time="2025-09-08T23:59:29.946012496Z" level=warning msg="cleaning up after shim disconnected" id=c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1 namespace=k8s.io Sep 8 23:59:29.949192 containerd[1481]: time="2025-09-08T23:59:29.946023677Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:59:29.969878 containerd[1481]: time="2025-09-08T23:59:29.967007568Z" level=info msg="TearDown network for sandbox \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" successfully" Sep 8 23:59:29.969878 containerd[1481]: time="2025-09-08T23:59:29.969727606Z" level=info msg="StopPodSandbox for \"f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c\" returns successfully" Sep 8 23:59:29.994484 containerd[1481]: time="2025-09-08T23:59:29.994380043Z" level=info msg="TearDown network for sandbox \"c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1\" successfully" Sep 8 23:59:29.994484 containerd[1481]: time="2025-09-08T23:59:29.994415431Z" level=info msg="StopPodSandbox for \"c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1\" returns successfully" Sep 8 23:59:30.141024 kubelet[2635]: I0908 23:59:30.136776 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-host-proc-sys-kernel\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.141024 kubelet[2635]: I0908 23:59:30.140243 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/536e3284-6a59-4693-959a-966e4803b7db-cilium-config-path\") pod 
\"536e3284-6a59-4693-959a-966e4803b7db\" (UID: \"536e3284-6a59-4693-959a-966e4803b7db\") " Sep 8 23:59:30.141764 kubelet[2635]: I0908 23:59:30.141595 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhrlx\" (UniqueName: \"kubernetes.io/projected/d2ee2c01-0f71-4999-820c-16f2c0d07c14-kube-api-access-zhrlx\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.141764 kubelet[2635]: I0908 23:59:30.141639 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-etc-cni-netd\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.141764 kubelet[2635]: I0908 23:59:30.141664 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-run\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.141764 kubelet[2635]: I0908 23:59:30.141685 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-lib-modules\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.141764 kubelet[2635]: I0908 23:59:30.141709 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr6zb\" (UniqueName: \"kubernetes.io/projected/536e3284-6a59-4693-959a-966e4803b7db-kube-api-access-gr6zb\") pod \"536e3284-6a59-4693-959a-966e4803b7db\" (UID: \"536e3284-6a59-4693-959a-966e4803b7db\") " Sep 8 23:59:30.141764 kubelet[2635]: I0908 23:59:30.141734 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2ee2c01-0f71-4999-820c-16f2c0d07c14-hubble-tls\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.142027 kubelet[2635]: I0908 23:59:30.141754 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-bpf-maps\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.142027 kubelet[2635]: I0908 23:59:30.141773 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-xtables-lock\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.142027 kubelet[2635]: I0908 23:59:30.141792 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-cgroup\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.142027 kubelet[2635]: I0908 23:59:30.141813 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-config-path\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: 
\"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.142027 kubelet[2635]: I0908 23:59:30.141835 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2ee2c01-0f71-4999-820c-16f2c0d07c14-clustermesh-secrets\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.142027 kubelet[2635]: I0908 23:59:30.141854 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-hostproc\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.142269 kubelet[2635]: I0908 23:59:30.141872 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cni-path\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.142269 kubelet[2635]: I0908 23:59:30.141888 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-host-proc-sys-net\") pod \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\" (UID: \"d2ee2c01-0f71-4999-820c-16f2c0d07c14\") " Sep 8 23:59:30.142269 kubelet[2635]: I0908 23:59:30.136917 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:59:30.142269 kubelet[2635]: I0908 23:59:30.141963 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:59:30.146010 kubelet[2635]: I0908 23:59:30.144933 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:59:30.146010 kubelet[2635]: I0908 23:59:30.144997 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:59:30.146010 kubelet[2635]: I0908 23:59:30.145020 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:59:30.146010 kubelet[2635]: I0908 23:59:30.145040 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:59:30.147026 kubelet[2635]: I0908 23:59:30.146976 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-hostproc" (OuterVolumeSpecName: "hostproc") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:59:30.147612 kubelet[2635]: I0908 23:59:30.147182 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cni-path" (OuterVolumeSpecName: "cni-path") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:59:30.147612 kubelet[2635]: I0908 23:59:30.147242 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:59:30.148829 kubelet[2635]: I0908 23:59:30.148319 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:59:30.149416 kubelet[2635]: I0908 23:59:30.149374 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ee2c01-0f71-4999-820c-16f2c0d07c14-kube-api-access-zhrlx" (OuterVolumeSpecName: "kube-api-access-zhrlx") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "kube-api-access-zhrlx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:59:30.150698 kubelet[2635]: I0908 23:59:30.150656 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536e3284-6a59-4693-959a-966e4803b7db-kube-api-access-gr6zb" (OuterVolumeSpecName: "kube-api-access-gr6zb") pod "536e3284-6a59-4693-959a-966e4803b7db" (UID: "536e3284-6a59-4693-959a-966e4803b7db"). InnerVolumeSpecName "kube-api-access-gr6zb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:59:30.151575 kubelet[2635]: I0908 23:59:30.151521 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536e3284-6a59-4693-959a-966e4803b7db-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "536e3284-6a59-4693-959a-966e4803b7db" (UID: "536e3284-6a59-4693-959a-966e4803b7db"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:59:30.153387 kubelet[2635]: I0908 23:59:30.153336 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ee2c01-0f71-4999-820c-16f2c0d07c14-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:59:30.153387 kubelet[2635]: I0908 23:59:30.153346 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ee2c01-0f71-4999-820c-16f2c0d07c14-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 8 23:59:30.153577 kubelet[2635]: I0908 23:59:30.153550 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2ee2c01-0f71-4999-820c-16f2c0d07c14" (UID: "d2ee2c01-0f71-4999-820c-16f2c0d07c14"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:59:30.242511 kubelet[2635]: I0908 23:59:30.242171 2635 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242511 kubelet[2635]: I0908 23:59:30.242230 2635 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242511 kubelet[2635]: I0908 23:59:30.242244 2635 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gr6zb\" (UniqueName: \"kubernetes.io/projected/536e3284-6a59-4693-959a-966e4803b7db-kube-api-access-gr6zb\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242511 kubelet[2635]: I0908 23:59:30.242259 2635 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2ee2c01-0f71-4999-820c-16f2c0d07c14-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242511 kubelet[2635]: I0908 23:59:30.242270 2635 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242511 kubelet[2635]: I0908 23:59:30.242280 2635 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242511 kubelet[2635]: I0908 23:59:30.242290 2635 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242511 kubelet[2635]: I0908 23:59:30.242300 2635 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242976 
kubelet[2635]: I0908 23:59:30.242310 2635 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2ee2c01-0f71-4999-820c-16f2c0d07c14-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242976 kubelet[2635]: I0908 23:59:30.242320 2635 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242976 kubelet[2635]: I0908 23:59:30.242330 2635 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242976 kubelet[2635]: I0908 23:59:30.242344 2635 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242976 kubelet[2635]: I0908 23:59:30.242354 2635 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242976 kubelet[2635]: I0908 23:59:30.242364 2635 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/536e3284-6a59-4693-959a-966e4803b7db-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242976 kubelet[2635]: I0908 23:59:30.242374 2635 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhrlx\" (UniqueName: \"kubernetes.io/projected/d2ee2c01-0f71-4999-820c-16f2c0d07c14-kube-api-access-zhrlx\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.242976 kubelet[2635]: I0908 23:59:30.242384 2635 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2ee2c01-0f71-4999-820c-16f2c0d07c14-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 8 23:59:30.392736 kubelet[2635]: I0908 23:59:30.392117 2635 scope.go:117] "RemoveContainer" containerID="ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583" Sep 8 23:59:30.401016 systemd[1]: Removed slice kubepods-besteffort-pod536e3284_6a59_4693_959a_966e4803b7db.slice - libcontainer container kubepods-besteffort-pod536e3284_6a59_4693_959a_966e4803b7db.slice. Sep 8 23:59:30.409161 systemd[1]: Removed slice kubepods-burstable-podd2ee2c01_0f71_4999_820c_16f2c0d07c14.slice - libcontainer container kubepods-burstable-podd2ee2c01_0f71_4999_820c_16f2c0d07c14.slice. Sep 8 23:59:30.410351 systemd[1]: kubepods-burstable-podd2ee2c01_0f71_4999_820c_16f2c0d07c14.slice: Consumed 9.465s CPU time, 126M memory peak, 500K read from disk, 13.3M written to disk. 
Sep 8 23:59:30.412050 containerd[1481]: time="2025-09-08T23:59:30.411630870Z" level=info msg="RemoveContainer for \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\"" Sep 8 23:59:30.427529 containerd[1481]: time="2025-09-08T23:59:30.427469787Z" level=info msg="RemoveContainer for \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\" returns successfully" Sep 8 23:59:30.427928 kubelet[2635]: I0908 23:59:30.427892 2635 scope.go:117] "RemoveContainer" containerID="ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583" Sep 8 23:59:30.428579 containerd[1481]: time="2025-09-08T23:59:30.428436309Z" level=error msg="ContainerStatus for \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\": not found" Sep 8 23:59:30.429821 kubelet[2635]: E0908 23:59:30.429771 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\": not found" containerID="ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583" Sep 8 23:59:30.430094 kubelet[2635]: I0908 23:59:30.430010 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583"} err="failed to get container status \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec34038dd54d9e06db65916db9b6fddfc8227e9afd2e6f184735ceb19abe4583\": not found" Sep 8 23:59:30.430094 kubelet[2635]: I0908 23:59:30.430083 2635 scope.go:117] "RemoveContainer" containerID="9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4" Sep 8 23:59:30.432720 containerd[1481]: time="2025-09-08T23:59:30.432669525Z" level=info msg="RemoveContainer for \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\"" Sep 8 23:59:30.447500 containerd[1481]: time="2025-09-08T23:59:30.447343704Z" level=info msg="RemoveContainer for \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\" returns successfully" Sep 8 23:59:30.448809 kubelet[2635]: I0908 23:59:30.447967 2635 scope.go:117] "RemoveContainer" containerID="77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c" Sep 8 23:59:30.450726 containerd[1481]: time="2025-09-08T23:59:30.450443251Z" level=info msg="RemoveContainer for \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\"" Sep 8 23:59:30.460604 containerd[1481]: time="2025-09-08T23:59:30.458694183Z" level=info msg="RemoveContainer for \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\" returns successfully" Sep 8 23:59:30.462402 kubelet[2635]: I0908 23:59:30.462357 2635 scope.go:117] "RemoveContainer" containerID="874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f" Sep 8 23:59:30.478415 containerd[1481]: time="2025-09-08T23:59:30.477863966Z" level=info msg="RemoveContainer for \"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\"" Sep 8 23:59:30.489163 containerd[1481]: time="2025-09-08T23:59:30.487400615Z" level=info msg="RemoveContainer for \"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\" returns successfully" Sep 8 23:59:30.489439 kubelet[2635]: I0908 23:59:30.487780 2635 
scope.go:117] "RemoveContainer" containerID="aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7" Sep 8 23:59:30.492440 containerd[1481]: time="2025-09-08T23:59:30.492393381Z" level=info msg="RemoveContainer for \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\"" Sep 8 23:59:30.503136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c26b733dd816075ed0d63e0350fd39bbacaa32e3425192f067e2b3123806a1b1-rootfs.mount: Deactivated successfully. Sep 8 23:59:30.503290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f528480c147169b81f27793d1b7c810513b976020ab5a4933fbe1563b9a21f6c-rootfs.mount: Deactivated successfully. Sep 8 23:59:30.503403 systemd[1]: var-lib-kubelet-pods-536e3284\x2d6a59\x2d4693\x2d959a\x2d966e4803b7db-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgr6zb.mount: Deactivated successfully. Sep 8 23:59:30.504260 systemd[1]: var-lib-kubelet-pods-d2ee2c01\x2d0f71\x2d4999\x2d820c\x2d16f2c0d07c14-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 8 23:59:30.504396 systemd[1]: var-lib-kubelet-pods-d2ee2c01\x2d0f71\x2d4999\x2d820c\x2d16f2c0d07c14-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 8 23:59:30.504520 systemd[1]: var-lib-kubelet-pods-d2ee2c01\x2d0f71\x2d4999\x2d820c\x2d16f2c0d07c14-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzhrlx.mount: Deactivated successfully. Sep 8 23:59:30.511431 containerd[1481]: time="2025-09-08T23:59:30.510842316Z" level=info msg="RemoveContainer for \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\" returns successfully" Sep 8 23:59:30.511625 kubelet[2635]: I0908 23:59:30.511164 2635 scope.go:117] "RemoveContainer" containerID="7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2" Sep 8 23:59:30.513398 containerd[1481]: time="2025-09-08T23:59:30.513361352Z" level=info msg="RemoveContainer for \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\"" Sep 8 23:59:30.524522 containerd[1481]: time="2025-09-08T23:59:30.524427271Z" level=info msg="RemoveContainer for \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\" returns successfully" Sep 8 23:59:30.525442 kubelet[2635]: I0908 23:59:30.524826 2635 scope.go:117] "RemoveContainer" containerID="9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4" Sep 8 23:59:30.525615 containerd[1481]: time="2025-09-08T23:59:30.525328740Z" level=error msg="ContainerStatus for \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\": not found" Sep 8 23:59:30.526112 kubelet[2635]: E0908 23:59:30.525969 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\": not found" containerID="9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4" Sep 8 23:59:30.526112 kubelet[2635]: I0908 23:59:30.526016 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4"} err="failed to get container status \"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"9abe5c28d627b17df07361ec78380ff39ba8a33dfec8f0ceb63879c1dd52cdc4\": not found" Sep 8 23:59:30.526112 kubelet[2635]: I0908 23:59:30.526040 2635 scope.go:117] "RemoveContainer" containerID="77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c" Sep 8 23:59:30.526423 containerd[1481]: time="2025-09-08T23:59:30.526394390Z" level=error msg="ContainerStatus for \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\": not found" Sep 8 23:59:30.528253 kubelet[2635]: E0908 23:59:30.528144 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\": not found" containerID="77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c" Sep 8 23:59:30.528253 kubelet[2635]: I0908 23:59:30.528173 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c"} err="failed to get container status \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"77943a636d3e924abb896588b614c248d2a04682bb3e8d7be81ab8c5fa8d5a9c\": not found" Sep 8 23:59:30.528253 kubelet[2635]: I0908 23:59:30.528192 2635 scope.go:117] "RemoveContainer" containerID="874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f" Sep 8 23:59:30.530680 containerd[1481]: time="2025-09-08T23:59:30.528672309Z" level=error msg="ContainerStatus for \"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\": not found" Sep 8 23:59:30.530680 containerd[1481]: time="2025-09-08T23:59:30.529510768Z" level=error msg="ContainerStatus for \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\": not found" Sep 8 23:59:30.530680 containerd[1481]: time="2025-09-08T23:59:30.529993704Z" level=error msg="ContainerStatus for \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\": not found" Sep 8 23:59:30.530814 kubelet[2635]: E0908 23:59:30.528795 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\": not found" containerID="874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f" Sep 8 23:59:30.530814 kubelet[2635]: I0908 23:59:30.528865 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f"} err="failed to get container status \"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"874bf8fefb10075bdb6082afe4659bc81b3611510f6ae5bd39165c447213510f\": not found" Sep 8 23:59:30.530814 kubelet[2635]: I0908 23:59:30.528885 2635 scope.go:117] "RemoveContainer" containerID="aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7" Sep 8 23:59:30.530814 kubelet[2635]: E0908 23:59:30.529705 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\": not found" containerID="aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7" Sep 8 23:59:30.530814 kubelet[2635]: I0908 23:59:30.529777 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7"} err="failed to get container status \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\": rpc error: code = NotFound desc = an error occurred when try to find container \"aac51ef3973b90052c1d4367eb915f97fc094017c6ccb6e95d08a94caff46ec7\": not found" Sep 8 23:59:30.530814 kubelet[2635]: I0908 23:59:30.529800 2635 scope.go:117] "RemoveContainer" containerID="7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2" Sep 8 23:59:30.531052 kubelet[2635]: E0908 23:59:30.530180 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\": not found" containerID="7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2" Sep 8 23:59:30.531052 kubelet[2635]: I0908 23:59:30.530228 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2"} err="failed to get container status \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"7031d9dbcae581444d9e3d1cd2c47a8c7b45b8141049fb436a0958b996540eb2\": not found" Sep 8 23:59:30.642156 kubelet[2635]: E0908 23:59:30.640463 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:30.647457 kubelet[2635]: I0908 23:59:30.647296 2635 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536e3284-6a59-4693-959a-966e4803b7db" path="/var/lib/kubelet/pods/536e3284-6a59-4693-959a-966e4803b7db/volumes" Sep 8 23:59:30.649102 kubelet[2635]: I0908 23:59:30.648103 2635 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ee2c01-0f71-4999-820c-16f2c0d07c14" path="/var/lib/kubelet/pods/d2ee2c01-0f71-4999-820c-16f2c0d07c14/volumes" Sep 8 23:59:31.087311 sshd[4314]: Connection closed by 10.0.0.1 port 49438 Sep 8 23:59:31.090046 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:31.112523 systemd[1]: sshd@26-10.0.0.98:22-10.0.0.1:49438.service: Deactivated successfully. Sep 8 23:59:31.121032 systemd[1]: session-27.scope: Deactivated successfully. Sep 8 23:59:31.122661 systemd[1]: session-27.scope: Consumed 1.351s CPU time, 32M memory peak. Sep 8 23:59:31.129235 systemd-logind[1467]: Session 27 logged out. Waiting for processes to exit. 
Sep 8 23:59:31.149981 systemd[1]: Started sshd@27-10.0.0.98:22-10.0.0.1:48626.service - OpenSSH per-connection server daemon (10.0.0.1:48626). Sep 8 23:59:31.155065 systemd-logind[1467]: Removed session 27. Sep 8 23:59:31.241727 sshd[4480]: Accepted publickey for core from 10.0.0.1 port 48626 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:31.242704 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:31.269710 systemd-logind[1467]: New session 28 of user core. Sep 8 23:59:31.281452 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 8 23:59:32.164372 sshd[4483]: Connection closed by 10.0.0.1 port 48626 Sep 8 23:59:32.168829 sshd-session[4480]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:32.205484 systemd[1]: Started sshd@28-10.0.0.98:22-10.0.0.1:48636.service - OpenSSH per-connection server daemon (10.0.0.1:48636). Sep 8 23:59:32.206433 systemd[1]: sshd@27-10.0.0.98:22-10.0.0.1:48626.service: Deactivated successfully. Sep 8 23:59:32.213708 systemd[1]: session-28.scope: Deactivated successfully. Sep 8 23:59:32.218924 systemd-logind[1467]: Session 28 logged out. Waiting for processes to exit. Sep 8 23:59:32.222772 systemd-logind[1467]: Removed session 28. Sep 8 23:59:32.236831 systemd[1]: Created slice kubepods-burstable-pode1313cfe_39a7_41f2_b637_ce91ee726605.slice - libcontainer container kubepods-burstable-pode1313cfe_39a7_41f2_b637_ce91ee726605.slice. Sep 8 23:59:32.302293 sshd[4492]: Accepted publickey for core from 10.0.0.1 port 48636 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:32.306287 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:32.318229 systemd-logind[1467]: New session 29 of user core. Sep 8 23:59:32.337251 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 8 23:59:32.377661 kubelet[2635]: I0908 23:59:32.377518 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1313cfe-39a7-41f2-b637-ce91ee726605-cni-path\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.377661 kubelet[2635]: I0908 23:59:32.377630 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1313cfe-39a7-41f2-b637-ce91ee726605-hostproc\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.377661 kubelet[2635]: I0908 23:59:32.377666 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1313cfe-39a7-41f2-b637-ce91ee726605-cilium-cgroup\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.377661 kubelet[2635]: I0908 23:59:32.377692 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc89s\" (UniqueName: \"kubernetes.io/projected/e1313cfe-39a7-41f2-b637-ce91ee726605-kube-api-access-tc89s\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378491 kubelet[2635]: I0908 23:59:32.377716 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1313cfe-39a7-41f2-b637-ce91ee726605-clustermesh-secrets\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378491 kubelet[2635]: I0908 23:59:32.377741 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1313cfe-39a7-41f2-b637-ce91ee726605-cilium-run\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378491 kubelet[2635]: I0908 23:59:32.377762 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1313cfe-39a7-41f2-b637-ce91ee726605-xtables-lock\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378491 kubelet[2635]: I0908 23:59:32.377782 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1313cfe-39a7-41f2-b637-ce91ee726605-cilium-config-path\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378491 kubelet[2635]: I0908 23:59:32.377887 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e1313cfe-39a7-41f2-b637-ce91ee726605-cilium-ipsec-secrets\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378693 kubelet[2635]: I0908 23:59:32.377955 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/e1313cfe-39a7-41f2-b637-ce91ee726605-host-proc-sys-kernel\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378693 kubelet[2635]: I0908 23:59:32.377998 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1313cfe-39a7-41f2-b637-ce91ee726605-bpf-maps\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378693 kubelet[2635]: I0908 23:59:32.378022 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1313cfe-39a7-41f2-b637-ce91ee726605-etc-cni-netd\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378693 kubelet[2635]: I0908 23:59:32.378061 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1313cfe-39a7-41f2-b637-ce91ee726605-lib-modules\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378693 kubelet[2635]: I0908 23:59:32.378105 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1313cfe-39a7-41f2-b637-ce91ee726605-host-proc-sys-net\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.378693 kubelet[2635]: I0908 23:59:32.378132 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1313cfe-39a7-41f2-b637-ce91ee726605-hubble-tls\") pod \"cilium-pp69n\" (UID: \"e1313cfe-39a7-41f2-b637-ce91ee726605\") " pod="kube-system/cilium-pp69n" Sep 8 23:59:32.401091 sshd[4497]: Connection closed by 10.0.0.1 port 48636 Sep 8 23:59:32.401989 sshd-session[4492]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:32.423022 systemd[1]: Started sshd@29-10.0.0.98:22-10.0.0.1:48642.service - OpenSSH per-connection server daemon (10.0.0.1:48642). Sep 8 23:59:32.426095 systemd[1]: sshd@28-10.0.0.98:22-10.0.0.1:48636.service: Deactivated successfully. Sep 8 23:59:32.436686 systemd[1]: session-29.scope: Deactivated successfully. Sep 8 23:59:32.441634 systemd-logind[1467]: Session 29 logged out. Waiting for processes to exit. Sep 8 23:59:32.444698 systemd-logind[1467]: Removed session 29. Sep 8 23:59:32.496917 sshd[4503]: Accepted publickey for core from 10.0.0.1 port 48642 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:59:32.500031 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:59:32.544868 systemd-logind[1467]: New session 30 of user core. 
Sep 8 23:59:32.547421 kubelet[2635]: E0908 23:59:32.545264 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:32.547529 containerd[1481]: time="2025-09-08T23:59:32.545902208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pp69n,Uid:e1313cfe-39a7-41f2-b637-ce91ee726605,Namespace:kube-system,Attempt:0,}" Sep 8 23:59:32.557507 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 8 23:59:32.655983 containerd[1481]: time="2025-09-08T23:59:32.654455517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:59:32.655983 containerd[1481]: time="2025-09-08T23:59:32.654573079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:59:32.655983 containerd[1481]: time="2025-09-08T23:59:32.654595923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:59:32.655983 containerd[1481]: time="2025-09-08T23:59:32.654728073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:59:32.707928 systemd[1]: Started cri-containerd-4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755.scope - libcontainer container 4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755. Sep 8 23:59:32.765400 containerd[1481]: time="2025-09-08T23:59:32.765329883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pp69n,Uid:e1313cfe-39a7-41f2-b637-ce91ee726605,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\"" Sep 8 23:59:32.767116 kubelet[2635]: E0908 23:59:32.766408 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:32.793101 containerd[1481]: time="2025-09-08T23:59:32.792292788Z" level=info msg="CreateContainer within sandbox \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:59:32.857323 containerd[1481]: time="2025-09-08T23:59:32.857138982Z" level=info msg="CreateContainer within sandbox \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b77ee95f2d4dd2f0da1c759e7d82ed6fe8cf4ea6d990c1dd3f9580864ad5be8f\"" Sep 8 23:59:32.862896 containerd[1481]: time="2025-09-08T23:59:32.862655896Z" level=info msg="StartContainer for \"b77ee95f2d4dd2f0da1c759e7d82ed6fe8cf4ea6d990c1dd3f9580864ad5be8f\"" Sep 8 23:59:32.921882 systemd[1]: Started cri-containerd-b77ee95f2d4dd2f0da1c759e7d82ed6fe8cf4ea6d990c1dd3f9580864ad5be8f.scope - libcontainer container b77ee95f2d4dd2f0da1c759e7d82ed6fe8cf4ea6d990c1dd3f9580864ad5be8f. Sep 8 23:59:32.984018 containerd[1481]: time="2025-09-08T23:59:32.983850249Z" level=info msg="StartContainer for \"b77ee95f2d4dd2f0da1c759e7d82ed6fe8cf4ea6d990c1dd3f9580864ad5be8f\" returns successfully" Sep 8 23:59:33.007406 systemd[1]: cri-containerd-b77ee95f2d4dd2f0da1c759e7d82ed6fe8cf4ea6d990c1dd3f9580864ad5be8f.scope: Deactivated successfully. 
Sep 8 23:59:33.100958 containerd[1481]: time="2025-09-08T23:59:33.100862407Z" level=info msg="shim disconnected" id=b77ee95f2d4dd2f0da1c759e7d82ed6fe8cf4ea6d990c1dd3f9580864ad5be8f namespace=k8s.io Sep 8 23:59:33.100958 containerd[1481]: time="2025-09-08T23:59:33.100939644Z" level=warning msg="cleaning up after shim disconnected" id=b77ee95f2d4dd2f0da1c759e7d82ed6fe8cf4ea6d990c1dd3f9580864ad5be8f namespace=k8s.io Sep 8 23:59:33.100958 containerd[1481]: time="2025-09-08T23:59:33.100956887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:59:33.421750 kubelet[2635]: E0908 23:59:33.419963 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:33.444119 containerd[1481]: time="2025-09-08T23:59:33.440114867Z" level=info msg="CreateContainer within sandbox \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:59:33.531265 containerd[1481]: time="2025-09-08T23:59:33.531125916Z" level=info msg="CreateContainer within sandbox \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b243f82b2618405c5143a47402e29ea3a777bd34b4811c2e0d516a59482448be\"" Sep 8 23:59:33.532627 containerd[1481]: time="2025-09-08T23:59:33.532581575Z" level=info msg="StartContainer for \"b243f82b2618405c5143a47402e29ea3a777bd34b4811c2e0d516a59482448be\"" Sep 8 23:59:33.633931 systemd[1]: Started cri-containerd-b243f82b2618405c5143a47402e29ea3a777bd34b4811c2e0d516a59482448be.scope - libcontainer container b243f82b2618405c5143a47402e29ea3a777bd34b4811c2e0d516a59482448be. Sep 8 23:59:33.707627 kubelet[2635]: E0908 23:59:33.705947 2635 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:59:33.716170 containerd[1481]: time="2025-09-08T23:59:33.714009730Z" level=info msg="StartContainer for \"b243f82b2618405c5143a47402e29ea3a777bd34b4811c2e0d516a59482448be\" returns successfully" Sep 8 23:59:33.735982 systemd[1]: cri-containerd-b243f82b2618405c5143a47402e29ea3a777bd34b4811c2e0d516a59482448be.scope: Deactivated successfully. Sep 8 23:59:33.798231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b243f82b2618405c5143a47402e29ea3a777bd34b4811c2e0d516a59482448be-rootfs.mount: Deactivated successfully. 
Sep 8 23:59:33.815721 containerd[1481]: time="2025-09-08T23:59:33.815631913Z" level=info msg="shim disconnected" id=b243f82b2618405c5143a47402e29ea3a777bd34b4811c2e0d516a59482448be namespace=k8s.io Sep 8 23:59:33.815721 containerd[1481]: time="2025-09-08T23:59:33.815712677Z" level=warning msg="cleaning up after shim disconnected" id=b243f82b2618405c5143a47402e29ea3a777bd34b4811c2e0d516a59482448be namespace=k8s.io Sep 8 23:59:33.815721 containerd[1481]: time="2025-09-08T23:59:33.815725090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:59:34.433959 kubelet[2635]: E0908 23:59:34.428121 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:34.475168 containerd[1481]: time="2025-09-08T23:59:34.471440263Z" level=info msg="CreateContainer within sandbox \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:59:34.511452 containerd[1481]: time="2025-09-08T23:59:34.511373196Z" level=info msg="CreateContainer within sandbox \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f15de31e7bc10b878bdb5a303908c49f056893e05c2bbe231b6127d0b485796f\"" Sep 8 23:59:34.512841 containerd[1481]: time="2025-09-08T23:59:34.512787264Z" level=info msg="StartContainer for \"f15de31e7bc10b878bdb5a303908c49f056893e05c2bbe231b6127d0b485796f\"" Sep 8 23:59:34.596946 systemd[1]: Started cri-containerd-f15de31e7bc10b878bdb5a303908c49f056893e05c2bbe231b6127d0b485796f.scope - libcontainer container f15de31e7bc10b878bdb5a303908c49f056893e05c2bbe231b6127d0b485796f. Sep 8 23:59:34.648906 kubelet[2635]: E0908 23:59:34.647774 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:34.699283 systemd[1]: cri-containerd-f15de31e7bc10b878bdb5a303908c49f056893e05c2bbe231b6127d0b485796f.scope: Deactivated successfully. Sep 8 23:59:34.700947 containerd[1481]: time="2025-09-08T23:59:34.700911975Z" level=info msg="StartContainer for \"f15de31e7bc10b878bdb5a303908c49f056893e05c2bbe231b6127d0b485796f\" returns successfully" Sep 8 23:59:34.818907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f15de31e7bc10b878bdb5a303908c49f056893e05c2bbe231b6127d0b485796f-rootfs.mount: Deactivated successfully. 
Sep 8 23:59:34.831605 containerd[1481]: time="2025-09-08T23:59:34.831249674Z" level=info msg="shim disconnected" id=f15de31e7bc10b878bdb5a303908c49f056893e05c2bbe231b6127d0b485796f namespace=k8s.io Sep 8 23:59:34.831605 containerd[1481]: time="2025-09-08T23:59:34.831318955Z" level=warning msg="cleaning up after shim disconnected" id=f15de31e7bc10b878bdb5a303908c49f056893e05c2bbe231b6127d0b485796f namespace=k8s.io Sep 8 23:59:34.831605 containerd[1481]: time="2025-09-08T23:59:34.831331940Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:59:35.439125 kubelet[2635]: E0908 23:59:35.438525 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:35.466900 containerd[1481]: time="2025-09-08T23:59:35.466839756Z" level=info msg="CreateContainer within sandbox \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:59:35.533597 containerd[1481]: time="2025-09-08T23:59:35.533448413Z" level=info msg="CreateContainer within sandbox \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a1f1a687c8346c5a02423f8e90f42334200b4a8ceda651ba593c82d4d7f00640\"" Sep 8 23:59:35.539580 containerd[1481]: time="2025-09-08T23:59:35.539447975Z" level=info msg="StartContainer for \"a1f1a687c8346c5a02423f8e90f42334200b4a8ceda651ba593c82d4d7f00640\"" Sep 8 23:59:35.648894 systemd[1]: Started cri-containerd-a1f1a687c8346c5a02423f8e90f42334200b4a8ceda651ba593c82d4d7f00640.scope - libcontainer container a1f1a687c8346c5a02423f8e90f42334200b4a8ceda651ba593c82d4d7f00640. Sep 8 23:59:35.728659 systemd[1]: cri-containerd-a1f1a687c8346c5a02423f8e90f42334200b4a8ceda651ba593c82d4d7f00640.scope: Deactivated successfully. Sep 8 23:59:35.737492 containerd[1481]: time="2025-09-08T23:59:35.736615192Z" level=info msg="StartContainer for \"a1f1a687c8346c5a02423f8e90f42334200b4a8ceda651ba593c82d4d7f00640\" returns successfully" Sep 8 23:59:35.790145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1f1a687c8346c5a02423f8e90f42334200b4a8ceda651ba593c82d4d7f00640-rootfs.mount: Deactivated successfully. 
Sep 8 23:59:35.806088 containerd[1481]: time="2025-09-08T23:59:35.802568068Z" level=info msg="shim disconnected" id=a1f1a687c8346c5a02423f8e90f42334200b4a8ceda651ba593c82d4d7f00640 namespace=k8s.io Sep 8 23:59:35.806088 containerd[1481]: time="2025-09-08T23:59:35.802636858Z" level=warning msg="cleaning up after shim disconnected" id=a1f1a687c8346c5a02423f8e90f42334200b4a8ceda651ba593c82d4d7f00640 namespace=k8s.io Sep 8 23:59:35.806088 containerd[1481]: time="2025-09-08T23:59:35.802649242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:59:36.458864 kubelet[2635]: E0908 23:59:36.458800 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:36.485518 containerd[1481]: time="2025-09-08T23:59:36.485436753Z" level=info msg="CreateContainer within sandbox \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:59:36.566590 containerd[1481]: time="2025-09-08T23:59:36.566494908Z" level=info msg="CreateContainer within sandbox \"4b6af3defea05861c082d24878e3fa930b973feb52ec96990c103ec135834755\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96f6cf0fb25702b499b4d6b22302adea1b64e1f09eb4abed189504c324314b49\"" Sep 8 23:59:36.569093 containerd[1481]: time="2025-09-08T23:59:36.567615079Z" level=info msg="StartContainer for \"96f6cf0fb25702b499b4d6b22302adea1b64e1f09eb4abed189504c324314b49\"" Sep 8 23:59:36.640959 systemd[1]: Started cri-containerd-96f6cf0fb25702b499b4d6b22302adea1b64e1f09eb4abed189504c324314b49.scope - libcontainer container 96f6cf0fb25702b499b4d6b22302adea1b64e1f09eb4abed189504c324314b49. Sep 8 23:59:36.766279 containerd[1481]: time="2025-09-08T23:59:36.754022862Z" level=info msg="StartContainer for \"96f6cf0fb25702b499b4d6b22302adea1b64e1f09eb4abed189504c324314b49\" returns successfully" Sep 8 23:59:37.475848 kubelet[2635]: E0908 23:59:37.472731 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:37.524849 kubelet[2635]: I0908 23:59:37.524208 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pp69n" podStartSLOduration=5.524182271 podStartE2EDuration="5.524182271s" podCreationTimestamp="2025-09-08 23:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:59:37.519319364 +0000 UTC m=+98.977144359" watchObservedRunningTime="2025-09-08 23:59:37.524182271 +0000 UTC m=+98.982007245" Sep 8 23:59:37.924111 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 8 23:59:38.549982 kubelet[2635]: E0908 23:59:38.549914 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:41.504682 systemd[1]: run-containerd-runc-k8s.io-96f6cf0fb25702b499b4d6b22302adea1b64e1f09eb4abed189504c324314b49-runc.eVZ1oB.mount: Deactivated successfully. 
Sep 8 23:59:41.641644 kubelet[2635]: E0908 23:59:41.640096 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:43.105362 systemd-networkd[1396]: lxc_health: Link UP Sep 8 23:59:43.162025 systemd-networkd[1396]: lxc_health: Gained carrier Sep 8 23:59:43.817099 systemd[1]: run-containerd-runc-k8s.io-96f6cf0fb25702b499b4d6b22302adea1b64e1f09eb4abed189504c324314b49-runc.7qED3A.mount: Deactivated successfully. Sep 8 23:59:44.551701 kubelet[2635]: E0908 23:59:44.551642 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:45.160872 systemd-networkd[1396]: lxc_health: Gained IPv6LL Sep 8 23:59:45.510072 kubelet[2635]: E0908 23:59:45.510017 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:46.517968 kubelet[2635]: E0908 23:59:46.511974 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:59:50.798660 sshd[4514]: Connection closed by 10.0.0.1 port 48642 Sep 8 23:59:50.801515 sshd-session[4503]: pam_unix(sshd:session): session closed for user core Sep 8 23:59:50.810338 systemd[1]: sshd@29-10.0.0.98:22-10.0.0.1:48642.service: Deactivated successfully. Sep 8 23:59:50.819906 systemd[1]: session-30.scope: Deactivated successfully. Sep 8 23:59:50.826644 systemd-logind[1467]: Session 30 logged out. Waiting for processes to exit. Sep 8 23:59:50.833945 systemd-logind[1467]: Removed session 30.