Sep 4 23:51:29.911030 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:03:18 -00 2025
Sep 4 23:51:29.911053 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:51:29.911064 kernel: BIOS-provided physical RAM map:
Sep 4 23:51:29.911071 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 23:51:29.911077 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 4 23:51:29.911084 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 4 23:51:29.911091 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 4 23:51:29.911098 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 4 23:51:29.911105 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 4 23:51:29.911111 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 4 23:51:29.911118 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 4 23:51:29.911127 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 4 23:51:29.911133 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 4 23:51:29.911140 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 4 23:51:29.911148 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 4 23:51:29.911156 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 4 23:51:29.911165 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Sep 4 23:51:29.911172 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Sep 4 23:51:29.911179 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Sep 4 23:51:29.911186 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Sep 4 23:51:29.911193 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 4 23:51:29.911200 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 4 23:51:29.911207 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 23:51:29.911214 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 23:51:29.911221 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 4 23:51:29.911228 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 23:51:29.911236 kernel: NX (Execute Disable) protection: active
Sep 4 23:51:29.911245 kernel: APIC: Static calls initialized
Sep 4 23:51:29.911252 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Sep 4 23:51:29.911259 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Sep 4 23:51:29.911266 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Sep 4 23:51:29.911273 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Sep 4 23:51:29.911280 kernel: extended physical RAM map:
Sep 4 23:51:29.911287 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 23:51:29.911294 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 4 23:51:29.911301 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 4 23:51:29.911308 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 4 23:51:29.911315 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 4 23:51:29.911322 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 4 23:51:29.911332 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 4 23:51:29.911342 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Sep 4 23:51:29.911350 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Sep 4 23:51:29.911357 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Sep 4 23:51:29.911364 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Sep 4 23:51:29.911371 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Sep 4 23:51:29.911381 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 4 23:51:29.911389 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 4 23:51:29.911396 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 4 23:51:29.911403 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 4 23:51:29.911411 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 4 23:51:29.911418 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Sep 4 23:51:29.911425 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Sep 4 23:51:29.911433 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Sep 4 23:51:29.911440 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Sep 4 23:51:29.911450 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 4 23:51:29.911457 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 4 23:51:29.911465 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 23:51:29.911472 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 23:51:29.911480 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 4 23:51:29.911487 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 23:51:29.911494 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:51:29.911502 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Sep 4 23:51:29.911509 kernel: random: crng init done
Sep 4 23:51:29.911516 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 4 23:51:29.911524 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 4 23:51:29.911531 kernel: secureboot: Secure boot disabled
Sep 4 23:51:29.911541 kernel: SMBIOS 2.8 present.
Sep 4 23:51:29.911548 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 4 23:51:29.911556 kernel: Hypervisor detected: KVM
Sep 4 23:51:29.911571 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 23:51:29.911579 kernel: kvm-clock: using sched offset of 3744306444 cycles
Sep 4 23:51:29.911587 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 23:51:29.911603 kernel: tsc: Detected 2794.748 MHz processor
Sep 4 23:51:29.911613 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 23:51:29.911621 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 23:51:29.911629 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 4 23:51:29.911641 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 23:51:29.911649 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 23:51:29.911656 kernel: Using GB pages for direct mapping
Sep 4 23:51:29.911664 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:51:29.911672 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 4 23:51:29.911679 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 4 23:51:29.911687 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:51:29.911695 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:51:29.911702 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 4 23:51:29.911712 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:51:29.911719 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:51:29.911727 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:51:29.911734 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:51:29.911742 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 4 23:51:29.911749 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 4 23:51:29.911757 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 4 23:51:29.911764 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 4 23:51:29.911772 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 4 23:51:29.911781 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 4 23:51:29.911789 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 4 23:51:29.911796 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 4 23:51:29.911804 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 4 23:51:29.911811 kernel: No NUMA configuration found
Sep 4 23:51:29.911819 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 4 23:51:29.911826 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Sep 4 23:51:29.911834 kernel: Zone ranges:
Sep 4 23:51:29.911841 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 23:51:29.911851 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 4 23:51:29.911859 kernel: Normal empty
Sep 4 23:51:29.911866 kernel: Movable zone start for each node
Sep 4 23:51:29.911873 kernel: Early memory node ranges
Sep 4 23:51:29.911881 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 23:51:29.911888 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 4 23:51:29.911895 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 4 23:51:29.911903 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 4 23:51:29.911910 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 4 23:51:29.911918 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 4 23:51:29.911927 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Sep 4 23:51:29.911935 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Sep 4 23:51:29.911942 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 4 23:51:29.911949 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 23:51:29.911957 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 23:51:29.911985 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 4 23:51:29.911996 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 23:51:29.912003 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 4 23:51:29.912011 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 4 23:51:29.912019 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 4 23:51:29.912027 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 4 23:51:29.912034 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 4 23:51:29.912044 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 23:51:29.912052 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 23:51:29.912060 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 23:51:29.912068 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 23:51:29.912078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 23:51:29.912086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 23:51:29.912093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 23:51:29.912101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 23:51:29.912109 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 23:51:29.912117 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 23:51:29.912124 kernel: TSC deadline timer available
Sep 4 23:51:29.912132 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 4 23:51:29.912140 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 23:51:29.912148 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 23:51:29.912157 kernel: kvm-guest: setup PV sched yield
Sep 4 23:51:29.912165 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 4 23:51:29.912173 kernel: Booting paravirtualized kernel on KVM
Sep 4 23:51:29.912181 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 23:51:29.912189 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 23:51:29.912197 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 4 23:51:29.912205 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 4 23:51:29.912212 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 23:51:29.912220 kernel: kvm-guest: PV spinlocks enabled
Sep 4 23:51:29.912230 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 23:51:29.912239 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:51:29.912247 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:51:29.912255 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:51:29.912263 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:51:29.912271 kernel: Fallback order for Node 0: 0
Sep 4 23:51:29.912278 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Sep 4 23:51:29.912286 kernel: Policy zone: DMA32
Sep 4 23:51:29.912296 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:51:29.912304 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 177824K reserved, 0K cma-reserved)
Sep 4 23:51:29.912312 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 23:51:29.912320 kernel: ftrace: allocating 37943 entries in 149 pages
Sep 4 23:51:29.912328 kernel: ftrace: allocated 149 pages with 4 groups
Sep 4 23:51:29.912336 kernel: Dynamic Preempt: voluntary
Sep 4 23:51:29.912343 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:51:29.912352 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:51:29.912360 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 23:51:29.912370 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:51:29.912378 kernel: Rude variant of Tasks RCU enabled.
Sep 4 23:51:29.912386 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:51:29.912393 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:51:29.912401 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 23:51:29.912409 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 23:51:29.912417 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:51:29.912424 kernel: Console: colour dummy device 80x25
Sep 4 23:51:29.912432 kernel: printk: console [ttyS0] enabled
Sep 4 23:51:29.912442 kernel: ACPI: Core revision 20230628
Sep 4 23:51:29.912450 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 23:51:29.912458 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 23:51:29.912466 kernel: x2apic enabled
Sep 4 23:51:29.912473 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 23:51:29.912481 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 23:51:29.912489 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 23:51:29.912497 kernel: kvm-guest: setup PV IPIs
Sep 4 23:51:29.912505 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 23:51:29.912515 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 23:51:29.912522 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 4 23:51:29.912530 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 23:51:29.912538 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 23:51:29.912546 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 23:51:29.912553 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 23:51:29.912568 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 23:51:29.912577 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 23:51:29.912585 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 23:51:29.912595 kernel: active return thunk: retbleed_return_thunk
Sep 4 23:51:29.912602 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 23:51:29.912610 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 23:51:29.912618 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 23:51:29.912626 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 23:51:29.912634 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 23:51:29.912642 kernel: active return thunk: srso_return_thunk
Sep 4 23:51:29.912650 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 23:51:29.912660 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 23:51:29.912668 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 23:51:29.912676 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 23:51:29.912684 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 23:51:29.912691 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 23:51:29.912699 kernel: Freeing SMP alternatives memory: 32K
Sep 4 23:51:29.912707 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:51:29.912715 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:51:29.912722 kernel: landlock: Up and running.
Sep 4 23:51:29.912732 kernel: SELinux: Initializing.
Sep 4 23:51:29.912740 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:51:29.912748 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:51:29.912756 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 23:51:29.912768 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:51:29.912792 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:51:29.912809 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:51:29.912832 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 23:51:29.912840 kernel: ... version: 0
Sep 4 23:51:29.912851 kernel: ... bit width: 48
Sep 4 23:51:29.912859 kernel: ... generic registers: 6
Sep 4 23:51:29.912867 kernel: ... value mask: 0000ffffffffffff
Sep 4 23:51:29.912875 kernel: ... max period: 00007fffffffffff
Sep 4 23:51:29.912882 kernel: ... fixed-purpose events: 0
Sep 4 23:51:29.912890 kernel: ... event mask: 000000000000003f
Sep 4 23:51:29.912898 kernel: signal: max sigframe size: 1776
Sep 4 23:51:29.912906 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:51:29.912919 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:51:29.912930 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:51:29.912993 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 23:51:29.913003 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 23:51:29.913010 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 23:51:29.913018 kernel: smpboot: Max logical packages: 1
Sep 4 23:51:29.913026 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 4 23:51:29.913034 kernel: devtmpfs: initialized
Sep 4 23:51:29.913041 kernel: x86/mm: Memory block size: 128MB
Sep 4 23:51:29.913049 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 4 23:51:29.913057 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 4 23:51:29.913068 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 4 23:51:29.913076 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 4 23:51:29.913084 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Sep 4 23:51:29.913092 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 4 23:51:29.913100 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:51:29.913108 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 23:51:29.913116 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:51:29.913124 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:51:29.913134 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:51:29.913142 kernel: audit: type=2000 audit(1757029889.643:1): state=initialized audit_enabled=0 res=1
Sep 4 23:51:29.913149 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:51:29.913158 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 23:51:29.913165 kernel: cpuidle: using governor menu
Sep 4 23:51:29.913173 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:51:29.913181 kernel: dca service started, version 1.12.1
Sep 4 23:51:29.913189 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 4 23:51:29.913197 kernel: PCI: Using configuration type 1 for base access
Sep 4 23:51:29.913207 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 23:51:29.913215 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:51:29.913223 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:51:29.913231 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:51:29.913239 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:51:29.913246 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:51:29.913254 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:51:29.913262 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:51:29.913270 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:51:29.913280 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 23:51:29.913288 kernel: ACPI: Interpreter enabled
Sep 4 23:51:29.913296 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 23:51:29.913304 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 23:51:29.913311 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 23:51:29.913319 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 23:51:29.913327 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 23:51:29.913335 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 23:51:29.913604 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:51:29.913785 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 23:51:29.913930 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 23:51:29.913941 kernel: PCI host bridge to bus 0000:00
Sep 4 23:51:29.914105 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 23:51:29.914226 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 23:51:29.914345 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 23:51:29.914469 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 4 23:51:29.914600 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 4 23:51:29.914730 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 4 23:51:29.914851 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 23:51:29.915022 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 4 23:51:29.915164 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 4 23:51:29.915295 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 4 23:51:29.915432 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 4 23:51:29.915571 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 4 23:51:29.915703 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 4 23:51:29.915832 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 23:51:29.915985 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 23:51:29.916124 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 4 23:51:29.916261 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 4 23:51:29.916391 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 4 23:51:29.916536 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 4 23:51:29.916679 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 4 23:51:29.916811 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 4 23:51:29.916942 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 4 23:51:29.917120 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 4 23:51:29.917263 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 4 23:51:29.917394 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 4 23:51:29.917525 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 4 23:51:29.917669 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 4 23:51:29.917809 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 4 23:51:29.917940 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 23:51:29.918123 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 4 23:51:29.918277 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 4 23:51:29.918407 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 4 23:51:29.918545 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 4 23:51:29.918690 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 4 23:51:29.918702 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 23:51:29.918710 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 23:51:29.918718 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 23:51:29.918731 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 23:51:29.918739 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 23:51:29.918747 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 23:51:29.918755 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 23:51:29.918763 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 23:51:29.918771 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 23:51:29.918778 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 23:51:29.918786 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 23:51:29.918794 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 23:51:29.918805 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 23:51:29.918812 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 23:51:29.918820 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 23:51:29.918828 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 23:51:29.918836 kernel: iommu: Default domain type: Translated
Sep 4 23:51:29.918844 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 23:51:29.918852 kernel: efivars: Registered efivars operations
Sep 4 23:51:29.918859 kernel: PCI: Using ACPI for IRQ routing
Sep 4 23:51:29.918867 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 23:51:29.918875 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 4 23:51:29.918885 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 4 23:51:29.918893 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Sep 4 23:51:29.918900 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Sep 4 23:51:29.918908 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 4 23:51:29.918916 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 4 23:51:29.918924 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Sep 4 23:51:29.918931 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 4 23:51:29.919093 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 23:51:29.919230 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 23:51:29.919360 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 23:51:29.919371 kernel: vgaarb: loaded
Sep 4 23:51:29.919379 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 23:51:29.919387 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 23:51:29.919395 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 23:51:29.919403 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:51:29.919411 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:51:29.919419 kernel: pnp: PnP ACPI init
Sep 4 23:51:29.919580 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 4 23:51:29.919593 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 23:51:29.919601 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 23:51:29.919610 kernel: NET: Registered PF_INET protocol family
Sep 4 23:51:29.919638 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:51:29.919649 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:51:29.919658 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:51:29.919666 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:51:29.919677 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:51:29.919685 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:51:29.919693 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:51:29.919702 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:51:29.919710 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:51:29.919718 kernel: NET: Registered PF_XDP protocol family
Sep 4 23:51:29.919856 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 4 23:51:29.920056 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 4 23:51:29.920186 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 23:51:29.920305 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 23:51:29.920422 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 23:51:29.920539 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 4 23:51:29.920665 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 4 23:51:29.920782 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 4 23:51:29.920793 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:51:29.920801 kernel: Initialise system trusted keyrings
Sep 4 23:51:29.920814 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:51:29.920823 kernel: Key type asymmetric registered
Sep 4 23:51:29.920831 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:51:29.920839 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 23:51:29.920847 kernel: io scheduler mq-deadline registered
Sep 4 23:51:29.920855 kernel: io scheduler kyber registered
Sep 4 23:51:29.920863 kernel: io scheduler bfq registered
Sep 4 23:51:29.920871 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 23:51:29.920880 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 23:51:29.920891 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 23:51:29.920902 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 23:51:29.920911 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:51:29.920922 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 23:51:29.920933 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 23:51:29.920943 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 23:51:29.920955 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 23:51:29.921120 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 23:51:29.921133 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 23:51:29.921253 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 23:51:29.921376 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T23:51:29 UTC (1757029889)
Sep 4 23:51:29.921498 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 4 23:51:29.921509 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 23:51:29.921517 kernel: efifb: probing for efifb
Sep 4 23:51:29.921531 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 4 23:51:29.921539 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 4 23:51:29.921547 kernel: efifb: scrolling: redraw
Sep 4 23:51:29.921555 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 23:51:29.921573 kernel: Console: switching to colour frame buffer device 160x50
Sep 4 23:51:29.921581 kernel: fb0: EFI VGA frame buffer device
Sep 4 23:51:29.921589 kernel: pstore: Using crash dump compression: deflate
Sep 4 23:51:29.921598 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 23:51:29.921606 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:51:29.921617 kernel: Segment Routing with IPv6
Sep 4 23:51:29.921625 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:51:29.921633 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:51:29.921641 kernel: Key type dns_resolver registered
Sep 4 23:51:29.921649 kernel: IPI shorthand broadcast: enabled
Sep 4 23:51:29.921657 kernel: sched_clock: Marking stable (639003236, 146654672)->(800930922, -15273014)
Sep 4 23:51:29.921668 kernel: registered taskstats version 1
Sep 4 23:51:29.921676 kernel: Loading compiled-in X.509 certificates
Sep 4 23:51:29.921684 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: f395d469db1520f53594f6c4948c5f8002e6cc8b'
Sep 4 23:51:29.921695 kernel: Key type .fscrypt registered
Sep 4 23:51:29.921703 kernel: Key type fscrypt-provisioning registered
Sep 4 23:51:29.921711 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:51:29.921719 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:51:29.921727 kernel: ima: No architecture policies found
Sep 4 23:51:29.921735 kernel: clk: Disabling unused clocks
Sep 4 23:51:29.921743 kernel: Freeing unused kernel image (initmem) memory: 43508K
Sep 4 23:51:29.921751 kernel: Write protecting the kernel read-only data: 38912k
Sep 4 23:51:29.921759 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 4 23:51:29.921770 kernel: Run /init as init process
Sep 4 23:51:29.921778 kernel: with arguments:
Sep 4 23:51:29.921786 kernel: /init
Sep 4 23:51:29.921794 kernel: with environment:
Sep 4 23:51:29.921802 kernel: HOME=/
Sep 4 23:51:29.921810 kernel: TERM=linux
Sep 4 23:51:29.921818 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:51:29.921827 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:51:29.921841 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:51:29.921850 systemd[1]: Detected virtualization kvm.
Sep 4 23:51:29.921859 systemd[1]: Detected architecture x86-64.
Sep 4 23:51:29.921867 systemd[1]: Running in initrd.
Sep 4 23:51:29.921875 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:51:29.921884 systemd[1]: Hostname set to .
Sep 4 23:51:29.921892 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:51:29.921901 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:51:29.921913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:51:29.921921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:51:29.921930 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:51:29.921939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:51:29.921948 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:51:29.921957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:51:29.921967 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:51:29.922038 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:51:29.922047 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:51:29.922056 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:51:29.922065 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:51:29.922073 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:51:29.922082 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:51:29.922090 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:51:29.922099 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:51:29.922111 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:51:29.922120 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:51:29.922128 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:51:29.922137 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:51:29.922146 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:51:29.922155 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:51:29.922163 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:51:29.922172 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:51:29.922181 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:51:29.922192 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:51:29.922201 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:51:29.922209 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:51:29.922218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:51:29.922226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:51:29.922235 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:51:29.922243 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:51:29.922255 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:51:29.922291 systemd-journald[194]: Collecting audit messages is disabled.
Sep 4 23:51:29.922315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:51:29.922324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:51:29.922333 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:51:29.922342 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:51:29.922351 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:51:29.922360 systemd-journald[194]: Journal started
Sep 4 23:51:29.922381 systemd-journald[194]: Runtime Journal (/run/log/journal/4f78ea9f3dc54dd1b47693fabe015ab4) is 6M, max 48.2M, 42.2M free.
Sep 4 23:51:29.911637 systemd-modules-load[195]: Inserted module 'overlay'
Sep 4 23:51:29.927222 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:51:29.932165 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:51:29.934811 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:51:29.942099 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:51:29.943880 systemd-modules-load[195]: Inserted module 'br_netfilter'
Sep 4 23:51:29.944032 kernel: Bridge firewalling registered
Sep 4 23:51:29.945365 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:51:29.946886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:51:29.947155 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:51:29.953109 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:51:29.954278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:51:29.961923 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:51:29.963902 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:51:29.971441 dracut-cmdline[226]: dracut-dracut-053
Sep 4 23:51:29.978908 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:51:30.016997 systemd-resolved[233]: Positive Trust Anchors:
Sep 4 23:51:30.017016 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:51:30.017051 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:51:30.019730 systemd-resolved[233]: Defaulting to hostname 'linux'.
Sep 4 23:51:30.020959 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:51:30.026509 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:51:30.061017 kernel: SCSI subsystem initialized
Sep 4 23:51:30.070000 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 23:51:30.081010 kernel: iscsi: registered transport (tcp)
Sep 4 23:51:30.102110 kernel: iscsi: registered transport (qla4xxx)
Sep 4 23:51:30.102190 kernel: QLogic iSCSI HBA Driver
Sep 4 23:51:30.153489 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:51:30.166149 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:51:30.190563 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:51:30.190602 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:51:30.190622 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 23:51:30.232008 kernel: raid6: avx2x4 gen() 30225 MB/s
Sep 4 23:51:30.248997 kernel: raid6: avx2x2 gen() 31152 MB/s
Sep 4 23:51:30.266112 kernel: raid6: avx2x1 gen() 25836 MB/s
Sep 4 23:51:30.266134 kernel: raid6: using algorithm avx2x2 gen() 31152 MB/s
Sep 4 23:51:30.284261 kernel: raid6: .... xor() 17916 MB/s, rmw enabled
Sep 4 23:51:30.284287 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 23:51:30.307999 kernel: xor: automatically using best checksumming function avx
Sep 4 23:51:30.468021 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:51:30.483738 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:51:30.499265 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:51:30.521150 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Sep 4 23:51:30.526631 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:51:30.540158 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 23:51:30.556256 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Sep 4 23:51:30.590110 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:51:30.599144 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:51:30.667189 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:51:30.670176 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:51:30.687332 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:51:30.689740 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:51:30.693776 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:51:30.696959 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:51:30.707482 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:51:30.721020 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 23:51:30.725017 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 23:51:30.727091 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:51:30.734404 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 23:51:30.734435 kernel: GPT:9289727 != 19775487
Sep 4 23:51:30.734446 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 23:51:30.734456 kernel: GPT:9289727 != 19775487
Sep 4 23:51:30.734467 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 23:51:30.734487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 23:51:30.734498 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 23:51:30.749888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:51:30.750221 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:51:30.753631 kernel: libata version 3.00 loaded.
Sep 4 23:51:30.753894 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:51:30.755093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:51:30.755262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:51:30.759122 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:51:30.765993 kernel: ahci 0000:00:1f.2: version 3.0
Sep 4 23:51:30.769023 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 4 23:51:30.771486 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 4 23:51:30.771689 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (469)
Sep 4 23:51:30.771701 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 4 23:51:30.775335 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 23:51:30.775358 kernel: AES CTR mode by8 optimization enabled
Sep 4 23:51:30.775369 kernel: scsi host0: ahci
Sep 4 23:51:30.775289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:51:30.780610 kernel: BTRFS: device fsid 185ffa67-4184-4488-b7c8-7c0711a63b2d devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (462)
Sep 4 23:51:30.780626 kernel: scsi host1: ahci
Sep 4 23:51:30.780831 kernel: scsi host2: ahci
Sep 4 23:51:30.781210 kernel: scsi host3: ahci
Sep 4 23:51:30.781382 kernel: scsi host4: ahci
Sep 4 23:51:30.790009 kernel: scsi host5: ahci
Sep 4 23:51:30.790197 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 4 23:51:30.790209 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 4 23:51:30.790219 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 4 23:51:30.790235 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 4 23:51:30.790245 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 4 23:51:30.790270 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 4 23:51:30.821244 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 23:51:30.835058 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 23:51:30.842170 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 23:51:30.845242 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 23:51:30.855263 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 23:51:30.868101 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 23:51:30.870232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:51:30.870314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:51:30.872495 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:51:30.875675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:51:30.879360 disk-uuid[558]: Primary Header is updated.
Sep 4 23:51:30.879360 disk-uuid[558]: Secondary Entries is updated.
Sep 4 23:51:30.879360 disk-uuid[558]: Secondary Header is updated.
Sep 4 23:51:30.882697 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 23:51:30.888992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 23:51:30.892510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:51:30.904185 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:51:30.929809 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:51:31.099118 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 4 23:51:31.099174 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 4 23:51:31.099186 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 4 23:51:31.099995 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 4 23:51:31.101013 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 4 23:51:31.101999 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 23:51:31.103013 kernel: ata3.00: applying bridge limits
Sep 4 23:51:31.103041 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 4 23:51:31.104009 kernel: ata3.00: configured for UDMA/100
Sep 4 23:51:31.105995 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 23:51:31.154465 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 23:51:31.154692 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 23:51:31.169004 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 4 23:51:31.889011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 23:51:31.889477 disk-uuid[560]: The operation has completed successfully.
Sep 4 23:51:31.919618 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 23:51:31.919750 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 23:51:31.976211 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 23:51:31.979891 sh[600]: Success
Sep 4 23:51:31.993013 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 4 23:51:32.029135 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 23:51:32.043540 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 23:51:32.046759 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 23:51:32.057639 kernel: BTRFS info (device dm-0): first mount of filesystem 185ffa67-4184-4488-b7c8-7c0711a63b2d
Sep 4 23:51:32.057671 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:51:32.057683 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 23:51:32.059311 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 23:51:32.059327 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 23:51:32.063649 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 23:51:32.065214 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 23:51:32.073116 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 23:51:32.074702 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 23:51:32.092669 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:51:32.092698 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:51:32.092714 kernel: BTRFS info (device vda6): using free space tree
Sep 4 23:51:32.096169 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 23:51:32.099994 kernel: BTRFS info (device vda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:51:32.105472 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 23:51:32.113177 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 23:51:32.260016 ignition[690]: Ignition 2.20.0
Sep 4 23:51:32.260040 ignition[690]: Stage: fetch-offline
Sep 4 23:51:32.260096 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:51:32.260108 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:51:32.260321 ignition[690]: parsed url from cmdline: ""
Sep 4 23:51:32.260325 ignition[690]: no config URL provided
Sep 4 23:51:32.260331 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:51:32.260341 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:51:32.260380 ignition[690]: op(1): [started] loading QEMU firmware config module
Sep 4 23:51:32.260386 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 23:51:32.269714 ignition[690]: op(1): [finished] loading QEMU firmware config module
Sep 4 23:51:32.277779 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:51:32.293141 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:51:32.311551 ignition[690]: parsing config with SHA512: 56f3b2ef2da8ae2b7118b78c640dbab442b2cb97bd05c1f4519b66636bbc4ab598559277a1026bac7c07b276f9ab6e8bae8e045b5b40442c791b0692e0be7486
Sep 4 23:51:32.316503 unknown[690]: fetched base config from "system"
Sep 4 23:51:32.316517 unknown[690]: fetched user config from "qemu"
Sep 4 23:51:32.327863 systemd-networkd[787]: lo: Link UP
Sep 4 23:51:32.327874 systemd-networkd[787]: lo: Gained carrier
Sep 4 23:51:32.331338 systemd-networkd[787]: Enumeration completed
Sep 4 23:51:32.331854 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:51:32.334357 systemd[1]: Reached target network.target - Network.
Sep 4 23:51:32.336506 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:51:32.336517 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:51:32.340805 systemd-networkd[787]: eth0: Link UP
Sep 4 23:51:32.340816 systemd-networkd[787]: eth0: Gained carrier
Sep 4 23:51:32.340822 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:51:32.391168 ignition[690]: fetch-offline: fetch-offline passed
Sep 4 23:51:32.392328 ignition[690]: Ignition finished successfully
Sep 4 23:51:32.394944 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:51:32.396354 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 23:51:32.408136 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 23:51:32.412039 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 23:51:32.427156 ignition[791]: Ignition 2.20.0
Sep 4 23:51:32.427167 ignition[791]: Stage: kargs
Sep 4 23:51:32.427351 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:51:32.427366 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:51:32.428369 ignition[791]: kargs: kargs passed
Sep 4 23:51:32.428413 ignition[791]: Ignition finished successfully
Sep 4 23:51:32.432111 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:51:32.440235 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:51:32.459255 ignition[801]: Ignition 2.20.0
Sep 4 23:51:32.459265 ignition[801]: Stage: disks
Sep 4 23:51:32.459461 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:51:32.459475 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:51:32.460356 ignition[801]: disks: disks passed
Sep 4 23:51:32.460401 ignition[801]: Ignition finished successfully
Sep 4 23:51:32.465826 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:51:32.467084 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:51:32.468865 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:51:32.468940 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:51:32.469270 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:51:32.469597 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:51:32.480130 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:51:32.493969 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 23:51:32.500368 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:51:32.514075 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:51:32.667998 kernel: EXT4-fs (vda9): mounted filesystem 86dd2c20-900e-43ec-8fda-e9f0f484a013 r/w with ordered data mode. Quota mode: none.
Sep 4 23:51:32.668535 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:51:32.669286 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:51:32.684093 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:51:32.685009 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:51:32.686745 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 23:51:32.686784 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:51:32.686807 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:51:32.696579 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:51:32.698132 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:51:32.705046 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (820)
Sep 4 23:51:32.705118 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:51:32.707195 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:51:32.707219 kernel: BTRFS info (device vda6): using free space tree
Sep 4 23:51:32.711335 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 23:51:32.712952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:51:32.739350 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:51:32.744500 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:51:32.749549 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:51:32.753507 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:51:32.843103 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:51:32.868118 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:51:32.869992 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:51:32.878002 kernel: BTRFS info (device vda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:51:32.895758 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:51:32.906183 ignition[933]: INFO : Ignition 2.20.0
Sep 4 23:51:32.906183 ignition[933]: INFO : Stage: mount
Sep 4 23:51:32.907821 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:51:32.907821 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:51:32.910597 ignition[933]: INFO : mount: mount passed
Sep 4 23:51:32.911334 ignition[933]: INFO : Ignition finished successfully
Sep 4 23:51:32.913918 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:51:32.922125 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:51:33.057211 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:51:33.066215 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:51:33.074705 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (946)
Sep 4 23:51:33.074730 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:51:33.074741 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:51:33.075541 kernel: BTRFS info (device vda6): using free space tree
Sep 4 23:51:33.078993 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 23:51:33.080102 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:51:33.115403 ignition[963]: INFO : Ignition 2.20.0
Sep 4 23:51:33.115403 ignition[963]: INFO : Stage: files
Sep 4 23:51:33.117195 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:51:33.117195 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:51:33.117195 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:51:33.120779 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:51:33.120779 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:51:33.123502 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:51:33.123502 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:51:33.123502 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:51:33.123502 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 23:51:33.123502 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 4 23:51:33.121638 unknown[963]: wrote ssh authorized keys file for user: core
Sep 4 23:51:33.232425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:51:33.407821 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 23:51:33.409847 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:51:33.409847 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 4 23:51:33.650849 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:51:33.692196 systemd-networkd[787]: eth0: Gained IPv6LL
Sep 4 23:51:34.099535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:51:34.099535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:51:34.102961 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:51:34.102961 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:51:34.106236 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:51:34.106236 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:51:34.109465 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:51:34.111074 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:51:34.112719 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:51:34.114620 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:51:34.116379 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:51:34.118108 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:51:34.120553 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:51:34.122855 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:51:34.124859 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 4 23:51:34.509939 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:51:35.353689 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:51:35.353689 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:51:35.357248 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:51:35.359497 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:51:35.359497 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:51:35.362420 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 4 23:51:35.362420 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 23:51:35.365405 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 23:51:35.365405 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 4 23:51:35.365405 ignition[963]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 23:51:35.570912 ignition[963]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 23:51:35.576258 ignition[963]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 23:51:35.578011 ignition[963]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 23:51:35.578011 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:51:35.580690 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:51:35.582128 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:51:35.583830 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:51:35.585436 ignition[963]: INFO : files: files passed
Sep 4 23:51:35.586166 ignition[963]: INFO : Ignition finished successfully
Sep 4 23:51:35.589908 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:51:35.597245 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:51:35.599451 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:51:35.605663 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:51:35.605790 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:51:35.609762 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 23:51:35.612883 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:51:35.612883 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:51:35.615890 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:51:35.618944 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:51:35.619297 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:51:35.627240 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:51:35.650610 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:51:35.650739 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:51:35.653057 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:51:35.655096 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:51:35.655212 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:51:35.656073 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:51:35.675360 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:51:35.687111 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:51:35.696751 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:51:35.698025 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:51:35.700222 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:51:35.702289 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:51:35.702414 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:51:35.704662 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:51:35.706161 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:51:35.708139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:51:35.710100 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:51:35.712061 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:51:35.714151 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:51:35.716217 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:51:35.718409 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:51:35.720332 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:51:35.722441 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:51:35.724175 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:51:35.724313 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:51:35.726690 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:51:35.727857 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:51:35.729825 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:51:35.729927 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:51:35.731967 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:51:35.732120 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:51:35.734353 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:51:35.734478 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:51:35.736207 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:51:35.737853 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:51:35.742036 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:51:35.743335 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:51:35.745221 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:51:35.746988 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:51:35.747086 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:51:35.748938 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:51:35.749042 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:51:35.751292 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:51:35.751414 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:51:35.753342 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:51:35.753466 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:51:35.764116 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:51:35.765068 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:51:35.765223 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:51:35.768187 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:51:35.769075 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:51:35.769192 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:51:35.771523 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:51:35.771667 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:51:35.778365 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:51:35.779018 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:51:35.784303 ignition[1018]: INFO : Ignition 2.20.0
Sep 4 23:51:35.784303 ignition[1018]: INFO : Stage: umount
Sep 4 23:51:35.785917 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:51:35.785917 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:51:35.788699 ignition[1018]: INFO : umount: umount passed
Sep 4 23:51:35.789527 ignition[1018]: INFO : Ignition finished successfully
Sep 4 23:51:35.792357 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:51:35.792508 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:51:35.795409 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:51:35.795870 systemd[1]: Stopped target network.target - Network.
Sep 4 23:51:35.796429 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:51:35.796483 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:51:35.798005 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:51:35.798054 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:51:35.799729 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:51:35.799776 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:51:35.801595 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:51:35.801641 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:51:35.803510 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:51:35.805368 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:51:35.811754 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:51:35.811902 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:51:35.816619 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:51:35.817014 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:51:35.817149 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:51:35.820640 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:51:35.821354 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:51:35.821442 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:51:35.830128 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:51:35.831083 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:51:35.831141 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:51:35.833375 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:51:35.833435 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:51:35.835466 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:51:35.835515 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:51:35.837813 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:51:35.837861 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:51:35.839153 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:51:35.841881 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:51:35.841949 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:51:35.850839 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:51:35.851006 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:51:35.859754 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:51:35.859938 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:51:35.862123 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:51:35.862172 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:51:35.864147 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:51:35.864187 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:51:35.866119 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:51:35.866168 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:51:35.868204 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:51:35.868254 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:51:35.870135 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:51:35.870188 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:51:35.887094 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:51:35.888150 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:51:35.888205 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:51:35.890514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:51:35.890565 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:51:35.893520 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 23:51:35.893583 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:51:35.893916 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:51:35.894035 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:51:35.984139 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:51:35.984298 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:51:35.986326 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:51:35.988001 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:51:35.988059 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:51:35.999117 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:51:36.007364 systemd[1]: Switching root.
Sep 4 23:51:36.046885 systemd-journald[194]: Journal stopped
Sep 4 23:51:37.271692 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:51:37.271769 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:51:37.271784 kernel: SELinux: policy capability open_perms=1
Sep 4 23:51:37.271799 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:51:37.271811 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:51:37.271822 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:51:37.271835 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:51:37.271856 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:51:37.271873 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:51:37.271885 kernel: audit: type=1403 audit(1757029896.434:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:51:37.271900 systemd[1]: Successfully loaded SELinux policy in 40.301ms.
Sep 4 23:51:37.271922 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.490ms.
Sep 4 23:51:37.271936 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:51:37.271949 systemd[1]: Detected virtualization kvm.
Sep 4 23:51:37.271962 systemd[1]: Detected architecture x86-64.
Sep 4 23:51:37.272010 systemd[1]: Detected first boot.
Sep 4 23:51:37.272029 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:51:37.272042 zram_generator::config[1064]: No configuration found.
Sep 4 23:51:37.272056 kernel: Guest personality initialized and is inactive
Sep 4 23:51:37.272068 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 23:51:37.272083 kernel: Initialized host personality
Sep 4 23:51:37.272095 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:51:37.272107 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:51:37.272121 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:51:37.272133 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:51:37.272151 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:51:37.272163 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:51:37.272178 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:51:37.272194 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:51:37.272207 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:51:37.272219 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:51:37.272232 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:51:37.272245 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:51:37.272258 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:51:37.272270 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:51:37.272283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:51:37.272296 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:51:37.272312 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:51:37.272325 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:51:37.272337 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:51:37.272358 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:51:37.272370 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 23:51:37.272384 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:51:37.272397 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:51:37.272410 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:51:37.272426 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:51:37.272439 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:51:37.272452 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:51:37.272472 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:51:37.272485 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:51:37.272497 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:51:37.272510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:51:37.272523 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:51:37.272536 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:51:37.272551 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:51:37.272564 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:51:37.272577 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:51:37.272590 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:51:37.272603 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:51:37.272616 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:51:37.272629 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:51:37.272642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:51:37.272655 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:51:37.272670 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:51:37.272683 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:51:37.272696 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:51:37.272709 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:51:37.272722 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:51:37.272734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:51:37.272748 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:51:37.272761 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:51:37.272788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:51:37.272801 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:51:37.272813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:51:37.272826 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:51:37.272839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:51:37.272852 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:51:37.272865 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:51:37.272877 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:51:37.272893 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:51:37.272906 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:51:37.272919 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:51:37.272931 kernel: fuse: init (API version 7.39)
Sep 4 23:51:37.272944 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:51:37.272956 kernel: loop: module loaded
Sep 4 23:51:37.272968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:51:37.272995 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:51:37.273008 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:51:37.273024 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:51:37.273037 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:51:37.273049 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:51:37.273062 systemd[1]: Stopped verity-setup.service.
Sep 4 23:51:37.273082 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:51:37.273098 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:51:37.273110 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:51:37.273123 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:51:37.273135 kernel: ACPI: bus type drm_connector registered
Sep 4 23:51:37.273148 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:51:37.273183 systemd-journald[1135]: Collecting audit messages is disabled.
Sep 4 23:51:37.273207 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:51:37.273224 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:51:37.273239 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:51:37.273252 systemd-journald[1135]: Journal started
Sep 4 23:51:37.273276 systemd-journald[1135]: Runtime Journal (/run/log/journal/4f78ea9f3dc54dd1b47693fabe015ab4) is 6M, max 48.2M, 42.2M free.
Sep 4 23:51:37.024610 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:51:37.038099 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 23:51:37.038625 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:51:37.275496 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:51:37.276546 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:51:37.278199 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:51:37.278436 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:51:37.279928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:51:37.280174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:51:37.281689 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:51:37.281909 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:51:37.283409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:51:37.283628 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:51:37.285212 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:51:37.285441 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:51:37.287050 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:51:37.287270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:51:37.288806 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:51:37.290265 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:51:37.291936 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:51:37.293616 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:51:37.309182 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:51:37.319115 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:51:37.321442 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 23:51:37.322573 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 23:51:37.322606 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:51:37.324624 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 23:51:37.326998 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 23:51:37.329786 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 23:51:37.331107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:51:37.334111 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 23:51:37.336445 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 23:51:37.338082 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 23:51:37.343744 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 23:51:37.346024 systemd-journald[1135]: Time spent on flushing to /var/log/journal/4f78ea9f3dc54dd1b47693fabe015ab4 is 24.438ms for 1056 entries. Sep 4 23:51:37.346024 systemd-journald[1135]: System Journal (/var/log/journal/4f78ea9f3dc54dd1b47693fabe015ab4) is 8M, max 195.6M, 187.6M free. Sep 4 23:51:37.378189 systemd-journald[1135]: Received client request to flush runtime journal. Sep 4 23:51:37.344888 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:51:37.345987 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 4 23:51:37.350579 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 23:51:37.354260 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 23:51:37.359922 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 23:51:37.361211 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 23:51:37.362732 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 23:51:37.375676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:51:37.386318 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 23:51:37.388619 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 23:51:37.392442 kernel: loop0: detected capacity change from 0 to 224512 Sep 4 23:51:37.392365 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 23:51:37.399071 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 23:51:37.408565 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 23:51:37.410844 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:51:37.415725 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 23:51:37.420596 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 23:51:37.433244 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 23:51:37.445302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:51:37.447115 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Sep 4 23:51:37.449112 kernel: loop1: detected capacity change from 0 to 138176 Sep 4 23:51:37.465861 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Sep 4 23:51:37.465879 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Sep 4 23:51:37.472519 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:51:37.491152 kernel: loop2: detected capacity change from 0 to 147912 Sep 4 23:51:37.535018 kernel: loop3: detected capacity change from 0 to 224512 Sep 4 23:51:37.547048 kernel: loop4: detected capacity change from 0 to 138176 Sep 4 23:51:37.561016 kernel: loop5: detected capacity change from 0 to 147912 Sep 4 23:51:37.576308 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 23:51:37.576968 (sd-merge)[1211]: Merged extensions into '/usr'. Sep 4 23:51:37.582040 systemd[1]: Reload requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 23:51:37.582067 systemd[1]: Reloading... Sep 4 23:51:37.657013 zram_generator::config[1245]: No configuration found. Sep 4 23:51:37.706790 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 23:51:37.791311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:51:37.857110 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 23:51:37.857351 systemd[1]: Reloading finished in 274 ms. Sep 4 23:51:37.876461 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 23:51:37.878153 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 23:51:37.897608 systemd[1]: Starting ensure-sysext.service... 
Sep 4 23:51:37.899801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:51:37.925023 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 23:51:37.925314 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 23:51:37.926343 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 23:51:37.926624 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Sep 4 23:51:37.926710 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Sep 4 23:51:37.929602 systemd[1]: Reload requested from client PID 1276 ('systemctl') (unit ensure-sysext.service)... Sep 4 23:51:37.929619 systemd[1]: Reloading... Sep 4 23:51:37.931030 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:51:37.931042 systemd-tmpfiles[1277]: Skipping /boot Sep 4 23:51:37.944852 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:51:37.944870 systemd-tmpfiles[1277]: Skipping /boot Sep 4 23:51:37.993062 zram_generator::config[1307]: No configuration found. Sep 4 23:51:38.107562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:51:38.175457 systemd[1]: Reloading finished in 245 ms. Sep 4 23:51:38.192180 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 23:51:38.212460 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:51:38.233348 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:51:38.236533 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 4 23:51:38.239744 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 23:51:38.244258 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:51:38.248297 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:51:38.254249 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 23:51:38.258647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:51:38.258824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:51:38.261363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:51:38.272294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:51:38.276259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:51:38.279614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:51:38.279738 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:51:38.281846 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 23:51:38.282873 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:51:38.284617 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 23:51:38.286689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:51:38.286931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 4 23:51:38.288757 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:51:38.289508 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:51:38.292810 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:51:38.293438 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:51:38.304246 systemd-udevd[1353]: Using default interface naming scheme 'v255'. Sep 4 23:51:38.306157 augenrules[1375]: No rules Sep 4 23:51:38.307950 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:51:38.308354 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:51:38.312306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:51:38.312628 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:51:38.322816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:51:38.326119 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:51:38.331260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:51:38.332737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:51:38.332863 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:51:38.337216 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 23:51:38.340631 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 4 23:51:38.342124 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:51:38.343860 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 23:51:38.345661 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 23:51:38.353403 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 23:51:38.356169 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:51:38.356423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:51:38.358638 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:51:38.358884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:51:38.361366 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:51:38.361603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:51:38.377294 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 23:51:38.404631 systemd[1]: Finished ensure-sysext.service. Sep 4 23:51:38.406051 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1399) Sep 4 23:51:38.410430 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:51:38.421301 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:51:38.422600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:51:38.426138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:51:38.431274 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:51:38.434197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 4 23:51:38.437867 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:51:38.439198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:51:38.439248 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:51:38.442936 systemd-resolved[1349]: Positive Trust Anchors: Sep 4 23:51:38.442944 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:51:38.443003 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:51:38.447087 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:51:38.451462 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 23:51:38.452231 systemd-resolved[1349]: Defaulting to hostname 'linux'. Sep 4 23:51:38.452732 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 23:51:38.452767 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 4 23:51:38.453650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:51:38.453887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:51:38.455218 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:51:38.458376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:51:38.458609 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:51:38.460378 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:51:38.460600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:51:38.466435 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:51:38.466866 augenrules[1419]: /sbin/augenrules: No change Sep 4 23:51:38.466692 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:51:38.484416 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 23:51:38.490039 augenrules[1452]: No rules Sep 4 23:51:38.493340 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:51:38.493697 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:51:38.495037 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 4 23:51:38.498402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:51:38.499877 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 23:51:38.500038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 4 23:51:38.511027 kernel: ACPI: button: Power Button [PWRF] Sep 4 23:51:38.531226 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 4 23:51:38.531558 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 4 23:51:38.531739 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 4 23:51:38.531950 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 4 23:51:38.530111 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 23:51:38.535004 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 23:51:38.543171 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 23:51:38.571144 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 23:51:38.572574 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 23:51:38.583098 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 23:51:38.592685 systemd-networkd[1431]: lo: Link UP Sep 4 23:51:38.592702 systemd-networkd[1431]: lo: Gained carrier Sep 4 23:51:38.594846 systemd-networkd[1431]: Enumeration completed Sep 4 23:51:38.632875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:51:38.634537 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:51:38.636134 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:51:38.636142 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 23:51:38.639146 systemd-networkd[1431]: eth0: Link UP Sep 4 23:51:38.639318 systemd-networkd[1431]: eth0: Gained carrier Sep 4 23:51:38.639452 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:51:38.649137 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 23:51:38.654392 systemd[1]: Reached target network.target - Network. Sep 4 23:51:38.656058 systemd-networkd[1431]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 23:51:38.656845 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection. Sep 4 23:51:39.456885 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 23:51:39.456928 systemd-timesyncd[1433]: Initial clock synchronization to Thu 2025-09-04 23:51:39.456797 UTC. Sep 4 23:51:39.457075 systemd-resolved[1349]: Clock change detected. Flushing caches. Sep 4 23:51:39.465824 kernel: kvm_amd: TSC scaling supported Sep 4 23:51:39.465875 kernel: kvm_amd: Nested Virtualization enabled Sep 4 23:51:39.465890 kernel: kvm_amd: Nested Paging enabled Sep 4 23:51:39.465903 kernel: kvm_amd: LBR virtualization supported Sep 4 23:51:39.465532 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 23:51:39.467071 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 23:51:39.467155 kernel: kvm_amd: Virtual GIF supported Sep 4 23:51:39.469087 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 23:51:39.472642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:51:39.472964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:51:39.477752 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 4 23:51:39.489588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:51:39.491306 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 23:51:39.498357 kernel: EDAC MC: Ver: 3.0.0 Sep 4 23:51:39.529742 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 23:51:39.538507 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 23:51:39.540557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:51:39.547506 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:51:39.587642 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 23:51:39.589166 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:51:39.590263 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:51:39.591405 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 23:51:39.592613 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 23:51:39.594036 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 23:51:39.595222 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 23:51:39.596419 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 23:51:39.597605 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 23:51:39.597631 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:51:39.598498 systemd[1]: Reached target timers.target - Timer Units. 
Sep 4 23:51:39.600354 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 23:51:39.603095 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 23:51:39.606808 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 23:51:39.608367 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 23:51:39.609661 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 23:51:39.613591 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 23:51:39.615081 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 23:51:39.617520 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 23:51:39.619166 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 23:51:39.620339 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:51:39.621298 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:51:39.622305 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:51:39.622360 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:51:39.623380 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 23:51:39.625533 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 23:51:39.628449 lvm[1488]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:51:39.629948 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 23:51:39.634492 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 4 23:51:39.635621 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 23:51:39.638208 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 23:51:39.638997 jq[1491]: false Sep 4 23:51:39.640280 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 23:51:39.647495 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 23:51:39.649971 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 23:51:39.654596 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:51:39.655766 extend-filesystems[1492]: Found loop3 Sep 4 23:51:39.657592 extend-filesystems[1492]: Found loop4 Sep 4 23:51:39.657592 extend-filesystems[1492]: Found loop5 Sep 4 23:51:39.657592 extend-filesystems[1492]: Found sr0 Sep 4 23:51:39.657592 extend-filesystems[1492]: Found vda Sep 4 23:51:39.657592 extend-filesystems[1492]: Found vda1 Sep 4 23:51:39.657592 extend-filesystems[1492]: Found vda2 Sep 4 23:51:39.657592 extend-filesystems[1492]: Found vda3 Sep 4 23:51:39.657592 extend-filesystems[1492]: Found usr Sep 4 23:51:39.657592 extend-filesystems[1492]: Found vda4 Sep 4 23:51:39.657592 extend-filesystems[1492]: Found vda6 Sep 4 23:51:39.657592 extend-filesystems[1492]: Found vda7 Sep 4 23:51:39.657592 extend-filesystems[1492]: Found vda9 Sep 4 23:51:39.657592 extend-filesystems[1492]: Checking size of /dev/vda9 Sep 4 23:51:39.658285 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 23:51:39.658753 dbus-daemon[1490]: [system] SELinux support is enabled Sep 4 23:51:39.671571 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 4 23:51:39.671750 extend-filesystems[1492]: Resized partition /dev/vda9 Sep 4 23:51:39.674054 extend-filesystems[1508]: resize2fs 1.47.1 (20-May-2024) Sep 4 23:51:39.677245 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 23:51:39.673626 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 23:51:39.677465 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:51:39.680157 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 23:51:39.685852 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 23:51:39.686516 jq[1510]: true Sep 4 23:51:39.689536 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 23:51:39.690096 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:51:39.690631 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 23:51:39.690891 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 23:51:39.702125 update_engine[1509]: I20250904 23:51:39.702028 1509 main.cc:92] Flatcar Update Engine starting Sep 4 23:51:39.704742 (ntainerd)[1516]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:51:39.708370 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 23:51:39.712047 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:51:39.712590 update_engine[1509]: I20250904 23:51:39.712384 1509 update_check_scheduler.cc:74] Next update check in 3m30s Sep 4 23:51:39.733515 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1408) Sep 4 23:51:39.712361 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 23:51:39.722024 systemd[1]: Started update-engine.service - Update Engine. 
Sep 4 23:51:39.733727 jq[1515]: true Sep 4 23:51:39.723636 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:51:39.737538 extend-filesystems[1508]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 23:51:39.737538 extend-filesystems[1508]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 23:51:39.737538 extend-filesystems[1508]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 23:51:39.723671 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 23:51:39.744084 extend-filesystems[1492]: Resized filesystem in /dev/vda9 Sep 4 23:51:39.725067 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 23:51:39.725083 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 23:51:39.736598 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 23:51:39.740031 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:51:39.740364 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 23:51:39.746598 systemd-logind[1499]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 23:51:39.746630 systemd-logind[1499]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 23:51:39.747101 systemd-logind[1499]: New seat seat0. Sep 4 23:51:39.748911 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 4 23:51:39.765450 tar[1514]: linux-amd64/LICENSE Sep 4 23:51:39.765713 tar[1514]: linux-amd64/helm Sep 4 23:51:39.806582 locksmithd[1526]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:51:39.822594 bash[1546]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:51:39.824762 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 23:51:39.827132 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 23:51:39.847908 sshd_keygen[1524]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:51:39.872414 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:51:39.881615 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:51:39.889474 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:51:39.889829 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:51:39.900556 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:51:40.082850 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:51:40.093737 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:51:40.096677 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 23:51:40.098281 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 23:51:40.122309 containerd[1516]: time="2025-09-04T23:51:40.122182970Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 4 23:51:40.212465 containerd[1516]: time="2025-09-04T23:51:40.212398747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:51:40.214692 containerd[1516]: time="2025-09-04T23:51:40.214651552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:51:40.214745 containerd[1516]: time="2025-09-04T23:51:40.214705733Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 23:51:40.214745 containerd[1516]: time="2025-09-04T23:51:40.214727805Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 23:51:40.215000 containerd[1516]: time="2025-09-04T23:51:40.214961192Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 23:51:40.215000 containerd[1516]: time="2025-09-04T23:51:40.214990878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 23:51:40.215101 containerd[1516]: time="2025-09-04T23:51:40.215084564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:51:40.215138 containerd[1516]: time="2025-09-04T23:51:40.215103930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:51:40.215457 containerd[1516]: time="2025-09-04T23:51:40.215430282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:51:40.215457 containerd[1516]: time="2025-09-04T23:51:40.215452414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 4 23:51:40.215526 containerd[1516]: time="2025-09-04T23:51:40.215469395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:51:40.215526 containerd[1516]: time="2025-09-04T23:51:40.215481839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 23:51:40.215632 containerd[1516]: time="2025-09-04T23:51:40.215608356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:51:40.215961 containerd[1516]: time="2025-09-04T23:51:40.215924148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:51:40.216191 containerd[1516]: time="2025-09-04T23:51:40.216142848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:51:40.216191 containerd[1516]: time="2025-09-04T23:51:40.216162926Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 23:51:40.216346 containerd[1516]: time="2025-09-04T23:51:40.216307347Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 23:51:40.216437 containerd[1516]: time="2025-09-04T23:51:40.216414067Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:51:40.222610 containerd[1516]: time="2025-09-04T23:51:40.222561405Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 23:51:40.222610 containerd[1516]: time="2025-09-04T23:51:40.222606290Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 4 23:51:40.222610 containerd[1516]: time="2025-09-04T23:51:40.222622901Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 23:51:40.222814 containerd[1516]: time="2025-09-04T23:51:40.222646345Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 23:51:40.222814 containerd[1516]: time="2025-09-04T23:51:40.222663457Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 23:51:40.222814 containerd[1516]: time="2025-09-04T23:51:40.222807046Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 23:51:40.223055 containerd[1516]: time="2025-09-04T23:51:40.223024634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 23:51:40.223183 containerd[1516]: time="2025-09-04T23:51:40.223154708Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 23:51:40.223183 containerd[1516]: time="2025-09-04T23:51:40.223175227Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 23:51:40.223230 containerd[1516]: time="2025-09-04T23:51:40.223192799Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 23:51:40.223230 containerd[1516]: time="2025-09-04T23:51:40.223205563Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 23:51:40.223230 containerd[1516]: time="2025-09-04T23:51:40.223223557Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 4 23:51:40.223290 containerd[1516]: time="2025-09-04T23:51:40.223236201Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 23:51:40.223290 containerd[1516]: time="2025-09-04T23:51:40.223251069Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 23:51:40.223290 containerd[1516]: time="2025-09-04T23:51:40.223266898Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 23:51:40.223290 containerd[1516]: time="2025-09-04T23:51:40.223279803Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 23:51:40.223390 containerd[1516]: time="2025-09-04T23:51:40.223293238Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 23:51:40.223390 containerd[1516]: time="2025-09-04T23:51:40.223305922Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 23:51:40.223390 containerd[1516]: time="2025-09-04T23:51:40.223324496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223390 containerd[1516]: time="2025-09-04T23:51:40.223354172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223390 containerd[1516]: time="2025-09-04T23:51:40.223366345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223390 containerd[1516]: time="2025-09-04T23:51:40.223378187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 4 23:51:40.223390 containerd[1516]: time="2025-09-04T23:51:40.223392414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223528 containerd[1516]: time="2025-09-04T23:51:40.223407482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223528 containerd[1516]: time="2025-09-04T23:51:40.223420126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223528 containerd[1516]: time="2025-09-04T23:51:40.223434673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223528 containerd[1516]: time="2025-09-04T23:51:40.223447447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223603 containerd[1516]: time="2025-09-04T23:51:40.223567693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223603 containerd[1516]: time="2025-09-04T23:51:40.223588852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223650 containerd[1516]: time="2025-09-04T23:51:40.223605493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223749 containerd[1516]: time="2025-09-04T23:51:40.223725759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223773 containerd[1516]: time="2025-09-04T23:51:40.223760905Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 23:51:40.223858 containerd[1516]: time="2025-09-04T23:51:40.223838110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Sep 4 23:51:40.223881 containerd[1516]: time="2025-09-04T23:51:40.223861113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.223881 containerd[1516]: time="2025-09-04T23:51:40.223875860Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:51:40.223953 containerd[1516]: time="2025-09-04T23:51:40.223930313Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:51:40.224521 containerd[1516]: time="2025-09-04T23:51:40.224220747Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:51:40.224521 containerd[1516]: time="2025-09-04T23:51:40.224260261Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 23:51:40.224521 containerd[1516]: time="2025-09-04T23:51:40.224284126Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:51:40.224521 containerd[1516]: time="2025-09-04T23:51:40.224298954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 23:51:40.224521 containerd[1516]: time="2025-09-04T23:51:40.224351042Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:51:40.224521 containerd[1516]: time="2025-09-04T23:51:40.224375307Z" level=info msg="NRI interface is disabled by configuration." Sep 4 23:51:40.224521 containerd[1516]: time="2025-09-04T23:51:40.224390666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 23:51:40.225140 containerd[1516]: time="2025-09-04T23:51:40.225086451Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:51:40.225930 containerd[1516]: time="2025-09-04T23:51:40.225716683Z" level=info msg="Connect containerd service" Sep 4 23:51:40.225930 containerd[1516]: time="2025-09-04T23:51:40.225779851Z" level=info msg="using legacy CRI server" Sep 4 23:51:40.225930 containerd[1516]: time="2025-09-04T23:51:40.225798666Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:51:40.226467 containerd[1516]: time="2025-09-04T23:51:40.226444548Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:51:40.227212 containerd[1516]: time="2025-09-04T23:51:40.227173815Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:51:40.227553 containerd[1516]: time="2025-09-04T23:51:40.227426549Z" level=info msg="Start subscribing containerd event" Sep 4 23:51:40.227553 containerd[1516]: time="2025-09-04T23:51:40.227496430Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 4 23:51:40.227553 containerd[1516]: time="2025-09-04T23:51:40.227513021Z" level=info msg="Start recovering state" Sep 4 23:51:40.227553 containerd[1516]: time="2025-09-04T23:51:40.227549440Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:51:40.227898 containerd[1516]: time="2025-09-04T23:51:40.227800000Z" level=info msg="Start event monitor" Sep 4 23:51:40.227898 containerd[1516]: time="2025-09-04T23:51:40.227836268Z" level=info msg="Start snapshots syncer" Sep 4 23:51:40.227898 containerd[1516]: time="2025-09-04T23:51:40.227849693Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:51:40.227898 containerd[1516]: time="2025-09-04T23:51:40.227858149Z" level=info msg="Start streaming server" Sep 4 23:51:40.228418 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 23:51:40.228542 containerd[1516]: time="2025-09-04T23:51:40.228450910Z" level=info msg="containerd successfully booted in 0.107971s" Sep 4 23:51:40.473815 tar[1514]: linux-amd64/README.md Sep 4 23:51:40.490074 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 23:51:40.827584 systemd-networkd[1431]: eth0: Gained IPv6LL Sep 4 23:51:40.831638 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:51:40.833471 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:51:40.854786 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 23:51:40.857824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:51:40.860399 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:51:40.879490 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 23:51:40.880005 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Sep 4 23:51:40.882133 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:51:40.887505 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 23:51:42.630013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:51:42.632250 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:51:42.634889 systemd[1]: Startup finished in 774ms (kernel) + 6.728s (initrd) + 5.439s (userspace) = 12.942s. Sep 4 23:51:42.637208 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:51:43.328662 kubelet[1603]: E0904 23:51:43.328572 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:51:43.333323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:51:43.333568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:51:43.333969 systemd[1]: kubelet.service: Consumed 2.370s CPU time, 265.9M memory peak. Sep 4 23:51:43.889804 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 23:51:43.891068 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:58062.service - OpenSSH per-connection server daemon (10.0.0.1:58062). Sep 4 23:51:43.939348 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 58062 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:51:43.941074 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:43.952096 systemd-logind[1499]: New session 1 of user core. 
Sep 4 23:51:43.953506 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:51:43.964584 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:51:43.975369 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:51:43.988612 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 23:51:43.991515 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:51:43.993847 systemd-logind[1499]: New session c1 of user core. Sep 4 23:51:44.127443 systemd[1620]: Queued start job for default target default.target. Sep 4 23:51:44.142665 systemd[1620]: Created slice app.slice - User Application Slice. Sep 4 23:51:44.142692 systemd[1620]: Reached target paths.target - Paths. Sep 4 23:51:44.142734 systemd[1620]: Reached target timers.target - Timers. Sep 4 23:51:44.144273 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:51:44.155876 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:51:44.155999 systemd[1620]: Reached target sockets.target - Sockets. Sep 4 23:51:44.156042 systemd[1620]: Reached target basic.target - Basic System. Sep 4 23:51:44.156087 systemd[1620]: Reached target default.target - Main User Target. Sep 4 23:51:44.156118 systemd[1620]: Startup finished in 155ms. Sep 4 23:51:44.156654 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:51:44.165458 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 23:51:44.231470 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:58076.service - OpenSSH per-connection server daemon (10.0.0.1:58076). 
Sep 4 23:51:44.272713 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 58076 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:51:44.274162 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:44.278200 systemd-logind[1499]: New session 2 of user core. Sep 4 23:51:44.284458 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 23:51:44.336582 sshd[1633]: Connection closed by 10.0.0.1 port 58076 Sep 4 23:51:44.337000 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:44.349207 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:58076.service: Deactivated successfully. Sep 4 23:51:44.351206 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 23:51:44.352710 systemd-logind[1499]: Session 2 logged out. Waiting for processes to exit. Sep 4 23:51:44.366604 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:58086.service - OpenSSH per-connection server daemon (10.0.0.1:58086). Sep 4 23:51:44.367593 systemd-logind[1499]: Removed session 2. Sep 4 23:51:44.400411 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 58086 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:51:44.402163 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:44.406484 systemd-logind[1499]: New session 3 of user core. Sep 4 23:51:44.415475 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 23:51:44.464654 sshd[1641]: Connection closed by 10.0.0.1 port 58086 Sep 4 23:51:44.465078 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:44.482368 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:58086.service: Deactivated successfully. Sep 4 23:51:44.484470 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 23:51:44.486094 systemd-logind[1499]: Session 3 logged out. Waiting for processes to exit. 
Sep 4 23:51:44.487540 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:58088.service - OpenSSH per-connection server daemon (10.0.0.1:58088). Sep 4 23:51:44.488259 systemd-logind[1499]: Removed session 3. Sep 4 23:51:44.538938 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 58088 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:51:44.540424 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:44.545131 systemd-logind[1499]: New session 4 of user core. Sep 4 23:51:44.556468 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 23:51:44.610007 sshd[1649]: Connection closed by 10.0.0.1 port 58088 Sep 4 23:51:44.610387 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:44.618196 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:58088.service: Deactivated successfully. Sep 4 23:51:44.620221 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:51:44.621945 systemd-logind[1499]: Session 4 logged out. Waiting for processes to exit. Sep 4 23:51:44.626668 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:58090.service - OpenSSH per-connection server daemon (10.0.0.1:58090). Sep 4 23:51:44.627593 systemd-logind[1499]: Removed session 4. Sep 4 23:51:44.662394 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 58090 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:51:44.663817 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:44.668207 systemd-logind[1499]: New session 5 of user core. Sep 4 23:51:44.687470 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 4 23:51:44.747631 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:51:44.747984 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:51:44.763969 sudo[1658]: pam_unix(sudo:session): session closed for user root Sep 4 23:51:44.765559 sshd[1657]: Connection closed by 10.0.0.1 port 58090 Sep 4 23:51:44.765998 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:44.778188 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:58090.service: Deactivated successfully. Sep 4 23:51:44.780086 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:51:44.781919 systemd-logind[1499]: Session 5 logged out. Waiting for processes to exit. Sep 4 23:51:44.805609 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:58102.service - OpenSSH per-connection server daemon (10.0.0.1:58102). Sep 4 23:51:44.806736 systemd-logind[1499]: Removed session 5. Sep 4 23:51:44.839698 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 58102 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:51:44.841622 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:44.846686 systemd-logind[1499]: New session 6 of user core. Sep 4 23:51:44.860582 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 23:51:44.917495 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:51:44.917847 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:51:44.921905 sudo[1668]: pam_unix(sudo:session): session closed for user root Sep 4 23:51:44.928747 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:51:44.929191 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:51:44.948606 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:51:44.980641 augenrules[1690]: No rules Sep 4 23:51:44.982250 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:51:44.982554 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:51:44.983795 sudo[1667]: pam_unix(sudo:session): session closed for user root Sep 4 23:51:44.985314 sshd[1666]: Connection closed by 10.0.0.1 port 58102 Sep 4 23:51:44.985689 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:44.994348 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:58102.service: Deactivated successfully. Sep 4 23:51:44.996472 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:51:44.998052 systemd-logind[1499]: Session 6 logged out. Waiting for processes to exit. Sep 4 23:51:45.007596 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:58114.service - OpenSSH per-connection server daemon (10.0.0.1:58114). Sep 4 23:51:45.008495 systemd-logind[1499]: Removed session 6. Sep 4 23:51:45.042385 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 58114 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:51:45.044014 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:45.048505 systemd-logind[1499]: New session 7 of user core. 
Sep 4 23:51:45.061473 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 23:51:45.116402 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 23:51:45.116748 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:51:46.210551 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 23:51:46.210719 (dockerd)[1721]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 23:51:46.891650 dockerd[1721]: time="2025-09-04T23:51:46.891560048Z" level=info msg="Starting up" Sep 4 23:51:47.356583 dockerd[1721]: time="2025-09-04T23:51:47.356516397Z" level=info msg="Loading containers: start." Sep 4 23:51:47.550359 kernel: Initializing XFRM netlink socket Sep 4 23:51:47.639674 systemd-networkd[1431]: docker0: Link UP Sep 4 23:51:47.681069 dockerd[1721]: time="2025-09-04T23:51:47.681004579Z" level=info msg="Loading containers: done." Sep 4 23:51:47.698651 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2947887341-merged.mount: Deactivated successfully. 
Sep 4 23:51:47.701776 dockerd[1721]: time="2025-09-04T23:51:47.701720038Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 23:51:47.701887 dockerd[1721]: time="2025-09-04T23:51:47.701862264Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 4 23:51:47.702071 dockerd[1721]: time="2025-09-04T23:51:47.702040729Z" level=info msg="Daemon has completed initialization" Sep 4 23:51:47.742296 dockerd[1721]: time="2025-09-04T23:51:47.742209580Z" level=info msg="API listen on /run/docker.sock" Sep 4 23:51:47.742444 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 23:51:48.746534 containerd[1516]: time="2025-09-04T23:51:48.746480997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 23:51:49.460918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066797727.mount: Deactivated successfully. 
Sep 4 23:51:50.807665 containerd[1516]: time="2025-09-04T23:51:50.807597476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:50.808280 containerd[1516]: time="2025-09-04T23:51:50.808202852Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 4 23:51:50.809414 containerd[1516]: time="2025-09-04T23:51:50.809380240Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:50.813440 containerd[1516]: time="2025-09-04T23:51:50.813389639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:50.814654 containerd[1516]: time="2025-09-04T23:51:50.814612252Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.068086972s" Sep 4 23:51:50.814654 containerd[1516]: time="2025-09-04T23:51:50.814652998Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 4 23:51:50.815532 containerd[1516]: time="2025-09-04T23:51:50.815502631Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 23:51:52.142619 containerd[1516]: time="2025-09-04T23:51:52.142550971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:52.143435 containerd[1516]: time="2025-09-04T23:51:52.143368754Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 4 23:51:52.144555 containerd[1516]: time="2025-09-04T23:51:52.144524572Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:52.147688 containerd[1516]: time="2025-09-04T23:51:52.147642850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:52.148591 containerd[1516]: time="2025-09-04T23:51:52.148559218Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.333024025s" Sep 4 23:51:52.148591 containerd[1516]: time="2025-09-04T23:51:52.148589555Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 4 23:51:52.149150 containerd[1516]: time="2025-09-04T23:51:52.149115742Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 23:51:53.526429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 23:51:53.540491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:51:53.780555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:51:53.784605 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:51:53.936595 kubelet[1990]: E0904 23:51:53.936480 1990 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:51:53.943397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:51:53.943638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:51:53.944036 systemd[1]: kubelet.service: Consumed 288ms CPU time, 110.9M memory peak. Sep 4 23:51:54.106914 containerd[1516]: time="2025-09-04T23:51:54.106778869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:54.107870 containerd[1516]: time="2025-09-04T23:51:54.107826053Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 4 23:51:54.109309 containerd[1516]: time="2025-09-04T23:51:54.109274890Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:54.112616 containerd[1516]: time="2025-09-04T23:51:54.112579057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:54.113938 containerd[1516]: time="2025-09-04T23:51:54.113906526Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id 
\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.964762481s" Sep 4 23:51:54.113938 containerd[1516]: time="2025-09-04T23:51:54.113932365Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 4 23:51:54.114468 containerd[1516]: time="2025-09-04T23:51:54.114406303Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 23:51:55.196099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187230804.mount: Deactivated successfully. Sep 4 23:51:55.803275 containerd[1516]: time="2025-09-04T23:51:55.803190416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:55.805377 containerd[1516]: time="2025-09-04T23:51:55.803964478Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 4 23:51:55.807437 containerd[1516]: time="2025-09-04T23:51:55.805951695Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:55.809509 containerd[1516]: time="2025-09-04T23:51:55.809450737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:55.810051 containerd[1516]: time="2025-09-04T23:51:55.810011699Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.695575039s" Sep 4 23:51:55.810051 containerd[1516]: time="2025-09-04T23:51:55.810044691Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 4 23:51:55.810612 containerd[1516]: time="2025-09-04T23:51:55.810572681Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 23:51:56.415548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576572760.mount: Deactivated successfully. Sep 4 23:51:58.002277 containerd[1516]: time="2025-09-04T23:51:58.002198672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:58.003045 containerd[1516]: time="2025-09-04T23:51:58.002963596Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 4 23:51:58.004365 containerd[1516]: time="2025-09-04T23:51:58.004305072Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:58.007477 containerd[1516]: time="2025-09-04T23:51:58.007430754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:58.008753 containerd[1516]: time="2025-09-04T23:51:58.008720593Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.198117455s" Sep 4 23:51:58.008753 containerd[1516]: time="2025-09-04T23:51:58.008752823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 23:51:58.009937 containerd[1516]: time="2025-09-04T23:51:58.009913179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 23:51:58.505326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726112648.mount: Deactivated successfully. Sep 4 23:51:58.511375 containerd[1516]: time="2025-09-04T23:51:58.511318463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:58.512067 containerd[1516]: time="2025-09-04T23:51:58.512021722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 4 23:51:58.513255 containerd[1516]: time="2025-09-04T23:51:58.513219759Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:58.515638 containerd[1516]: time="2025-09-04T23:51:58.515600894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:58.516370 containerd[1516]: time="2025-09-04T23:51:58.516311256Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 506.370796ms" Sep 4 
23:51:58.516370 containerd[1516]: time="2025-09-04T23:51:58.516365568Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 23:51:58.516940 containerd[1516]: time="2025-09-04T23:51:58.516904699Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 23:51:59.077302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2879803912.mount: Deactivated successfully. Sep 4 23:52:00.880810 containerd[1516]: time="2025-09-04T23:52:00.880737640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:00.881632 containerd[1516]: time="2025-09-04T23:52:00.881543812Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 4 23:52:00.882983 containerd[1516]: time="2025-09-04T23:52:00.882928599Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:00.886084 containerd[1516]: time="2025-09-04T23:52:00.886046055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:00.887278 containerd[1516]: time="2025-09-04T23:52:00.887242288Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.370304837s" Sep 4 23:52:00.887278 containerd[1516]: time="2025-09-04T23:52:00.887277565Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference 
\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 4 23:52:03.330956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:52:03.331158 systemd[1]: kubelet.service: Consumed 288ms CPU time, 110.9M memory peak. Sep 4 23:52:03.339564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:52:03.369058 systemd[1]: Reload requested from client PID 2146 ('systemctl') (unit session-7.scope)... Sep 4 23:52:03.369073 systemd[1]: Reloading... Sep 4 23:52:03.524373 zram_generator::config[2194]: No configuration found. Sep 4 23:52:04.053118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:52:04.166668 systemd[1]: Reloading finished in 797 ms. Sep 4 23:52:04.223700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:52:04.228450 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:52:04.231943 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:52:04.233396 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:52:04.233693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:52:04.233751 systemd[1]: kubelet.service: Consumed 209ms CPU time, 99.4M memory peak. Sep 4 23:52:04.237636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:52:04.409640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:52:04.415557 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:52:04.626695 kubelet[2241]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:52:04.626695 kubelet[2241]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:52:04.626695 kubelet[2241]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:52:04.627122 kubelet[2241]: I0904 23:52:04.626781 2241 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:52:04.806103 kubelet[2241]: I0904 23:52:04.806051 2241 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:52:04.806103 kubelet[2241]: I0904 23:52:04.806084 2241 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:52:04.806856 kubelet[2241]: I0904 23:52:04.806593 2241 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:52:04.830752 kubelet[2241]: I0904 23:52:04.830710 2241 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:52:04.831541 kubelet[2241]: E0904 23:52:04.831517 2241 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:52:04.836041 kubelet[2241]: E0904 23:52:04.836019 2241 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:52:04.836041 kubelet[2241]: I0904 23:52:04.836040 2241 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:52:04.841290 kubelet[2241]: I0904 23:52:04.841270 2241 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 23:52:04.843371 kubelet[2241]: I0904 23:52:04.843306 2241 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:52:04.843533 kubelet[2241]: I0904 23:52:04.843357 2241 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:52:04.843643 kubelet[2241]: I0904 23:52:04.843544 2241 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:52:04.843643 kubelet[2241]: I0904 23:52:04.843556 2241 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:52:04.843704 kubelet[2241]: I0904 23:52:04.843698 2241 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:52:04.846599 kubelet[2241]: I0904 23:52:04.846570 2241 kubelet.go:446] "Attempting to 
sync node with API server" Sep 4 23:52:04.846599 kubelet[2241]: I0904 23:52:04.846601 2241 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:52:04.846685 kubelet[2241]: I0904 23:52:04.846626 2241 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:52:04.846685 kubelet[2241]: I0904 23:52:04.846656 2241 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:52:04.850279 kubelet[2241]: W0904 23:52:04.849612 2241 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Sep 4 23:52:04.850279 kubelet[2241]: E0904 23:52:04.849667 2241 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:52:04.850279 kubelet[2241]: W0904 23:52:04.850173 2241 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Sep 4 23:52:04.850279 kubelet[2241]: E0904 23:52:04.850234 2241 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:52:04.852101 kubelet[2241]: I0904 23:52:04.850966 2241 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.23" apiVersion="v1" Sep 4 23:52:04.852101 kubelet[2241]: I0904 23:52:04.851354 2241 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:52:04.865976 kubelet[2241]: W0904 23:52:04.865652 2241 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 23:52:04.869023 kubelet[2241]: I0904 23:52:04.868990 2241 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:52:04.869088 kubelet[2241]: I0904 23:52:04.869034 2241 server.go:1287] "Started kubelet" Sep 4 23:52:04.869165 kubelet[2241]: I0904 23:52:04.869135 2241 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:52:04.871931 kubelet[2241]: I0904 23:52:04.870260 2241 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:52:04.872017 kubelet[2241]: I0904 23:52:04.871998 2241 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:52:04.873289 kubelet[2241]: I0904 23:52:04.872692 2241 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:52:04.873289 kubelet[2241]: I0904 23:52:04.872997 2241 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:52:04.876502 kubelet[2241]: I0904 23:52:04.876459 2241 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:52:04.877407 kubelet[2241]: E0904 23:52:04.877295 2241 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:52:04.877407 kubelet[2241]: I0904 23:52:04.877328 2241 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:52:04.877536 kubelet[2241]: I0904 23:52:04.877476 2241 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:52:04.877562 
kubelet[2241]: I0904 23:52:04.877550 2241 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:52:04.878236 kubelet[2241]: W0904 23:52:04.877792 2241 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Sep 4 23:52:04.878236 kubelet[2241]: E0904 23:52:04.877835 2241 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:52:04.878236 kubelet[2241]: E0904 23:52:04.878088 2241 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" Sep 4 23:52:04.878236 kubelet[2241]: E0904 23:52:04.875195 2241 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623968538c6a58 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 23:52:04.86900796 +0000 UTC m=+0.448138771,LastTimestamp:2025-09-04 23:52:04.86900796 +0000 UTC m=+0.448138771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 23:52:04.878904 
kubelet[2241]: I0904 23:52:04.878876 2241 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:52:04.879137 kubelet[2241]: I0904 23:52:04.879105 2241 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:52:04.880149 kubelet[2241]: E0904 23:52:04.880120 2241 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:52:04.880270 kubelet[2241]: I0904 23:52:04.880250 2241 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:52:04.893734 kubelet[2241]: I0904 23:52:04.893601 2241 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:52:04.895670 kubelet[2241]: I0904 23:52:04.894876 2241 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:52:04.895670 kubelet[2241]: I0904 23:52:04.894904 2241 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:52:04.895670 kubelet[2241]: I0904 23:52:04.894931 2241 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 23:52:04.895670 kubelet[2241]: I0904 23:52:04.894941 2241 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:52:04.895670 kubelet[2241]: E0904 23:52:04.894988 2241 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:52:04.896374 kubelet[2241]: W0904 23:52:04.896298 2241 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Sep 4 23:52:04.896582 kubelet[2241]: E0904 23:52:04.896479 2241 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:52:04.897531 kubelet[2241]: I0904 23:52:04.897496 2241 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:52:04.897531 kubelet[2241]: I0904 23:52:04.897514 2241 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:52:04.897531 kubelet[2241]: I0904 23:52:04.897535 2241 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:52:04.977468 kubelet[2241]: E0904 23:52:04.977437 2241 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:52:04.995741 kubelet[2241]: E0904 23:52:04.995692 2241 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:52:05.078011 kubelet[2241]: E0904 23:52:05.077936 2241 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:52:05.079473 kubelet[2241]: E0904 23:52:05.079442 2241 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" Sep 4 23:52:05.155487 kubelet[2241]: I0904 23:52:05.155457 2241 policy_none.go:49] "None policy: Start" Sep 4 23:52:05.155487 kubelet[2241]: I0904 23:52:05.155487 2241 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:52:05.155626 kubelet[2241]: I0904 23:52:05.155503 2241 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:52:05.164490 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 23:52:05.178687 kubelet[2241]: E0904 23:52:05.178648 2241 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:52:05.179564 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 23:52:05.182645 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 23:52:05.196677 kubelet[2241]: E0904 23:52:05.196647 2241 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:52:05.200227 kubelet[2241]: I0904 23:52:05.200199 2241 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:52:05.200477 kubelet[2241]: I0904 23:52:05.200465 2241 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:52:05.200539 kubelet[2241]: I0904 23:52:05.200481 2241 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:52:05.200736 kubelet[2241]: I0904 23:52:05.200721 2241 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:52:05.203579 kubelet[2241]: E0904 23:52:05.203561 2241 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:52:05.203627 kubelet[2241]: E0904 23:52:05.203617 2241 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 23:52:05.302387 kubelet[2241]: I0904 23:52:05.302310 2241 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:52:05.302702 kubelet[2241]: E0904 23:52:05.302661 2241 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 4 23:52:05.480301 kubelet[2241]: E0904 23:52:05.480198 2241 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Sep 4 23:52:05.504242 kubelet[2241]: I0904 23:52:05.504219 2241 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:52:05.504479 kubelet[2241]: E0904 23:52:05.504454 2241 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 4 23:52:05.604449 systemd[1]: Created slice kubepods-burstable-podc05d83c18a5f502d42a340085eb63104.slice - libcontainer container kubepods-burstable-podc05d83c18a5f502d42a340085eb63104.slice. Sep 4 23:52:05.615194 kubelet[2241]: E0904 23:52:05.615160 2241 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:52:05.617462 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 4 23:52:05.619113 kubelet[2241]: E0904 23:52:05.619049 2241 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:52:05.632014 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
Sep 4 23:52:05.633796 kubelet[2241]: E0904 23:52:05.633759 2241 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 23:52:05.682131 kubelet[2241]: I0904 23:52:05.682099 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 23:52:05.682193 kubelet[2241]: I0904 23:52:05.682128 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c05d83c18a5f502d42a340085eb63104-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c05d83c18a5f502d42a340085eb63104\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 23:52:05.682193 kubelet[2241]: I0904 23:52:05.682154 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:52:05.682193 kubelet[2241]: I0904 23:52:05.682174 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:52:05.682257 kubelet[2241]: I0904 23:52:05.682195 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:52:05.682292 kubelet[2241]: I0904 23:52:05.682266 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:52:05.682322 kubelet[2241]: I0904 23:52:05.682303 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c05d83c18a5f502d42a340085eb63104-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c05d83c18a5f502d42a340085eb63104\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 23:52:05.682380 kubelet[2241]: I0904 23:52:05.682349 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c05d83c18a5f502d42a340085eb63104-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c05d83c18a5f502d42a340085eb63104\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 23:52:05.682409 kubelet[2241]: I0904 23:52:05.682386 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:52:05.841118 kubelet[2241]: W0904 23:52:05.841069 2241 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused
Sep 4 23:52:05.841176 kubelet[2241]: E0904 23:52:05.841128 2241 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:52:05.905692 kubelet[2241]: I0904 23:52:05.905671 2241 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 23:52:05.906039 kubelet[2241]: E0904 23:52:05.906002 2241 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost"
Sep 4 23:52:05.916257 kubelet[2241]: E0904 23:52:05.916240 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:05.916783 containerd[1516]: time="2025-09-04T23:52:05.916751754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c05d83c18a5f502d42a340085eb63104,Namespace:kube-system,Attempt:0,}"
Sep 4 23:52:05.920018 kubelet[2241]: E0904 23:52:05.919993 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:05.920381 containerd[1516]: time="2025-09-04T23:52:05.920328992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 4 23:52:05.934635 kubelet[2241]: E0904 23:52:05.934599 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:05.934917 containerd[1516]: time="2025-09-04T23:52:05.934875803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 4 23:52:06.086249 kubelet[2241]: W0904 23:52:06.086148 2241 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused
Sep 4 23:52:06.086249 kubelet[2241]: E0904 23:52:06.086234 2241 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:52:06.089872 kubelet[2241]: W0904 23:52:06.089792 2241 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused
Sep 4 23:52:06.089872 kubelet[2241]: E0904 23:52:06.089866 2241 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:52:06.281680 kubelet[2241]: E0904 23:52:06.281631 2241 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="1.6s"
Sep 4 23:52:06.450416 kubelet[2241]: W0904 23:52:06.450326 2241 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused
Sep 4 23:52:06.450488 kubelet[2241]: E0904 23:52:06.450420 2241 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:52:06.708089 kubelet[2241]: I0904 23:52:06.707941 2241 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 23:52:06.708550 kubelet[2241]: E0904 23:52:06.708199 2241 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost"
Sep 4 23:52:06.851184 kubelet[2241]: E0904 23:52:06.851132 2241 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:52:06.934435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088741500.mount: Deactivated successfully.
Sep 4 23:52:06.941687 containerd[1516]: time="2025-09-04T23:52:06.941642340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:52:06.945091 containerd[1516]: time="2025-09-04T23:52:06.945023931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 4 23:52:06.946135 containerd[1516]: time="2025-09-04T23:52:06.946075293Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:52:06.948170 containerd[1516]: time="2025-09-04T23:52:06.948129996Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:52:06.949126 containerd[1516]: time="2025-09-04T23:52:06.949038710Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 23:52:06.950235 containerd[1516]: time="2025-09-04T23:52:06.950184489Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:52:06.951188 containerd[1516]: time="2025-09-04T23:52:06.951134350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 23:52:06.952393 containerd[1516]: time="2025-09-04T23:52:06.952364427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:52:06.953543 containerd[1516]: time="2025-09-04T23:52:06.953511609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.018554143s"
Sep 4 23:52:06.959521 containerd[1516]: time="2025-09-04T23:52:06.959418667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.038987382s"
Sep 4 23:52:06.961178 containerd[1516]: time="2025-09-04T23:52:06.960904703Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.044056669s"
Sep 4 23:52:07.272289 containerd[1516]: time="2025-09-04T23:52:07.272069739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:52:07.272289 containerd[1516]: time="2025-09-04T23:52:07.272198140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:52:07.272462 containerd[1516]: time="2025-09-04T23:52:07.272395560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:07.272879 containerd[1516]: time="2025-09-04T23:52:07.272828311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:07.273660 containerd[1516]: time="2025-09-04T23:52:07.273396076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:52:07.273660 containerd[1516]: time="2025-09-04T23:52:07.273454766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:52:07.273660 containerd[1516]: time="2025-09-04T23:52:07.273475335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:07.273660 containerd[1516]: time="2025-09-04T23:52:07.273560364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:07.275562 containerd[1516]: time="2025-09-04T23:52:07.271626237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:52:07.275562 containerd[1516]: time="2025-09-04T23:52:07.275509560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:52:07.275840 containerd[1516]: time="2025-09-04T23:52:07.275572568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:07.275957 containerd[1516]: time="2025-09-04T23:52:07.275862371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:07.326483 systemd[1]: Started cri-containerd-aa1e195fb948397b82cff7c919d2b9d833a5c6cf35fec717ea294e320e38711e.scope - libcontainer container aa1e195fb948397b82cff7c919d2b9d833a5c6cf35fec717ea294e320e38711e.
Sep 4 23:52:07.330678 systemd[1]: Started cri-containerd-4511b8aeacb03907b3126d7a1d64fc002a2097761b929cd45f1986ff14a9fd53.scope - libcontainer container 4511b8aeacb03907b3126d7a1d64fc002a2097761b929cd45f1986ff14a9fd53.
Sep 4 23:52:07.335387 systemd[1]: Started cri-containerd-6a409b5e3d8889d21d4d8f6257587a69f9e90e8d060e3e0d4f69d1d478a149e3.scope - libcontainer container 6a409b5e3d8889d21d4d8f6257587a69f9e90e8d060e3e0d4f69d1d478a149e3.
Sep 4 23:52:07.387958 containerd[1516]: time="2025-09-04T23:52:07.387889102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa1e195fb948397b82cff7c919d2b9d833a5c6cf35fec717ea294e320e38711e\""
Sep 4 23:52:07.390178 containerd[1516]: time="2025-09-04T23:52:07.389466059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a409b5e3d8889d21d4d8f6257587a69f9e90e8d060e3e0d4f69d1d478a149e3\""
Sep 4 23:52:07.390223 kubelet[2241]: E0904 23:52:07.389712 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:07.390923 kubelet[2241]: E0904 23:52:07.390899 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:07.392356 containerd[1516]: time="2025-09-04T23:52:07.392306235Z" level=info msg="CreateContainer within sandbox \"aa1e195fb948397b82cff7c919d2b9d833a5c6cf35fec717ea294e320e38711e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 23:52:07.393688 containerd[1516]: time="2025-09-04T23:52:07.393665264Z" level=info msg="CreateContainer within sandbox \"6a409b5e3d8889d21d4d8f6257587a69f9e90e8d060e3e0d4f69d1d478a149e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 23:52:07.512477 containerd[1516]: time="2025-09-04T23:52:07.512427495Z" level=info msg="CreateContainer within sandbox \"aa1e195fb948397b82cff7c919d2b9d833a5c6cf35fec717ea294e320e38711e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"74815b58aa548e4bc8e0aaff21c72f6e5c148dd720f0f05bd3e0e567452dfc20\""
Sep 4 23:52:07.513311 containerd[1516]: time="2025-09-04T23:52:07.513278632Z" level=info msg="StartContainer for \"74815b58aa548e4bc8e0aaff21c72f6e5c148dd720f0f05bd3e0e567452dfc20\""
Sep 4 23:52:07.515580 containerd[1516]: time="2025-09-04T23:52:07.515545803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c05d83c18a5f502d42a340085eb63104,Namespace:kube-system,Attempt:0,} returns sandbox id \"4511b8aeacb03907b3126d7a1d64fc002a2097761b929cd45f1986ff14a9fd53\""
Sep 4 23:52:07.516087 containerd[1516]: time="2025-09-04T23:52:07.515998062Z" level=info msg="CreateContainer within sandbox \"6a409b5e3d8889d21d4d8f6257587a69f9e90e8d060e3e0d4f69d1d478a149e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e38d5ba2394958e224017c1cb7f63bd6b5f8ffd2354926e37b91b3eba74f9b3d\""
Sep 4 23:52:07.516297 kubelet[2241]: E0904 23:52:07.516267 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:07.516551 containerd[1516]: time="2025-09-04T23:52:07.516529959Z" level=info msg="StartContainer for \"e38d5ba2394958e224017c1cb7f63bd6b5f8ffd2354926e37b91b3eba74f9b3d\""
Sep 4 23:52:07.517866 containerd[1516]: time="2025-09-04T23:52:07.517823315Z" level=info msg="CreateContainer within sandbox \"4511b8aeacb03907b3126d7a1d64fc002a2097761b929cd45f1986ff14a9fd53\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 23:52:07.534699 containerd[1516]: time="2025-09-04T23:52:07.534569570Z" level=info msg="CreateContainer within sandbox \"4511b8aeacb03907b3126d7a1d64fc002a2097761b929cd45f1986ff14a9fd53\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8a9c4a3d778f9bee42c70e8ec673ce2522b8641091898b6ab6db7043bef6c398\""
Sep 4 23:52:07.535016 containerd[1516]: time="2025-09-04T23:52:07.534989066Z" level=info msg="StartContainer for \"8a9c4a3d778f9bee42c70e8ec673ce2522b8641091898b6ab6db7043bef6c398\""
Sep 4 23:52:07.548489 systemd[1]: Started cri-containerd-74815b58aa548e4bc8e0aaff21c72f6e5c148dd720f0f05bd3e0e567452dfc20.scope - libcontainer container 74815b58aa548e4bc8e0aaff21c72f6e5c148dd720f0f05bd3e0e567452dfc20.
Sep 4 23:52:07.551577 systemd[1]: Started cri-containerd-e38d5ba2394958e224017c1cb7f63bd6b5f8ffd2354926e37b91b3eba74f9b3d.scope - libcontainer container e38d5ba2394958e224017c1cb7f63bd6b5f8ffd2354926e37b91b3eba74f9b3d.
Sep 4 23:52:07.564740 systemd[1]: Started cri-containerd-8a9c4a3d778f9bee42c70e8ec673ce2522b8641091898b6ab6db7043bef6c398.scope - libcontainer container 8a9c4a3d778f9bee42c70e8ec673ce2522b8641091898b6ab6db7043bef6c398.
Sep 4 23:52:07.604231 containerd[1516]: time="2025-09-04T23:52:07.604195223Z" level=info msg="StartContainer for \"e38d5ba2394958e224017c1cb7f63bd6b5f8ffd2354926e37b91b3eba74f9b3d\" returns successfully"
Sep 4 23:52:07.607306 containerd[1516]: time="2025-09-04T23:52:07.607266392Z" level=info msg="StartContainer for \"74815b58aa548e4bc8e0aaff21c72f6e5c148dd720f0f05bd3e0e567452dfc20\" returns successfully"
Sep 4 23:52:07.618665 containerd[1516]: time="2025-09-04T23:52:07.618624733Z" level=info msg="StartContainer for \"8a9c4a3d778f9bee42c70e8ec673ce2522b8641091898b6ab6db7043bef6c398\" returns successfully"
Sep 4 23:52:07.911351 kubelet[2241]: E0904 23:52:07.911007 2241 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 23:52:07.911351 kubelet[2241]: E0904 23:52:07.911148 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:07.919348 kubelet[2241]: E0904 23:52:07.917218 2241 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 23:52:07.919348 kubelet[2241]: E0904 23:52:07.917318 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:07.919348 kubelet[2241]: E0904 23:52:07.917619 2241 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 23:52:07.919348 kubelet[2241]: E0904 23:52:07.917746 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:08.315593 kubelet[2241]: I0904 23:52:08.315564 2241 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 23:52:08.958027 kubelet[2241]: E0904 23:52:08.957980 2241 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 23:52:08.958660 kubelet[2241]: E0904 23:52:08.958111 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:08.958660 kubelet[2241]: E0904 23:52:08.958220 2241 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 23:52:08.958660 kubelet[2241]: E0904 23:52:08.958379 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:09.097641 kubelet[2241]: E0904 23:52:09.097576 2241 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 4 23:52:09.191776 kubelet[2241]: I0904 23:52:09.191738 2241 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 4 23:52:09.279289 kubelet[2241]: I0904 23:52:09.279267 2241 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 4 23:52:09.296542 kubelet[2241]: E0904 23:52:09.296484 2241 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 4 23:52:09.296863 kubelet[2241]: I0904 23:52:09.296722 2241 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 23:52:09.303651 kubelet[2241]: E0904 23:52:09.303619 2241 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 4 23:52:09.303816 kubelet[2241]: I0904 23:52:09.303779 2241 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:52:09.311546 kubelet[2241]: E0904 23:52:09.311492 2241 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:52:09.955523 kubelet[2241]: I0904 23:52:09.955465 2241 apiserver.go:52] "Watching apiserver"
Sep 4 23:52:09.957899 kubelet[2241]: I0904 23:52:09.957488 2241 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 4 23:52:09.959252 kubelet[2241]: E0904 23:52:09.959209 2241 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 4 23:52:09.959623 kubelet[2241]: E0904 23:52:09.959441 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:09.978559 kubelet[2241]: I0904 23:52:09.978527 2241 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 23:52:10.319840 kubelet[2241]: I0904 23:52:10.319806 2241 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 23:52:10.328006 kubelet[2241]: E0904 23:52:10.327970 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:10.959312 kubelet[2241]: E0904 23:52:10.959260 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:11.356141 systemd[1]: Reload requested from client PID 2520 ('systemctl') (unit session-7.scope)...
Sep 4 23:52:11.356178 systemd[1]: Reloading...
Sep 4 23:52:11.356568 kubelet[2241]: I0904 23:52:11.356538 2241 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 4 23:52:11.362223 kubelet[2241]: E0904 23:52:11.362187 2241 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:11.440425 zram_generator::config[2567]: No configuration found.
Sep 4 23:52:11.548917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:52:11.669389 systemd[1]: Reloading finished in 312 ms.
Sep 4 23:52:11.698297 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:52:11.713940 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 23:52:11.714248 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:52:11.714313 systemd[1]: kubelet.service: Consumed 1.030s CPU time, 134.2M memory peak.
Sep 4 23:52:11.721566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:52:11.914689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:52:11.919967 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:52:11.965794 kubelet[2609]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:52:11.965794 kubelet[2609]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 23:52:11.965794 kubelet[2609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:52:11.966309 kubelet[2609]: I0904 23:52:11.965891 2609 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 23:52:11.973365 kubelet[2609]: I0904 23:52:11.973315 2609 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 23:52:11.973365 kubelet[2609]: I0904 23:52:11.973352 2609 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 23:52:11.973623 kubelet[2609]: I0904 23:52:11.973595 2609 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 23:52:11.974730 kubelet[2609]: I0904 23:52:11.974687 2609 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 23:52:11.979351 kubelet[2609]: I0904 23:52:11.976697 2609 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 23:52:11.984100 kubelet[2609]: E0904 23:52:11.984064 2609 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 4 23:52:11.984444 kubelet[2609]: I0904 23:52:11.984214 2609 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 4 23:52:11.989891 kubelet[2609]: I0904 23:52:11.989849 2609 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 23:52:11.990215 kubelet[2609]: I0904 23:52:11.990170 2609 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 23:52:11.990394 kubelet[2609]: I0904 23:52:11.990206 2609 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 23:52:11.990493 kubelet[2609]: I0904 23:52:11.990409 2609 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 23:52:11.990493 kubelet[2609]: I0904 23:52:11.990421 2609 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 23:52:11.990493 kubelet[2609]: I0904 23:52:11.990481 2609 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:52:11.990678 kubelet[2609]: I0904 23:52:11.990652 2609 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 23:52:11.990731 kubelet[2609]: I0904 23:52:11.990678 2609 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 23:52:11.990731 kubelet[2609]: I0904 23:52:11.990699 2609 kubelet.go:352] "Adding apiserver pod source"
Sep 4 23:52:11.990731 kubelet[2609]: I0904 23:52:11.990720 2609 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 23:52:11.991915 kubelet[2609]: I0904 23:52:11.991874 2609 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 4 23:52:11.992261 kubelet[2609]: I0904 23:52:11.992243 2609 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 23:52:11.992795 kubelet[2609]: I0904 23:52:11.992769 2609 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 23:52:11.992834 kubelet[2609]: I0904 23:52:11.992803 2609 server.go:1287] "Started kubelet"
Sep 4 23:52:11.999366 kubelet[2609]: I0904 23:52:11.998921 2609 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 23:52:11.999366 kubelet[2609]: I0904 23:52:11.999300 2609 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 23:52:11.999492 kubelet[2609]: I0904 23:52:11.999372 2609 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 23:52:12.000536 kubelet[2609]: I0904 23:52:12.000513 2609 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 23:52:12.006814 kubelet[2609]: I0904 23:52:12.002936 2609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 23:52:12.006814 kubelet[2609]: I0904 23:52:12.003042 2609 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 23:52:12.006814 kubelet[2609]: I0904 23:52:12.003047 2609 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 23:52:12.006814 kubelet[2609]: I0904 23:52:12.003205 2609 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 23:52:12.006814 kubelet[2609]: I0904 23:52:12.003385 2609 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 23:52:12.006814 kubelet[2609]: E0904 23:52:12.004421 2609 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 23:52:12.010940 kubelet[2609]: I0904 23:52:12.010877 2609 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 23:52:12.012628 kubelet[2609]: I0904 23:52:12.012588 2609 factory.go:221] Registration of the containerd container factory successfully
Sep 4 23:52:12.012628 kubelet[2609]: I0904 23:52:12.012609 2609 factory.go:221] Registration of the systemd container factory successfully
Sep 4 23:52:12.018507 kubelet[2609]: I0904 23:52:12.018459 2609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 23:52:12.019754 kubelet[2609]: I0904 23:52:12.019699 2609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 23:52:12.019754 kubelet[2609]: I0904 23:52:12.019750 2609 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 23:52:12.019861 kubelet[2609]: I0904 23:52:12.019776 2609 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 23:52:12.019861 kubelet[2609]: I0904 23:52:12.019786 2609 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:52:12.019924 kubelet[2609]: E0904 23:52:12.019848 2609 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:52:12.053824 kubelet[2609]: I0904 23:52:12.053786 2609 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:52:12.053824 kubelet[2609]: I0904 23:52:12.053806 2609 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:52:12.053824 kubelet[2609]: I0904 23:52:12.053824 2609 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:52:12.054016 kubelet[2609]: I0904 23:52:12.053977 2609 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:52:12.054016 kubelet[2609]: I0904 23:52:12.053988 2609 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:52:12.054016 kubelet[2609]: I0904 23:52:12.054009 2609 policy_none.go:49] "None policy: Start" Sep 4 23:52:12.054079 kubelet[2609]: I0904 23:52:12.054022 2609 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:52:12.054265 kubelet[2609]: I0904 23:52:12.054036 2609 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:52:12.054381 kubelet[2609]: I0904 23:52:12.054364 2609 state_mem.go:75] "Updated machine memory state" Sep 4 23:52:12.058327 kubelet[2609]: I0904 23:52:12.058293 2609 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:52:12.058608 kubelet[2609]: I0904 23:52:12.058483 2609 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:52:12.058608 kubelet[2609]: I0904 23:52:12.058497 2609 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:52:12.059215 kubelet[2609]: I0904 23:52:12.059172 2609 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Sep 4 23:52:12.066263 kubelet[2609]: E0904 23:52:12.066230 2609 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:52:12.120524 kubelet[2609]: I0904 23:52:12.120476 2609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:52:12.120667 kubelet[2609]: I0904 23:52:12.120627 2609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:52:12.120667 kubelet[2609]: I0904 23:52:12.120646 2609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:52:12.127109 kubelet[2609]: E0904 23:52:12.126737 2609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:52:12.127258 kubelet[2609]: E0904 23:52:12.127226 2609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 23:52:12.167523 kubelet[2609]: I0904 23:52:12.167491 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:52:12.173961 kubelet[2609]: I0904 23:52:12.173814 2609 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 4 23:52:12.173961 kubelet[2609]: I0904 23:52:12.173897 2609 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 23:52:12.305175 kubelet[2609]: I0904 23:52:12.305121 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" 
Sep 4 23:52:12.305175 kubelet[2609]: I0904 23:52:12.305155 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:52:12.305175 kubelet[2609]: I0904 23:52:12.305181 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:52:12.305465 kubelet[2609]: I0904 23:52:12.305205 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:52:12.305465 kubelet[2609]: I0904 23:52:12.305278 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 23:52:12.305465 kubelet[2609]: I0904 23:52:12.305372 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c05d83c18a5f502d42a340085eb63104-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c05d83c18a5f502d42a340085eb63104\") " pod="kube-system/kube-apiserver-localhost" Sep 
4 23:52:12.305465 kubelet[2609]: I0904 23:52:12.305404 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c05d83c18a5f502d42a340085eb63104-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c05d83c18a5f502d42a340085eb63104\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:52:12.305465 kubelet[2609]: I0904 23:52:12.305427 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:52:12.305652 kubelet[2609]: I0904 23:52:12.305451 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c05d83c18a5f502d42a340085eb63104-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c05d83c18a5f502d42a340085eb63104\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:52:12.366053 sudo[2647]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:52:12.366574 sudo[2647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:52:12.429515 kubelet[2609]: E0904 23:52:12.428917 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:12.429515 kubelet[2609]: E0904 23:52:12.428921 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:12.436402 kubelet[2609]: E0904 23:52:12.430104 2609 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:12.992837 kubelet[2609]: I0904 23:52:12.991983 2609 apiserver.go:52] "Watching apiserver" Sep 4 23:52:13.004282 kubelet[2609]: I0904 23:52:13.004244 2609 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:52:13.034108 kubelet[2609]: I0904 23:52:13.034075 2609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:52:13.034478 kubelet[2609]: I0904 23:52:13.034441 2609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:52:13.034614 kubelet[2609]: I0904 23:52:13.034523 2609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:52:13.040401 kubelet[2609]: E0904 23:52:13.040365 2609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:52:13.040401 kubelet[2609]: E0904 23:52:13.040715 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:13.041564 kubelet[2609]: E0904 23:52:13.041545 2609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 23:52:13.041718 kubelet[2609]: E0904 23:52:13.041704 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:13.041925 kubelet[2609]: E0904 23:52:13.041906 2609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Sep 4 23:52:13.042087 kubelet[2609]: E0904 23:52:13.042073 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:13.044055 sudo[2647]: pam_unix(sudo:session): session closed for user root Sep 4 23:52:13.061611 kubelet[2609]: I0904 23:52:13.061313 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.061259883 podStartE2EDuration="3.061259883s" podCreationTimestamp="2025-09-04 23:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:52:13.059804306 +0000 UTC m=+1.135167225" watchObservedRunningTime="2025-09-04 23:52:13.061259883 +0000 UTC m=+1.136622801" Sep 4 23:52:13.083374 kubelet[2609]: I0904 23:52:13.083133 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.083114724 podStartE2EDuration="2.083114724s" podCreationTimestamp="2025-09-04 23:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:52:13.072747052 +0000 UTC m=+1.148109981" watchObservedRunningTime="2025-09-04 23:52:13.083114724 +0000 UTC m=+1.158477642" Sep 4 23:52:13.094766 kubelet[2609]: I0904 23:52:13.094026 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.094004941 podStartE2EDuration="1.094004941s" podCreationTimestamp="2025-09-04 23:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:52:13.083376569 +0000 UTC m=+1.158739477" 
watchObservedRunningTime="2025-09-04 23:52:13.094004941 +0000 UTC m=+1.169367860" Sep 4 23:52:14.037586 kubelet[2609]: E0904 23:52:14.037536 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:14.038519 kubelet[2609]: E0904 23:52:14.037897 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:14.038519 kubelet[2609]: E0904 23:52:14.038444 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:14.511970 sudo[1702]: pam_unix(sudo:session): session closed for user root Sep 4 23:52:14.513757 sshd[1701]: Connection closed by 10.0.0.1 port 58114 Sep 4 23:52:14.514232 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:14.518190 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:58114.service: Deactivated successfully. Sep 4 23:52:14.520554 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:52:14.520779 systemd[1]: session-7.scope: Consumed 6.178s CPU time, 251.5M memory peak. Sep 4 23:52:14.522162 systemd-logind[1499]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:52:14.523054 systemd-logind[1499]: Removed session 7. 
Sep 4 23:52:15.039055 kubelet[2609]: E0904 23:52:15.039015 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:16.040172 kubelet[2609]: E0904 23:52:16.040121 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:16.085678 kubelet[2609]: E0904 23:52:16.085640 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:17.511695 kubelet[2609]: I0904 23:52:17.511651 2609 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:52:17.512210 containerd[1516]: time="2025-09-04T23:52:17.512010255Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 23:52:17.512574 kubelet[2609]: I0904 23:52:17.512207 2609 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:52:17.942136 kubelet[2609]: I0904 23:52:17.942077 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72478b35-6f7d-43bb-9106-e620b0bef7a7-lib-modules\") pod \"kube-proxy-gqpgh\" (UID: \"72478b35-6f7d-43bb-9106-e620b0bef7a7\") " pod="kube-system/kube-proxy-gqpgh" Sep 4 23:52:17.942136 kubelet[2609]: I0904 23:52:17.942120 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72478b35-6f7d-43bb-9106-e620b0bef7a7-kube-proxy\") pod \"kube-proxy-gqpgh\" (UID: \"72478b35-6f7d-43bb-9106-e620b0bef7a7\") " pod="kube-system/kube-proxy-gqpgh" Sep 4 23:52:17.942136 kubelet[2609]: I0904 23:52:17.942142 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72478b35-6f7d-43bb-9106-e620b0bef7a7-xtables-lock\") pod \"kube-proxy-gqpgh\" (UID: \"72478b35-6f7d-43bb-9106-e620b0bef7a7\") " pod="kube-system/kube-proxy-gqpgh" Sep 4 23:52:17.942432 kubelet[2609]: I0904 23:52:17.942160 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd65p\" (UniqueName: \"kubernetes.io/projected/72478b35-6f7d-43bb-9106-e620b0bef7a7-kube-api-access-gd65p\") pod \"kube-proxy-gqpgh\" (UID: \"72478b35-6f7d-43bb-9106-e620b0bef7a7\") " pod="kube-system/kube-proxy-gqpgh" Sep 4 23:52:17.944501 systemd[1]: Created slice kubepods-besteffort-pod72478b35_6f7d_43bb_9106_e620b0bef7a7.slice - libcontainer container kubepods-besteffort-pod72478b35_6f7d_43bb_9106_e620b0bef7a7.slice. 
Sep 4 23:52:17.964817 systemd[1]: Created slice kubepods-burstable-podbd8c77ca_db19_45b9_adb4_0f0791d8d498.slice - libcontainer container kubepods-burstable-podbd8c77ca_db19_45b9_adb4_0f0791d8d498.slice. Sep 4 23:52:18.043305 kubelet[2609]: I0904 23:52:18.043262 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-host-proc-sys-net\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043305 kubelet[2609]: I0904 23:52:18.043305 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-etc-cni-netd\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043542 kubelet[2609]: I0904 23:52:18.043351 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-hostproc\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043542 kubelet[2609]: I0904 23:52:18.043377 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-host-proc-sys-kernel\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043542 kubelet[2609]: I0904 23:52:18.043491 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-xtables-lock\") pod \"cilium-gm5cl\" (UID: 
\"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043542 kubelet[2609]: I0904 23:52:18.043529 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd8c77ca-db19-45b9-adb4-0f0791d8d498-clustermesh-secrets\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043671 kubelet[2609]: I0904 23:52:18.043552 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-config-path\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043671 kubelet[2609]: I0904 23:52:18.043593 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-bpf-maps\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043671 kubelet[2609]: I0904 23:52:18.043615 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-cgroup\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043671 kubelet[2609]: I0904 23:52:18.043667 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-lib-modules\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043815 kubelet[2609]: I0904 23:52:18.043689 
2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8l2b\" (UniqueName: \"kubernetes.io/projected/bd8c77ca-db19-45b9-adb4-0f0791d8d498-kube-api-access-k8l2b\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043815 kubelet[2609]: I0904 23:52:18.043725 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-run\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043815 kubelet[2609]: I0904 23:52:18.043778 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cni-path\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.043815 kubelet[2609]: I0904 23:52:18.043798 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd8c77ca-db19-45b9-adb4-0f0791d8d498-hubble-tls\") pod \"cilium-gm5cl\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " pod="kube-system/cilium-gm5cl" Sep 4 23:52:18.048863 kubelet[2609]: E0904 23:52:18.048837 2609 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 23:52:18.048934 kubelet[2609]: E0904 23:52:18.048867 2609 projected.go:194] Error preparing data for projected volume kube-api-access-gd65p for pod kube-system/kube-proxy-gqpgh: configmap "kube-root-ca.crt" not found Sep 4 23:52:18.048934 kubelet[2609]: E0904 23:52:18.048922 2609 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/72478b35-6f7d-43bb-9106-e620b0bef7a7-kube-api-access-gd65p podName:72478b35-6f7d-43bb-9106-e620b0bef7a7 nodeName:}" failed. No retries permitted until 2025-09-04 23:52:18.548903638 +0000 UTC m=+6.624266556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gd65p" (UniqueName: "kubernetes.io/projected/72478b35-6f7d-43bb-9106-e620b0bef7a7-kube-api-access-gd65p") pod "kube-proxy-gqpgh" (UID: "72478b35-6f7d-43bb-9106-e620b0bef7a7") : configmap "kube-root-ca.crt" not found Sep 4 23:52:18.151858 kubelet[2609]: E0904 23:52:18.151796 2609 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 23:52:18.151858 kubelet[2609]: E0904 23:52:18.151844 2609 projected.go:194] Error preparing data for projected volume kube-api-access-k8l2b for pod kube-system/cilium-gm5cl: configmap "kube-root-ca.crt" not found Sep 4 23:52:18.152054 kubelet[2609]: E0904 23:52:18.151913 2609 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd8c77ca-db19-45b9-adb4-0f0791d8d498-kube-api-access-k8l2b podName:bd8c77ca-db19-45b9-adb4-0f0791d8d498 nodeName:}" failed. No retries permitted until 2025-09-04 23:52:18.651890756 +0000 UTC m=+6.727253674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k8l2b" (UniqueName: "kubernetes.io/projected/bd8c77ca-db19-45b9-adb4-0f0791d8d498-kube-api-access-k8l2b") pod "cilium-gm5cl" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498") : configmap "kube-root-ca.crt" not found Sep 4 23:52:18.665097 systemd[1]: Created slice kubepods-besteffort-podd6b5e592_27cc_4834_a1c8_0c4c5d79b21b.slice - libcontainer container kubepods-besteffort-podd6b5e592_27cc_4834_a1c8_0c4c5d79b21b.slice. 
Sep 4 23:52:18.749660 kubelet[2609]: I0904 23:52:18.749611 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-459fv\" (UniqueName: \"kubernetes.io/projected/d6b5e592-27cc-4834-a1c8-0c4c5d79b21b-kube-api-access-459fv\") pod \"cilium-operator-6c4d7847fc-jlskx\" (UID: \"d6b5e592-27cc-4834-a1c8-0c4c5d79b21b\") " pod="kube-system/cilium-operator-6c4d7847fc-jlskx" Sep 4 23:52:18.750030 kubelet[2609]: I0904 23:52:18.749673 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6b5e592-27cc-4834-a1c8-0c4c5d79b21b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jlskx\" (UID: \"d6b5e592-27cc-4834-a1c8-0c4c5d79b21b\") " pod="kube-system/cilium-operator-6c4d7847fc-jlskx" Sep 4 23:52:18.863850 kubelet[2609]: E0904 23:52:18.863804 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:18.864532 containerd[1516]: time="2025-09-04T23:52:18.864384948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqpgh,Uid:72478b35-6f7d-43bb-9106-e620b0bef7a7,Namespace:kube-system,Attempt:0,}" Sep 4 23:52:18.868227 kubelet[2609]: E0904 23:52:18.868193 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:18.868596 containerd[1516]: time="2025-09-04T23:52:18.868548615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gm5cl,Uid:bd8c77ca-db19-45b9-adb4-0f0791d8d498,Namespace:kube-system,Attempt:0,}" Sep 4 23:52:18.921960 containerd[1516]: time="2025-09-04T23:52:18.921483482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:52:18.921960 containerd[1516]: time="2025-09-04T23:52:18.921573034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:52:18.921960 containerd[1516]: time="2025-09-04T23:52:18.921587711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:18.921960 containerd[1516]: time="2025-09-04T23:52:18.921694956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:18.935989 containerd[1516]: time="2025-09-04T23:52:18.935684648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:52:18.935989 containerd[1516]: time="2025-09-04T23:52:18.935753058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:52:18.935989 containerd[1516]: time="2025-09-04T23:52:18.935767837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:18.935989 containerd[1516]: time="2025-09-04T23:52:18.935891584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:18.943873 systemd[1]: Started cri-containerd-0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24.scope - libcontainer container 0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24. Sep 4 23:52:18.965480 systemd[1]: Started cri-containerd-49b055e80f8faded717a333ed1406e0aa2b29c35786d6d3d902e7d296e0f1f05.scope - libcontainer container 49b055e80f8faded717a333ed1406e0aa2b29c35786d6d3d902e7d296e0f1f05. 
Sep 4 23:52:18.967788 kubelet[2609]: E0904 23:52:18.967759 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:18.969689 containerd[1516]: time="2025-09-04T23:52:18.969383033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jlskx,Uid:d6b5e592-27cc-4834-a1c8-0c4c5d79b21b,Namespace:kube-system,Attempt:0,}" Sep 4 23:52:18.976650 containerd[1516]: time="2025-09-04T23:52:18.976611686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gm5cl,Uid:bd8c77ca-db19-45b9-adb4-0f0791d8d498,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\"" Sep 4 23:52:18.977422 kubelet[2609]: E0904 23:52:18.977400 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:18.978573 containerd[1516]: time="2025-09-04T23:52:18.978525329Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 23:52:19.000677 containerd[1516]: time="2025-09-04T23:52:19.000640778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqpgh,Uid:72478b35-6f7d-43bb-9106-e620b0bef7a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"49b055e80f8faded717a333ed1406e0aa2b29c35786d6d3d902e7d296e0f1f05\"" Sep 4 23:52:19.001754 kubelet[2609]: E0904 23:52:19.001731 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:19.004683 containerd[1516]: time="2025-09-04T23:52:19.004360807Z" level=info msg="CreateContainer within sandbox \"49b055e80f8faded717a333ed1406e0aa2b29c35786d6d3d902e7d296e0f1f05\" for 
container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:52:19.006803 containerd[1516]: time="2025-09-04T23:52:19.005561924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:52:19.006803 containerd[1516]: time="2025-09-04T23:52:19.005607050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:52:19.006803 containerd[1516]: time="2025-09-04T23:52:19.005622950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:19.006803 containerd[1516]: time="2025-09-04T23:52:19.005714886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:19.028524 systemd[1]: Started cri-containerd-5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3.scope - libcontainer container 5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3. 
Sep 4 23:52:19.029638 containerd[1516]: time="2025-09-04T23:52:19.029547426Z" level=info msg="CreateContainer within sandbox \"49b055e80f8faded717a333ed1406e0aa2b29c35786d6d3d902e7d296e0f1f05\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26a06a40fdb9c1e85b61750c290c81329d63985ae5ae700506a8baa3bfd6ba98\""
Sep 4 23:52:19.030303 containerd[1516]: time="2025-09-04T23:52:19.030282170Z" level=info msg="StartContainer for \"26a06a40fdb9c1e85b61750c290c81329d63985ae5ae700506a8baa3bfd6ba98\""
Sep 4 23:52:19.033682 kubelet[2609]: E0904 23:52:19.033660 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:19.059570 kubelet[2609]: E0904 23:52:19.059537 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:19.088520 systemd[1]: Started cri-containerd-26a06a40fdb9c1e85b61750c290c81329d63985ae5ae700506a8baa3bfd6ba98.scope - libcontainer container 26a06a40fdb9c1e85b61750c290c81329d63985ae5ae700506a8baa3bfd6ba98.
Sep 4 23:52:19.089463 containerd[1516]: time="2025-09-04T23:52:19.089080607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jlskx,Uid:d6b5e592-27cc-4834-a1c8-0c4c5d79b21b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3\""
Sep 4 23:52:19.090170 kubelet[2609]: E0904 23:52:19.089761 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:19.124376 containerd[1516]: time="2025-09-04T23:52:19.123816074Z" level=info msg="StartContainer for \"26a06a40fdb9c1e85b61750c290c81329d63985ae5ae700506a8baa3bfd6ba98\" returns successfully"
Sep 4 23:52:20.062167 kubelet[2609]: E0904 23:52:20.062121 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:20.075017 kubelet[2609]: I0904 23:52:20.074916 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gqpgh" podStartSLOduration=3.074889525 podStartE2EDuration="3.074889525s" podCreationTimestamp="2025-09-04 23:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:52:20.074757362 +0000 UTC m=+8.150120290" watchObservedRunningTime="2025-09-04 23:52:20.074889525 +0000 UTC m=+8.150252443"
Sep 4 23:52:21.065029 kubelet[2609]: E0904 23:52:21.064992 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:24.693080 update_engine[1509]: I20250904 23:52:24.692967 1509 update_attempter.cc:509] Updating boot flags...
Sep 4 23:52:24.787384 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2987)
Sep 4 23:52:24.868888 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2990)
Sep 4 23:52:24.915409 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2990)
Sep 4 23:52:25.914251 kubelet[2609]: E0904 23:52:25.913919 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:26.070641 kubelet[2609]: E0904 23:52:26.070595 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:26.090186 kubelet[2609]: E0904 23:52:26.090141 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:26.593266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555834687.mount: Deactivated successfully.
Sep 4 23:52:29.753686 containerd[1516]: time="2025-09-04T23:52:29.753628298Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:52:29.754463 containerd[1516]: time="2025-09-04T23:52:29.754393688Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 4 23:52:29.755598 containerd[1516]: time="2025-09-04T23:52:29.755562141Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:52:29.757497 containerd[1516]: time="2025-09-04T23:52:29.757460507Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.778898869s"
Sep 4 23:52:29.757539 containerd[1516]: time="2025-09-04T23:52:29.757498990Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 4 23:52:29.765422 containerd[1516]: time="2025-09-04T23:52:29.765400116Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 23:52:29.776954 containerd[1516]: time="2025-09-04T23:52:29.776918884Z" level=info msg="CreateContainer within sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:52:29.790468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827828212.mount: Deactivated successfully.
Sep 4 23:52:29.791797 containerd[1516]: time="2025-09-04T23:52:29.791742974Z" level=info msg="CreateContainer within sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\""
Sep 4 23:52:29.795697 containerd[1516]: time="2025-09-04T23:52:29.794625444Z" level=info msg="StartContainer for \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\""
Sep 4 23:52:29.828528 systemd[1]: Started cri-containerd-173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1.scope - libcontainer container 173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1.
Sep 4 23:52:29.859366 containerd[1516]: time="2025-09-04T23:52:29.859311490Z" level=info msg="StartContainer for \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\" returns successfully"
Sep 4 23:52:29.869393 systemd[1]: cri-containerd-173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1.scope: Deactivated successfully.
Sep 4 23:52:30.249094 kubelet[2609]: E0904 23:52:30.249057 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:30.259766 containerd[1516]: time="2025-09-04T23:52:30.259695583Z" level=info msg="shim disconnected" id=173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1 namespace=k8s.io
Sep 4 23:52:30.259766 containerd[1516]: time="2025-09-04T23:52:30.259753012Z" level=warning msg="cleaning up after shim disconnected" id=173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1 namespace=k8s.io
Sep 4 23:52:30.259766 containerd[1516]: time="2025-09-04T23:52:30.259762560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:52:30.787902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1-rootfs.mount: Deactivated successfully.
Sep 4 23:52:31.098657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount136564559.mount: Deactivated successfully.
Sep 4 23:52:31.250023 kubelet[2609]: E0904 23:52:31.249977 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:31.252504 containerd[1516]: time="2025-09-04T23:52:31.252472035Z" level=info msg="CreateContainer within sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:52:31.267002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745743915.mount: Deactivated successfully.
Sep 4 23:52:31.273916 containerd[1516]: time="2025-09-04T23:52:31.273864949Z" level=info msg="CreateContainer within sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\""
Sep 4 23:52:31.274629 containerd[1516]: time="2025-09-04T23:52:31.274593177Z" level=info msg="StartContainer for \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\""
Sep 4 23:52:31.309641 systemd[1]: Started cri-containerd-151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469.scope - libcontainer container 151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469.
Sep 4 23:52:31.340028 containerd[1516]: time="2025-09-04T23:52:31.339977146Z" level=info msg="StartContainer for \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\" returns successfully"
Sep 4 23:52:31.354939 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:52:31.355189 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:52:31.356139 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:52:31.364141 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:52:31.364387 systemd[1]: cri-containerd-151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469.scope: Deactivated successfully.
Sep 4 23:52:31.379661 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:52:31.534945 containerd[1516]: time="2025-09-04T23:52:31.534862005Z" level=info msg="shim disconnected" id=151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469 namespace=k8s.io
Sep 4 23:52:31.534945 containerd[1516]: time="2025-09-04T23:52:31.534922490Z" level=warning msg="cleaning up after shim disconnected" id=151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469 namespace=k8s.io
Sep 4 23:52:31.534945 containerd[1516]: time="2025-09-04T23:52:31.534931017Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:52:31.560960 containerd[1516]: time="2025-09-04T23:52:31.560901480Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:52:31.561688 containerd[1516]: time="2025-09-04T23:52:31.561615060Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 4 23:52:31.562678 containerd[1516]: time="2025-09-04T23:52:31.562646531Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:52:31.564074 containerd[1516]: time="2025-09-04T23:52:31.564041601Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.798535294s"
Sep 4 23:52:31.564156 containerd[1516]: time="2025-09-04T23:52:31.564079091Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 4 23:52:31.573837 containerd[1516]: time="2025-09-04T23:52:31.573768543Z" level=info msg="CreateContainer within sandbox \"5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 23:52:31.588623 containerd[1516]: time="2025-09-04T23:52:31.588566609Z" level=info msg="CreateContainer within sandbox \"5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95\""
Sep 4 23:52:31.589166 containerd[1516]: time="2025-09-04T23:52:31.589141237Z" level=info msg="StartContainer for \"8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95\""
Sep 4 23:52:31.622561 systemd[1]: Started cri-containerd-8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95.scope - libcontainer container 8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95.
Sep 4 23:52:31.678817 containerd[1516]: time="2025-09-04T23:52:31.678741781Z" level=info msg="StartContainer for \"8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95\" returns successfully"
Sep 4 23:52:32.255924 kubelet[2609]: E0904 23:52:32.255839 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:32.259008 kubelet[2609]: E0904 23:52:32.257956 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:32.259108 containerd[1516]: time="2025-09-04T23:52:32.258912064Z" level=info msg="CreateContainer within sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:52:32.279724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692858657.mount: Deactivated successfully.
Sep 4 23:52:32.292972 containerd[1516]: time="2025-09-04T23:52:32.292912504Z" level=info msg="CreateContainer within sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\""
Sep 4 23:52:32.294123 kubelet[2609]: I0904 23:52:32.294050 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jlskx" podStartSLOduration=1.8206969979999998 podStartE2EDuration="14.294011822s" podCreationTimestamp="2025-09-04 23:52:18 +0000 UTC" firstStartedPulling="2025-09-04 23:52:19.091449815 +0000 UTC m=+7.166812733" lastFinishedPulling="2025-09-04 23:52:31.564764639 +0000 UTC m=+19.640127557" observedRunningTime="2025-09-04 23:52:32.293757351 +0000 UTC m=+20.369120269" watchObservedRunningTime="2025-09-04 23:52:32.294011822 +0000 UTC m=+20.369374740"
Sep 4 23:52:32.295793 containerd[1516]: time="2025-09-04T23:52:32.295728829Z" level=info msg="StartContainer for \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\""
Sep 4 23:52:32.352485 systemd[1]: Started cri-containerd-9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845.scope - libcontainer container 9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845.
Sep 4 23:52:32.396574 containerd[1516]: time="2025-09-04T23:52:32.396522425Z" level=info msg="StartContainer for \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\" returns successfully"
Sep 4 23:52:32.403736 systemd[1]: cri-containerd-9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845.scope: Deactivated successfully.
Sep 4 23:52:32.437542 containerd[1516]: time="2025-09-04T23:52:32.437463569Z" level=info msg="shim disconnected" id=9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845 namespace=k8s.io
Sep 4 23:52:32.437542 containerd[1516]: time="2025-09-04T23:52:32.437534022Z" level=warning msg="cleaning up after shim disconnected" id=9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845 namespace=k8s.io
Sep 4 23:52:32.437542 containerd[1516]: time="2025-09-04T23:52:32.437546947Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:52:32.788043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845-rootfs.mount: Deactivated successfully.
Sep 4 23:52:33.261977 kubelet[2609]: E0904 23:52:33.261765 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:33.261977 kubelet[2609]: E0904 23:52:33.261938 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:33.264126 containerd[1516]: time="2025-09-04T23:52:33.263977534Z" level=info msg="CreateContainer within sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:52:33.321532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222017878.mount: Deactivated successfully.
Sep 4 23:52:33.322648 containerd[1516]: time="2025-09-04T23:52:33.322607319Z" level=info msg="CreateContainer within sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\""
Sep 4 23:52:33.323286 containerd[1516]: time="2025-09-04T23:52:33.323183437Z" level=info msg="StartContainer for \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\""
Sep 4 23:52:33.369727 systemd[1]: Started cri-containerd-ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df.scope - libcontainer container ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df.
Sep 4 23:52:33.399611 systemd[1]: cri-containerd-ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df.scope: Deactivated successfully.
Sep 4 23:52:33.401145 containerd[1516]: time="2025-09-04T23:52:33.401094700Z" level=info msg="StartContainer for \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\" returns successfully"
Sep 4 23:52:33.427098 containerd[1516]: time="2025-09-04T23:52:33.427030787Z" level=info msg="shim disconnected" id=ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df namespace=k8s.io
Sep 4 23:52:33.427098 containerd[1516]: time="2025-09-04T23:52:33.427090881Z" level=warning msg="cleaning up after shim disconnected" id=ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df namespace=k8s.io
Sep 4 23:52:33.427098 containerd[1516]: time="2025-09-04T23:52:33.427101771Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:52:33.788254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df-rootfs.mount: Deactivated successfully.
Sep 4 23:52:34.266064 kubelet[2609]: E0904 23:52:34.265925 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:34.267956 containerd[1516]: time="2025-09-04T23:52:34.267907257Z" level=info msg="CreateContainer within sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:52:34.294156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063065744.mount: Deactivated successfully.
Sep 4 23:52:34.297313 containerd[1516]: time="2025-09-04T23:52:34.297266296Z" level=info msg="CreateContainer within sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\""
Sep 4 23:52:34.297912 containerd[1516]: time="2025-09-04T23:52:34.297873212Z" level=info msg="StartContainer for \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\""
Sep 4 23:52:34.329531 systemd[1]: Started cri-containerd-cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b.scope - libcontainer container cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b.
Sep 4 23:52:34.365001 containerd[1516]: time="2025-09-04T23:52:34.364953837Z" level=info msg="StartContainer for \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\" returns successfully"
Sep 4 23:52:34.554950 kubelet[2609]: I0904 23:52:34.554907 2609 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 23:52:34.588905 systemd[1]: Created slice kubepods-burstable-pod8becfe4a_d735_4090_97ed_fdc560083136.slice - libcontainer container kubepods-burstable-pod8becfe4a_d735_4090_97ed_fdc560083136.slice.
Sep 4 23:52:34.598506 systemd[1]: Created slice kubepods-burstable-pod1c63db89_14af_472c_b4d1_664fbfeebbf4.slice - libcontainer container kubepods-burstable-pod1c63db89_14af_472c_b4d1_664fbfeebbf4.slice.
Sep 4 23:52:34.650477 kubelet[2609]: I0904 23:52:34.650387 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj2xw\" (UniqueName: \"kubernetes.io/projected/8becfe4a-d735-4090-97ed-fdc560083136-kube-api-access-dj2xw\") pod \"coredns-668d6bf9bc-4nr9z\" (UID: \"8becfe4a-d735-4090-97ed-fdc560083136\") " pod="kube-system/coredns-668d6bf9bc-4nr9z"
Sep 4 23:52:34.650477 kubelet[2609]: I0904 23:52:34.650482 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c63db89-14af-472c-b4d1-664fbfeebbf4-config-volume\") pod \"coredns-668d6bf9bc-hd4hh\" (UID: \"1c63db89-14af-472c-b4d1-664fbfeebbf4\") " pod="kube-system/coredns-668d6bf9bc-hd4hh"
Sep 4 23:52:34.650752 kubelet[2609]: I0904 23:52:34.650511 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb5zf\" (UniqueName: \"kubernetes.io/projected/1c63db89-14af-472c-b4d1-664fbfeebbf4-kube-api-access-qb5zf\") pod \"coredns-668d6bf9bc-hd4hh\" (UID: \"1c63db89-14af-472c-b4d1-664fbfeebbf4\") " pod="kube-system/coredns-668d6bf9bc-hd4hh"
Sep 4 23:52:34.650752 kubelet[2609]: I0904 23:52:34.650560 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8becfe4a-d735-4090-97ed-fdc560083136-config-volume\") pod \"coredns-668d6bf9bc-4nr9z\" (UID: \"8becfe4a-d735-4090-97ed-fdc560083136\") " pod="kube-system/coredns-668d6bf9bc-4nr9z"
Sep 4 23:52:34.895358 kubelet[2609]: E0904 23:52:34.895189 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:34.895907 containerd[1516]: time="2025-09-04T23:52:34.895866039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4nr9z,Uid:8becfe4a-d735-4090-97ed-fdc560083136,Namespace:kube-system,Attempt:0,}"
Sep 4 23:52:34.902874 kubelet[2609]: E0904 23:52:34.902636 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:34.903393 containerd[1516]: time="2025-09-04T23:52:34.903357520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hd4hh,Uid:1c63db89-14af-472c-b4d1-664fbfeebbf4,Namespace:kube-system,Attempt:0,}"
Sep 4 23:52:35.270666 kubelet[2609]: E0904 23:52:35.270508 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:35.291988 kubelet[2609]: I0904 23:52:35.291897 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gm5cl" podStartSLOduration=7.50488945 podStartE2EDuration="18.29187278s" podCreationTimestamp="2025-09-04 23:52:17 +0000 UTC" firstStartedPulling="2025-09-04 23:52:18.978161793 +0000 UTC m=+7.053524711" lastFinishedPulling="2025-09-04 23:52:29.765145123 +0000 UTC m=+17.840508041" observedRunningTime="2025-09-04 23:52:35.289894625 +0000 UTC m=+23.365257564" watchObservedRunningTime="2025-09-04 23:52:35.29187278 +0000 UTC m=+23.367235698"
Sep 4 23:52:36.272971 kubelet[2609]: E0904 23:52:36.272933 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:36.715891 systemd-networkd[1431]: cilium_host: Link UP
Sep 4 23:52:36.716054 systemd-networkd[1431]: cilium_net: Link UP
Sep 4 23:52:36.716252 systemd-networkd[1431]: cilium_net: Gained carrier
Sep 4 23:52:36.716451 systemd-networkd[1431]: cilium_host: Gained carrier
Sep 4 23:52:36.843191 systemd-networkd[1431]: cilium_vxlan: Link UP
Sep 4 23:52:36.843206 systemd-networkd[1431]: cilium_vxlan: Gained carrier
Sep 4 23:52:37.073373 kernel: NET: Registered PF_ALG protocol family
Sep 4 23:52:37.093052 systemd-networkd[1431]: cilium_net: Gained IPv6LL
Sep 4 23:52:37.162986 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:39330.service - OpenSSH per-connection server daemon (10.0.0.1:39330).
Sep 4 23:52:37.205988 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 39330 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM
Sep 4 23:52:37.208116 sshd-session[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:37.213405 systemd-logind[1499]: New session 8 of user core.
Sep 4 23:52:37.227552 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 23:52:37.285019 kubelet[2609]: E0904 23:52:37.284949 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:37.381640 sshd[3570]: Connection closed by 10.0.0.1 port 39330
Sep 4 23:52:37.382003 sshd-session[3568]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:37.386652 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:39330.service: Deactivated successfully.
Sep 4 23:52:37.390250 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 23:52:37.391480 systemd-logind[1499]: Session 8 logged out. Waiting for processes to exit.
Sep 4 23:52:37.392905 systemd-logind[1499]: Removed session 8.
Sep 4 23:52:37.595508 systemd-networkd[1431]: cilium_host: Gained IPv6LL
Sep 4 23:52:37.824619 systemd-networkd[1431]: lxc_health: Link UP
Sep 4 23:52:37.825543 systemd-networkd[1431]: lxc_health: Gained carrier
Sep 4 23:52:37.969423 systemd-networkd[1431]: lxcb0e341df9ed8: Link UP
Sep 4 23:52:37.977397 kernel: eth0: renamed from tmpf0c70
Sep 4 23:52:37.986729 systemd-networkd[1431]: lxcb0e341df9ed8: Gained carrier
Sep 4 23:52:37.988031 systemd-networkd[1431]: lxcb19bd4985e92: Link UP
Sep 4 23:52:37.994366 kernel: eth0: renamed from tmp3720f
Sep 4 23:52:37.998749 systemd-networkd[1431]: lxcb19bd4985e92: Gained carrier
Sep 4 23:52:38.235613 systemd-networkd[1431]: cilium_vxlan: Gained IPv6LL
Sep 4 23:52:38.870022 kubelet[2609]: E0904 23:52:38.869923 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:39.278273 kubelet[2609]: E0904 23:52:39.278233 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:39.771503 systemd-networkd[1431]: lxc_health: Gained IPv6LL
Sep 4 23:52:39.835589 systemd-networkd[1431]: lxcb19bd4985e92: Gained IPv6LL
Sep 4 23:52:40.027577 systemd-networkd[1431]: lxcb0e341df9ed8: Gained IPv6LL
Sep 4 23:52:40.280033 kubelet[2609]: E0904 23:52:40.279885 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:41.499653 containerd[1516]: time="2025-09-04T23:52:41.499471664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:52:41.499653 containerd[1516]: time="2025-09-04T23:52:41.499576542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:52:41.499653 containerd[1516]: time="2025-09-04T23:52:41.499599235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:41.500161 containerd[1516]: time="2025-09-04T23:52:41.499766219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:41.500161 containerd[1516]: time="2025-09-04T23:52:41.500055224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:52:41.501052 containerd[1516]: time="2025-09-04T23:52:41.500839391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:52:41.501052 containerd[1516]: time="2025-09-04T23:52:41.500862735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:41.501052 containerd[1516]: time="2025-09-04T23:52:41.500976419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:41.534575 systemd[1]: Started cri-containerd-3720fd181a8fdbd3c619c533ff7b6fa300b331c00cac54047040852d26d46f44.scope - libcontainer container 3720fd181a8fdbd3c619c533ff7b6fa300b331c00cac54047040852d26d46f44.
Sep 4 23:52:41.536501 systemd[1]: Started cri-containerd-f0c707e312ace9faf3022af3902110dae28aac4f13dcb5f86483edd60040fcbe.scope - libcontainer container f0c707e312ace9faf3022af3902110dae28aac4f13dcb5f86483edd60040fcbe.
Sep 4 23:52:41.549808 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 23:52:41.552421 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 23:52:41.577215 containerd[1516]: time="2025-09-04T23:52:41.577108254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hd4hh,Uid:1c63db89-14af-472c-b4d1-664fbfeebbf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3720fd181a8fdbd3c619c533ff7b6fa300b331c00cac54047040852d26d46f44\""
Sep 4 23:52:41.578307 kubelet[2609]: E0904 23:52:41.578275 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:41.580807 containerd[1516]: time="2025-09-04T23:52:41.580735478Z" level=info msg="CreateContainer within sandbox \"3720fd181a8fdbd3c619c533ff7b6fa300b331c00cac54047040852d26d46f44\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:52:41.580807 containerd[1516]: time="2025-09-04T23:52:41.580795611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4nr9z,Uid:8becfe4a-d735-4090-97ed-fdc560083136,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0c707e312ace9faf3022af3902110dae28aac4f13dcb5f86483edd60040fcbe\""
Sep 4 23:52:41.581460 kubelet[2609]: E0904 23:52:41.581437 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:41.583735 containerd[1516]: time="2025-09-04T23:52:41.583682990Z" level=info msg="CreateContainer within sandbox \"f0c707e312ace9faf3022af3902110dae28aac4f13dcb5f86483edd60040fcbe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:52:41.623514 containerd[1516]: time="2025-09-04T23:52:41.623456816Z" level=info msg="CreateContainer within sandbox \"f0c707e312ace9faf3022af3902110dae28aac4f13dcb5f86483edd60040fcbe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b4a5cbd5e11d8606955dbea752b3d7626264245cfc7a3be09100065cafe01cb\""
Sep 4 23:52:41.624075 containerd[1516]: time="2025-09-04T23:52:41.623970593Z" level=info msg="StartContainer for \"2b4a5cbd5e11d8606955dbea752b3d7626264245cfc7a3be09100065cafe01cb\""
Sep 4 23:52:41.625542 containerd[1516]: time="2025-09-04T23:52:41.625508351Z" level=info msg="CreateContainer within sandbox \"3720fd181a8fdbd3c619c533ff7b6fa300b331c00cac54047040852d26d46f44\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d909c03b0d33f954b80dfba9858a3104e07a627da4c718935b22a14eeec4163\""
Sep 4 23:52:41.626359 containerd[1516]: time="2025-09-04T23:52:41.625952338Z" level=info msg="StartContainer for \"8d909c03b0d33f954b80dfba9858a3104e07a627da4c718935b22a14eeec4163\""
Sep 4 23:52:41.655492 systemd[1]: Started cri-containerd-2b4a5cbd5e11d8606955dbea752b3d7626264245cfc7a3be09100065cafe01cb.scope - libcontainer container 2b4a5cbd5e11d8606955dbea752b3d7626264245cfc7a3be09100065cafe01cb.
Sep 4 23:52:41.659818 systemd[1]: Started cri-containerd-8d909c03b0d33f954b80dfba9858a3104e07a627da4c718935b22a14eeec4163.scope - libcontainer container 8d909c03b0d33f954b80dfba9858a3104e07a627da4c718935b22a14eeec4163.
Sep 4 23:52:41.942971 containerd[1516]: time="2025-09-04T23:52:41.942921215Z" level=info msg="StartContainer for \"8d909c03b0d33f954b80dfba9858a3104e07a627da4c718935b22a14eeec4163\" returns successfully" Sep 4 23:52:41.943262 containerd[1516]: time="2025-09-04T23:52:41.942939110Z" level=info msg="StartContainer for \"2b4a5cbd5e11d8606955dbea752b3d7626264245cfc7a3be09100065cafe01cb\" returns successfully" Sep 4 23:52:42.285745 kubelet[2609]: E0904 23:52:42.285392 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:42.288040 kubelet[2609]: E0904 23:52:42.287966 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:42.332828 kubelet[2609]: I0904 23:52:42.332736 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4nr9z" podStartSLOduration=24.332709879 podStartE2EDuration="24.332709879s" podCreationTimestamp="2025-09-04 23:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:52:42.332202193 +0000 UTC m=+30.407565141" watchObservedRunningTime="2025-09-04 23:52:42.332709879 +0000 UTC m=+30.408072807" Sep 4 23:52:42.333283 kubelet[2609]: I0904 23:52:42.332861 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hd4hh" podStartSLOduration=24.33285367 podStartE2EDuration="24.33285367s" podCreationTimestamp="2025-09-04 23:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:52:42.322644312 +0000 UTC m=+30.398007250" watchObservedRunningTime="2025-09-04 23:52:42.33285367 +0000 UTC 
m=+30.408216598" Sep 4 23:52:42.393916 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:34862.service - OpenSSH per-connection server daemon (10.0.0.1:34862). Sep 4 23:52:42.434153 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 34862 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:52:42.436085 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:42.440770 systemd-logind[1499]: New session 9 of user core. Sep 4 23:52:42.456480 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:52:42.655258 sshd[4027]: Connection closed by 10.0.0.1 port 34862 Sep 4 23:52:42.655665 sshd-session[4025]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:42.660392 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:34862.service: Deactivated successfully. Sep 4 23:52:42.662688 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:52:42.663666 systemd-logind[1499]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:52:42.664841 systemd-logind[1499]: Removed session 9. 
Sep 4 23:52:43.290056 kubelet[2609]: E0904 23:52:43.290014 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:43.290678 kubelet[2609]: E0904 23:52:43.290075 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:44.291984 kubelet[2609]: E0904 23:52:44.291844 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:44.291984 kubelet[2609]: E0904 23:52:44.291910 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:47.670296 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:34868.service - OpenSSH per-connection server daemon (10.0.0.1:34868). Sep 4 23:52:47.709940 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 34868 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:52:47.711865 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:47.716601 systemd-logind[1499]: New session 10 of user core. Sep 4 23:52:47.727465 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 23:52:47.903482 sshd[4047]: Connection closed by 10.0.0.1 port 34868 Sep 4 23:52:47.903895 sshd-session[4045]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:47.908402 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:34868.service: Deactivated successfully. Sep 4 23:52:47.910568 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:52:47.911402 systemd-logind[1499]: Session 10 logged out. Waiting for processes to exit. 
Sep 4 23:52:47.913135 systemd-logind[1499]: Removed session 10. Sep 4 23:52:52.916710 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:46886.service - OpenSSH per-connection server daemon (10.0.0.1:46886). Sep 4 23:52:52.955203 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 46886 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:52:52.956632 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:52.960660 systemd-logind[1499]: New session 11 of user core. Sep 4 23:52:52.977500 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 23:52:53.089040 sshd[4067]: Connection closed by 10.0.0.1 port 46886 Sep 4 23:52:53.089414 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:53.106105 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:46886.service: Deactivated successfully. Sep 4 23:52:53.109541 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:52:53.112417 systemd-logind[1499]: Session 11 logged out. Waiting for processes to exit. Sep 4 23:52:53.123316 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:46900.service - OpenSSH per-connection server daemon (10.0.0.1:46900). Sep 4 23:52:53.126317 systemd-logind[1499]: Removed session 11. Sep 4 23:52:53.165356 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 46900 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:52:53.166833 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:53.171448 systemd-logind[1499]: New session 12 of user core. Sep 4 23:52:53.182522 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 23:52:53.348143 sshd[4083]: Connection closed by 10.0.0.1 port 46900 Sep 4 23:52:53.348614 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:53.357674 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:46900.service: Deactivated successfully. 
Sep 4 23:52:53.359827 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 23:52:53.365017 systemd-logind[1499]: Session 12 logged out. Waiting for processes to exit. Sep 4 23:52:53.374786 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:46914.service - OpenSSH per-connection server daemon (10.0.0.1:46914). Sep 4 23:52:53.376634 systemd-logind[1499]: Removed session 12. Sep 4 23:52:53.414099 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 46914 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:52:53.415877 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:53.420949 systemd-logind[1499]: New session 13 of user core. Sep 4 23:52:53.436535 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 23:52:53.566482 sshd[4096]: Connection closed by 10.0.0.1 port 46914 Sep 4 23:52:53.566904 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:53.571581 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:46914.service: Deactivated successfully. Sep 4 23:52:53.573995 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 23:52:53.574760 systemd-logind[1499]: Session 13 logged out. Waiting for processes to exit. Sep 4 23:52:53.576101 systemd-logind[1499]: Removed session 13. Sep 4 23:52:58.579893 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:46916.service - OpenSSH per-connection server daemon (10.0.0.1:46916). Sep 4 23:52:58.619943 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 46916 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:52:58.621686 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:58.626071 systemd-logind[1499]: New session 14 of user core. Sep 4 23:52:58.634512 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 4 23:52:58.756215 sshd[4113]: Connection closed by 10.0.0.1 port 46916 Sep 4 23:52:58.756649 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:58.760489 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:46916.service: Deactivated successfully. Sep 4 23:52:58.762774 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 23:52:58.763621 systemd-logind[1499]: Session 14 logged out. Waiting for processes to exit. Sep 4 23:52:58.764699 systemd-logind[1499]: Removed session 14. Sep 4 23:53:03.770031 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:46588.service - OpenSSH per-connection server daemon (10.0.0.1:46588). Sep 4 23:53:03.812426 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 46588 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:03.814143 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:03.818812 systemd-logind[1499]: New session 15 of user core. Sep 4 23:53:03.831615 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 23:53:03.959612 sshd[4128]: Connection closed by 10.0.0.1 port 46588 Sep 4 23:53:03.960590 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:03.972605 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:46588.service: Deactivated successfully. Sep 4 23:53:03.975107 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 23:53:03.976801 systemd-logind[1499]: Session 15 logged out. Waiting for processes to exit. Sep 4 23:53:03.982706 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:46604.service - OpenSSH per-connection server daemon (10.0.0.1:46604). Sep 4 23:53:03.983793 systemd-logind[1499]: Removed session 15. 
Sep 4 23:53:04.024934 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 46604 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:04.026799 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:04.031495 systemd-logind[1499]: New session 16 of user core. Sep 4 23:53:04.041465 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 23:53:04.275125 sshd[4144]: Connection closed by 10.0.0.1 port 46604 Sep 4 23:53:04.275960 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:04.292522 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:46604.service: Deactivated successfully. Sep 4 23:53:04.294789 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 23:53:04.296547 systemd-logind[1499]: Session 16 logged out. Waiting for processes to exit. Sep 4 23:53:04.309648 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:46614.service - OpenSSH per-connection server daemon (10.0.0.1:46614). Sep 4 23:53:04.310693 systemd-logind[1499]: Removed session 16. Sep 4 23:53:04.349560 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 46614 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:04.351535 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:04.356829 systemd-logind[1499]: New session 17 of user core. Sep 4 23:53:04.366583 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 23:53:05.007324 sshd[4157]: Connection closed by 10.0.0.1 port 46614 Sep 4 23:53:05.009392 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:05.021822 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:46614.service: Deactivated successfully. Sep 4 23:53:05.024602 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 23:53:05.028069 systemd-logind[1499]: Session 17 logged out. Waiting for processes to exit. 
Sep 4 23:53:05.036914 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:46618.service - OpenSSH per-connection server daemon (10.0.0.1:46618). Sep 4 23:53:05.038609 systemd-logind[1499]: Removed session 17. Sep 4 23:53:05.074570 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 46618 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:05.076448 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:05.082090 systemd-logind[1499]: New session 18 of user core. Sep 4 23:53:05.095483 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 23:53:05.370163 sshd[4178]: Connection closed by 10.0.0.1 port 46618 Sep 4 23:53:05.370718 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:05.382800 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:46618.service: Deactivated successfully. Sep 4 23:53:05.385490 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:53:05.386386 systemd-logind[1499]: Session 18 logged out. Waiting for processes to exit. Sep 4 23:53:05.398739 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:46622.service - OpenSSH per-connection server daemon (10.0.0.1:46622). Sep 4 23:53:05.399647 systemd-logind[1499]: Removed session 18. Sep 4 23:53:05.432730 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 46622 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:05.434690 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:05.439410 systemd-logind[1499]: New session 19 of user core. Sep 4 23:53:05.449491 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 23:53:05.578373 sshd[4191]: Connection closed by 10.0.0.1 port 46622 Sep 4 23:53:05.578853 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:05.583304 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:46622.service: Deactivated successfully. 
Sep 4 23:53:05.585743 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:53:05.586678 systemd-logind[1499]: Session 19 logged out. Waiting for processes to exit. Sep 4 23:53:05.587811 systemd-logind[1499]: Removed session 19. Sep 4 23:53:10.594119 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:47562.service - OpenSSH per-connection server daemon (10.0.0.1:47562). Sep 4 23:53:10.640158 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 47562 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:10.642811 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:10.649161 systemd-logind[1499]: New session 20 of user core. Sep 4 23:53:10.656583 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 23:53:10.775571 sshd[4206]: Connection closed by 10.0.0.1 port 47562 Sep 4 23:53:10.776014 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:10.780615 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:47562.service: Deactivated successfully. Sep 4 23:53:10.783185 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:53:10.784157 systemd-logind[1499]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:53:10.785270 systemd-logind[1499]: Removed session 20. Sep 4 23:53:15.791803 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:47566.service - OpenSSH per-connection server daemon (10.0.0.1:47566). Sep 4 23:53:15.832107 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 47566 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:15.833824 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:15.837846 systemd-logind[1499]: New session 21 of user core. Sep 4 23:53:15.854486 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 4 23:53:15.967068 sshd[4225]: Connection closed by 10.0.0.1 port 47566 Sep 4 23:53:15.967500 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:15.972059 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:47566.service: Deactivated successfully. Sep 4 23:53:15.974547 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:53:15.975234 systemd-logind[1499]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:53:15.976224 systemd-logind[1499]: Removed session 21. Sep 4 23:53:20.986156 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:52976.service - OpenSSH per-connection server daemon (10.0.0.1:52976). Sep 4 23:53:21.032320 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 52976 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:21.034244 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:21.041138 systemd-logind[1499]: New session 22 of user core. Sep 4 23:53:21.050580 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 23:53:21.165148 sshd[4242]: Connection closed by 10.0.0.1 port 52976 Sep 4 23:53:21.167473 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:21.171559 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:52976.service: Deactivated successfully. Sep 4 23:53:21.174158 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:53:21.175038 systemd-logind[1499]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:53:21.176123 systemd-logind[1499]: Removed session 22. Sep 4 23:53:26.202808 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:52980.service - OpenSSH per-connection server daemon (10.0.0.1:52980). 
Sep 4 23:53:26.239213 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 52980 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:26.240830 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:26.245597 systemd-logind[1499]: New session 23 of user core. Sep 4 23:53:26.254529 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 23:53:26.374254 sshd[4257]: Connection closed by 10.0.0.1 port 52980 Sep 4 23:53:26.374673 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:26.383291 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:52980.service: Deactivated successfully. Sep 4 23:53:26.385397 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 23:53:26.386820 systemd-logind[1499]: Session 23 logged out. Waiting for processes to exit. Sep 4 23:53:26.395654 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:52982.service - OpenSSH per-connection server daemon (10.0.0.1:52982). Sep 4 23:53:26.396638 systemd-logind[1499]: Removed session 23. Sep 4 23:53:26.431036 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 52982 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:26.432700 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:26.437871 systemd-logind[1499]: New session 24 of user core. Sep 4 23:53:26.451684 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 4 23:53:28.262589 containerd[1516]: time="2025-09-04T23:53:28.262539691Z" level=info msg="StopContainer for \"8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95\" with timeout 30 (s)" Sep 4 23:53:28.263801 containerd[1516]: time="2025-09-04T23:53:28.263758145Z" level=info msg="Stop container \"8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95\" with signal terminated" Sep 4 23:53:28.279679 systemd[1]: cri-containerd-8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95.scope: Deactivated successfully. Sep 4 23:53:28.289356 systemd[1]: run-containerd-runc-k8s.io-cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b-runc.qiwHTK.mount: Deactivated successfully. Sep 4 23:53:28.309868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95-rootfs.mount: Deactivated successfully. Sep 4 23:53:28.313805 containerd[1516]: time="2025-09-04T23:53:28.313763475Z" level=info msg="StopContainer for \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\" with timeout 2 (s)" Sep 4 23:53:28.314059 containerd[1516]: time="2025-09-04T23:53:28.314020999Z" level=info msg="Stop container \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\" with signal terminated" Sep 4 23:53:28.316011 containerd[1516]: time="2025-09-04T23:53:28.315812425Z" level=info msg="shim disconnected" id=8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95 namespace=k8s.io Sep 4 23:53:28.316011 containerd[1516]: time="2025-09-04T23:53:28.315857672Z" level=warning msg="cleaning up after shim disconnected" id=8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95 namespace=k8s.io Sep 4 23:53:28.316011 containerd[1516]: time="2025-09-04T23:53:28.315866690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:53:28.316762 containerd[1516]: time="2025-09-04T23:53:28.316712846Z" level=error msg="failed to reload cni configuration 
after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:53:28.323892 systemd-networkd[1431]: lxc_health: Link DOWN Sep 4 23:53:28.323903 systemd-networkd[1431]: lxc_health: Lost carrier Sep 4 23:53:28.340750 systemd[1]: cri-containerd-cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b.scope: Deactivated successfully. Sep 4 23:53:28.341196 systemd[1]: cri-containerd-cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b.scope: Consumed 7.168s CPU time, 125.6M memory peak, 248K read from disk, 13.3M written to disk. Sep 4 23:53:28.342510 containerd[1516]: time="2025-09-04T23:53:28.342416109Z" level=info msg="StopContainer for \"8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95\" returns successfully" Sep 4 23:53:28.346730 containerd[1516]: time="2025-09-04T23:53:28.346676091Z" level=info msg="StopPodSandbox for \"5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3\"" Sep 4 23:53:28.350616 containerd[1516]: time="2025-09-04T23:53:28.346725187Z" level=info msg="Container to stop \"8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:53:28.353001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3-shm.mount: Deactivated successfully. Sep 4 23:53:28.362693 systemd[1]: cri-containerd-5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3.scope: Deactivated successfully. 
Sep 4 23:53:28.374863 containerd[1516]: time="2025-09-04T23:53:28.374792856Z" level=info msg="shim disconnected" id=cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b namespace=k8s.io Sep 4 23:53:28.374863 containerd[1516]: time="2025-09-04T23:53:28.374857139Z" level=warning msg="cleaning up after shim disconnected" id=cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b namespace=k8s.io Sep 4 23:53:28.374863 containerd[1516]: time="2025-09-04T23:53:28.374866667Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:53:28.391962 containerd[1516]: time="2025-09-04T23:53:28.391738806Z" level=info msg="shim disconnected" id=5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3 namespace=k8s.io Sep 4 23:53:28.391962 containerd[1516]: time="2025-09-04T23:53:28.391796687Z" level=warning msg="cleaning up after shim disconnected" id=5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3 namespace=k8s.io Sep 4 23:53:28.391962 containerd[1516]: time="2025-09-04T23:53:28.391808981Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:53:28.407096 containerd[1516]: time="2025-09-04T23:53:28.407045864Z" level=info msg="TearDown network for sandbox \"5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3\" successfully" Sep 4 23:53:28.407096 containerd[1516]: time="2025-09-04T23:53:28.407086382Z" level=info msg="StopPodSandbox for \"5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3\" returns successfully" Sep 4 23:53:28.552460 containerd[1516]: time="2025-09-04T23:53:28.552412181Z" level=info msg="StopContainer for \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\" returns successfully" Sep 4 23:53:28.552802 containerd[1516]: time="2025-09-04T23:53:28.552774848Z" level=info msg="StopPodSandbox for \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\"" Sep 4 23:53:28.552981 containerd[1516]: time="2025-09-04T23:53:28.552805145Z" level=info 
msg="Container to stop \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:53:28.552981 containerd[1516]: time="2025-09-04T23:53:28.552837709Z" level=info msg="Container to stop \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:53:28.552981 containerd[1516]: time="2025-09-04T23:53:28.552846545Z" level=info msg="Container to stop \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:53:28.552981 containerd[1516]: time="2025-09-04T23:53:28.552855563Z" level=info msg="Container to stop \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:53:28.552981 containerd[1516]: time="2025-09-04T23:53:28.552863708Z" level=info msg="Container to stop \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:53:28.559709 systemd[1]: cri-containerd-0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24.scope: Deactivated successfully. 
Sep 4 23:53:28.651808 containerd[1516]: time="2025-09-04T23:53:28.651669369Z" level=info msg="shim disconnected" id=0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24 namespace=k8s.io Sep 4 23:53:28.651808 containerd[1516]: time="2025-09-04T23:53:28.651755175Z" level=warning msg="cleaning up after shim disconnected" id=0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24 namespace=k8s.io Sep 4 23:53:28.651808 containerd[1516]: time="2025-09-04T23:53:28.651774281Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:53:28.674085 kubelet[2609]: I0904 23:53:28.674008 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6b5e592-27cc-4834-a1c8-0c4c5d79b21b-cilium-config-path\") pod \"d6b5e592-27cc-4834-a1c8-0c4c5d79b21b\" (UID: \"d6b5e592-27cc-4834-a1c8-0c4c5d79b21b\") " Sep 4 23:53:28.674085 kubelet[2609]: I0904 23:53:28.674070 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-459fv\" (UniqueName: \"kubernetes.io/projected/d6b5e592-27cc-4834-a1c8-0c4c5d79b21b-kube-api-access-459fv\") pod \"d6b5e592-27cc-4834-a1c8-0c4c5d79b21b\" (UID: \"d6b5e592-27cc-4834-a1c8-0c4c5d79b21b\") " Sep 4 23:53:28.676424 containerd[1516]: time="2025-09-04T23:53:28.676300530Z" level=info msg="TearDown network for sandbox \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" successfully" Sep 4 23:53:28.676424 containerd[1516]: time="2025-09-04T23:53:28.676355686Z" level=info msg="StopPodSandbox for \"0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24\" returns successfully" Sep 4 23:53:28.696203 kubelet[2609]: I0904 23:53:28.696085 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b5e592-27cc-4834-a1c8-0c4c5d79b21b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d6b5e592-27cc-4834-a1c8-0c4c5d79b21b" (UID: 
"d6b5e592-27cc-4834-a1c8-0c4c5d79b21b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:53:28.699355 kubelet[2609]: I0904 23:53:28.696478 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b5e592-27cc-4834-a1c8-0c4c5d79b21b-kube-api-access-459fv" (OuterVolumeSpecName: "kube-api-access-459fv") pod "d6b5e592-27cc-4834-a1c8-0c4c5d79b21b" (UID: "d6b5e592-27cc-4834-a1c8-0c4c5d79b21b"). InnerVolumeSpecName "kube-api-access-459fv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:53:28.777110 kubelet[2609]: I0904 23:53:28.777055 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-hostproc\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777110 kubelet[2609]: I0904 23:53:28.777112 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd8c77ca-db19-45b9-adb4-0f0791d8d498-clustermesh-secrets\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777110 kubelet[2609]: I0904 23:53:28.777138 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-cgroup\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777381 kubelet[2609]: I0904 23:53:28.777154 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-host-proc-sys-net\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " 
Sep 4 23:53:28.777381 kubelet[2609]: I0904 23:53:28.777170 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-etc-cni-netd\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777381 kubelet[2609]: I0904 23:53:28.777183 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-run\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777381 kubelet[2609]: I0904 23:53:28.777198 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-bpf-maps\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777381 kubelet[2609]: I0904 23:53:28.777215 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-host-proc-sys-kernel\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777381 kubelet[2609]: I0904 23:53:28.777229 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cni-path\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777535 kubelet[2609]: I0904 23:53:28.777221 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-hostproc" (OuterVolumeSpecName: "hostproc") pod 
"bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:53:28.777535 kubelet[2609]: I0904 23:53:28.777247 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-xtables-lock\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777535 kubelet[2609]: I0904 23:53:28.777378 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd8c77ca-db19-45b9-adb4-0f0791d8d498-hubble-tls\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777535 kubelet[2609]: I0904 23:53:28.777401 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-lib-modules\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777535 kubelet[2609]: I0904 23:53:28.777420 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8l2b\" (UniqueName: \"kubernetes.io/projected/bd8c77ca-db19-45b9-adb4-0f0791d8d498-kube-api-access-k8l2b\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777535 kubelet[2609]: I0904 23:53:28.777446 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-config-path\") pod \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\" (UID: \"bd8c77ca-db19-45b9-adb4-0f0791d8d498\") " Sep 4 23:53:28.777689 kubelet[2609]: I0904 23:53:28.777282 2609 
operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:53:28.777689 kubelet[2609]: I0904 23:53:28.777295 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:53:28.777689 kubelet[2609]: I0904 23:53:28.777306 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:53:28.777689 kubelet[2609]: I0904 23:53:28.777317 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:53:28.777689 kubelet[2609]: I0904 23:53:28.777328 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:53:28.777810 kubelet[2609]: I0904 23:53:28.777374 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:53:28.777810 kubelet[2609]: I0904 23:53:28.777385 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:53:28.777810 kubelet[2609]: I0904 23:53:28.777395 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cni-path" (OuterVolumeSpecName: "cni-path") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:53:28.777810 kubelet[2609]: I0904 23:53:28.777501 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:53:28.779792 kubelet[2609]: I0904 23:53:28.779622 2609 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.779792 kubelet[2609]: I0904 23:53:28.779656 2609 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.779792 kubelet[2609]: I0904 23:53:28.779672 2609 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.779792 kubelet[2609]: I0904 23:53:28.779683 2609 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.779792 kubelet[2609]: I0904 23:53:28.779692 2609 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.779792 kubelet[2609]: I0904 23:53:28.779700 2609 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.779792 kubelet[2609]: I0904 23:53:28.779708 2609 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.779792 kubelet[2609]: I0904 23:53:28.779717 2609 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.780150 kubelet[2609]: I0904 23:53:28.779725 2609 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.780150 kubelet[2609]: I0904 23:53:28.779734 2609 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6b5e592-27cc-4834-a1c8-0c4c5d79b21b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.780150 kubelet[2609]: I0904 23:53:28.779745 2609 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-459fv\" (UniqueName: \"kubernetes.io/projected/d6b5e592-27cc-4834-a1c8-0c4c5d79b21b-kube-api-access-459fv\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.780150 kubelet[2609]: I0904 23:53:28.779755 2609 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd8c77ca-db19-45b9-adb4-0f0791d8d498-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.780629 kubelet[2609]: I0904 23:53:28.780594 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd8c77ca-db19-45b9-adb4-0f0791d8d498-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod 
"bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:53:28.781252 kubelet[2609]: I0904 23:53:28.781214 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:53:28.781390 kubelet[2609]: I0904 23:53:28.781367 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd8c77ca-db19-45b9-adb4-0f0791d8d498-kube-api-access-k8l2b" (OuterVolumeSpecName: "kube-api-access-k8l2b") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "kube-api-access-k8l2b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:53:28.783254 kubelet[2609]: I0904 23:53:28.783205 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8c77ca-db19-45b9-adb4-0f0791d8d498-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bd8c77ca-db19-45b9-adb4-0f0791d8d498" (UID: "bd8c77ca-db19-45b9-adb4-0f0791d8d498"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:53:28.880796 kubelet[2609]: I0904 23:53:28.880593 2609 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd8c77ca-db19-45b9-adb4-0f0791d8d498-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.880796 kubelet[2609]: I0904 23:53:28.880624 2609 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd8c77ca-db19-45b9-adb4-0f0791d8d498-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.880796 kubelet[2609]: I0904 23:53:28.880635 2609 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd8c77ca-db19-45b9-adb4-0f0791d8d498-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:28.880796 kubelet[2609]: I0904 23:53:28.880644 2609 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k8l2b\" (UniqueName: \"kubernetes.io/projected/bd8c77ca-db19-45b9-adb4-0f0791d8d498-kube-api-access-k8l2b\") on node \"localhost\" DevicePath \"\"" Sep 4 23:53:29.284985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b-rootfs.mount: Deactivated successfully. Sep 4 23:53:29.285130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b206c39ab4c4c0974ecac79c4ecd1e29eac6d9b3cef96d8ffda6893f37d26f3-rootfs.mount: Deactivated successfully. Sep 4 23:53:29.285218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24-rootfs.mount: Deactivated successfully. Sep 4 23:53:29.285306 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c02593593f67b5efead38e3b44cfb1d40ca234c190ee7514867a4b93d618c24-shm.mount: Deactivated successfully. 
Sep 4 23:53:29.285414 systemd[1]: var-lib-kubelet-pods-d6b5e592\x2d27cc\x2d4834\x2da1c8\x2d0c4c5d79b21b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d459fv.mount: Deactivated successfully. Sep 4 23:53:29.285494 systemd[1]: var-lib-kubelet-pods-bd8c77ca\x2ddb19\x2d45b9\x2dadb4\x2d0f0791d8d498-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk8l2b.mount: Deactivated successfully. Sep 4 23:53:29.285574 systemd[1]: var-lib-kubelet-pods-bd8c77ca\x2ddb19\x2d45b9\x2dadb4\x2d0f0791d8d498-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 23:53:29.285657 systemd[1]: var-lib-kubelet-pods-bd8c77ca\x2ddb19\x2d45b9\x2dadb4\x2d0f0791d8d498-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:53:29.377165 kubelet[2609]: I0904 23:53:29.377120 2609 scope.go:117] "RemoveContainer" containerID="cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b" Sep 4 23:53:29.382851 systemd[1]: Removed slice kubepods-burstable-podbd8c77ca_db19_45b9_adb4_0f0791d8d498.slice - libcontainer container kubepods-burstable-podbd8c77ca_db19_45b9_adb4_0f0791d8d498.slice. Sep 4 23:53:29.383420 containerd[1516]: time="2025-09-04T23:53:29.383296641Z" level=info msg="RemoveContainer for \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\"" Sep 4 23:53:29.383774 systemd[1]: kubepods-burstable-podbd8c77ca_db19_45b9_adb4_0f0791d8d498.slice: Consumed 7.278s CPU time, 125.9M memory peak, 272K read from disk, 13.3M written to disk. Sep 4 23:53:29.387166 systemd[1]: Removed slice kubepods-besteffort-podd6b5e592_27cc_4834_a1c8_0c4c5d79b21b.slice - libcontainer container kubepods-besteffort-podd6b5e592_27cc_4834_a1c8_0c4c5d79b21b.slice. 
Sep 4 23:53:29.391081 containerd[1516]: time="2025-09-04T23:53:29.391046376Z" level=info msg="RemoveContainer for \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\" returns successfully" Sep 4 23:53:29.391890 kubelet[2609]: I0904 23:53:29.391296 2609 scope.go:117] "RemoveContainer" containerID="ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df" Sep 4 23:53:29.393113 containerd[1516]: time="2025-09-04T23:53:29.393068492Z" level=info msg="RemoveContainer for \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\"" Sep 4 23:53:29.396945 containerd[1516]: time="2025-09-04T23:53:29.396907249Z" level=info msg="RemoveContainer for \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\" returns successfully" Sep 4 23:53:29.397179 kubelet[2609]: I0904 23:53:29.397153 2609 scope.go:117] "RemoveContainer" containerID="9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845" Sep 4 23:53:29.399176 containerd[1516]: time="2025-09-04T23:53:29.399136173Z" level=info msg="RemoveContainer for \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\"" Sep 4 23:53:29.403144 containerd[1516]: time="2025-09-04T23:53:29.403102466Z" level=info msg="RemoveContainer for \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\" returns successfully" Sep 4 23:53:29.403380 kubelet[2609]: I0904 23:53:29.403354 2609 scope.go:117] "RemoveContainer" containerID="151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469" Sep 4 23:53:29.404303 containerd[1516]: time="2025-09-04T23:53:29.404264169Z" level=info msg="RemoveContainer for \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\"" Sep 4 23:53:29.408051 containerd[1516]: time="2025-09-04T23:53:29.407837877Z" level=info msg="RemoveContainer for \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\" returns successfully" Sep 4 23:53:29.408113 kubelet[2609]: I0904 23:53:29.407986 2609 scope.go:117] "RemoveContainer" 
containerID="173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1" Sep 4 23:53:29.408924 containerd[1516]: time="2025-09-04T23:53:29.408897654Z" level=info msg="RemoveContainer for \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\"" Sep 4 23:53:29.412210 containerd[1516]: time="2025-09-04T23:53:29.412184601Z" level=info msg="RemoveContainer for \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\" returns successfully" Sep 4 23:53:29.412394 kubelet[2609]: I0904 23:53:29.412358 2609 scope.go:117] "RemoveContainer" containerID="cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b" Sep 4 23:53:29.412549 containerd[1516]: time="2025-09-04T23:53:29.412510087Z" level=error msg="ContainerStatus for \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\": not found" Sep 4 23:53:29.417958 kubelet[2609]: E0904 23:53:29.417926 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\": not found" containerID="cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b" Sep 4 23:53:29.418052 kubelet[2609]: I0904 23:53:29.417959 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b"} err="failed to get container status \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb1fbe14d63949cbf49871257f1f5361265ccb7fe24d0b5d72fcd34273832b1b\": not found" Sep 4 23:53:29.418081 kubelet[2609]: I0904 23:53:29.418052 2609 scope.go:117] "RemoveContainer" 
containerID="ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df" Sep 4 23:53:29.418256 containerd[1516]: time="2025-09-04T23:53:29.418216444Z" level=error msg="ContainerStatus for \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\": not found" Sep 4 23:53:29.418368 kubelet[2609]: E0904 23:53:29.418325 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\": not found" containerID="ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df" Sep 4 23:53:29.418421 kubelet[2609]: I0904 23:53:29.418364 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df"} err="failed to get container status \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff007dd69bfd3e96ed684703356116e26ca97c4835a39b975029a0a054a691df\": not found" Sep 4 23:53:29.418421 kubelet[2609]: I0904 23:53:29.418398 2609 scope.go:117] "RemoveContainer" containerID="9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845" Sep 4 23:53:29.418556 containerd[1516]: time="2025-09-04T23:53:29.418519596Z" level=error msg="ContainerStatus for \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\": not found" Sep 4 23:53:29.418672 kubelet[2609]: E0904 23:53:29.418644 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\": not found" containerID="9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845" Sep 4 23:53:29.418711 kubelet[2609]: I0904 23:53:29.418673 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845"} err="failed to get container status \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c9b7894c63e66963e1af7d82fb90b3a276e0ec92479a3c08937a45596861845\": not found" Sep 4 23:53:29.418711 kubelet[2609]: I0904 23:53:29.418697 2609 scope.go:117] "RemoveContainer" containerID="151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469" Sep 4 23:53:29.418985 containerd[1516]: time="2025-09-04T23:53:29.418942189Z" level=error msg="ContainerStatus for \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\": not found" Sep 4 23:53:29.419095 kubelet[2609]: E0904 23:53:29.419069 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\": not found" containerID="151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469" Sep 4 23:53:29.419139 kubelet[2609]: I0904 23:53:29.419097 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469"} err="failed to get container status \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"151afcef6f1d12e02341646f4336bb01c57af7502409932b941273531b8b6469\": not found" Sep 4 23:53:29.419139 kubelet[2609]: I0904 23:53:29.419116 2609 scope.go:117] "RemoveContainer" containerID="173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1" Sep 4 23:53:29.419313 containerd[1516]: time="2025-09-04T23:53:29.419282212Z" level=error msg="ContainerStatus for \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\": not found" Sep 4 23:53:29.419442 kubelet[2609]: E0904 23:53:29.419420 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\": not found" containerID="173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1" Sep 4 23:53:29.419483 kubelet[2609]: I0904 23:53:29.419439 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1"} err="failed to get container status \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"173e91278dbe5790c5cf3a5807ea114b639315b89f64a8db452118759ccb74d1\": not found" Sep 4 23:53:29.419483 kubelet[2609]: I0904 23:53:29.419453 2609 scope.go:117] "RemoveContainer" containerID="8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95" Sep 4 23:53:29.420442 containerd[1516]: time="2025-09-04T23:53:29.420406092Z" level=info msg="RemoveContainer for \"8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95\"" Sep 4 23:53:29.423861 containerd[1516]: time="2025-09-04T23:53:29.423830744Z" level=info msg="RemoveContainer for 
\"8d3fff3517239323c559636198af691daed4050e81ea4342a5299ee88af6ed95\" returns successfully" Sep 4 23:53:30.023115 kubelet[2609]: I0904 23:53:30.023055 2609 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd8c77ca-db19-45b9-adb4-0f0791d8d498" path="/var/lib/kubelet/pods/bd8c77ca-db19-45b9-adb4-0f0791d8d498/volumes" Sep 4 23:53:30.024235 kubelet[2609]: I0904 23:53:30.024206 2609 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b5e592-27cc-4834-a1c8-0c4c5d79b21b" path="/var/lib/kubelet/pods/d6b5e592-27cc-4834-a1c8-0c4c5d79b21b/volumes" Sep 4 23:53:30.109856 sshd[4272]: Connection closed by 10.0.0.1 port 52982 Sep 4 23:53:30.110508 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:30.122095 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:52982.service: Deactivated successfully. Sep 4 23:53:30.124415 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:53:30.124686 systemd[1]: session-24.scope: Consumed 1.081s CPU time, 30.4M memory peak. Sep 4 23:53:30.125272 systemd-logind[1499]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:53:30.139692 systemd[1]: Started sshd@24-10.0.0.118:22-10.0.0.1:53068.service - OpenSSH per-connection server daemon (10.0.0.1:53068). Sep 4 23:53:30.140781 systemd-logind[1499]: Removed session 24. Sep 4 23:53:30.177661 sshd[4433]: Accepted publickey for core from 10.0.0.1 port 53068 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:30.179309 sshd-session[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:30.183878 systemd-logind[1499]: New session 25 of user core. Sep 4 23:53:30.193450 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 4 23:53:30.682629 sshd[4436]: Connection closed by 10.0.0.1 port 53068 Sep 4 23:53:30.684072 sshd-session[4433]: pam_unix(sshd:session): session closed for user core Sep 4 23:53:30.701191 kubelet[2609]: I0904 23:53:30.700181 2609 memory_manager.go:355] "RemoveStaleState removing state" podUID="d6b5e592-27cc-4834-a1c8-0c4c5d79b21b" containerName="cilium-operator" Sep 4 23:53:30.701191 kubelet[2609]: I0904 23:53:30.700219 2609 memory_manager.go:355] "RemoveStaleState removing state" podUID="bd8c77ca-db19-45b9-adb4-0f0791d8d498" containerName="cilium-agent" Sep 4 23:53:30.702046 systemd[1]: sshd@24-10.0.0.118:22-10.0.0.1:53068.service: Deactivated successfully. Sep 4 23:53:30.706239 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:53:30.710988 systemd-logind[1499]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:53:30.721848 systemd[1]: Started sshd@25-10.0.0.118:22-10.0.0.1:53070.service - OpenSSH per-connection server daemon (10.0.0.1:53070). Sep 4 23:53:30.722923 systemd-logind[1499]: Removed session 25. Sep 4 23:53:30.737542 systemd[1]: Created slice kubepods-burstable-pod9f90e8ad_4a7e_4c79_a8db_6596bf9e1263.slice - libcontainer container kubepods-burstable-pod9f90e8ad_4a7e_4c79_a8db_6596bf9e1263.slice. Sep 4 23:53:30.760262 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 53070 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM Sep 4 23:53:30.761780 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:53:30.766717 systemd-logind[1499]: New session 26 of user core. Sep 4 23:53:30.779467 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 23:53:30.792376 kubelet[2609]: I0904 23:53:30.792303 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-cni-path\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792557 kubelet[2609]: I0904 23:53:30.792394 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gktqc\" (UniqueName: \"kubernetes.io/projected/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-kube-api-access-gktqc\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792557 kubelet[2609]: I0904 23:53:30.792416 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-clustermesh-secrets\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792557 kubelet[2609]: I0904 23:53:30.792434 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-cilium-config-path\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792557 kubelet[2609]: I0904 23:53:30.792452 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-hostproc\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792557 kubelet[2609]: I0904 23:53:30.792467 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-hubble-tls\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792557 kubelet[2609]: I0904 23:53:30.792484 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-etc-cni-netd\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792752 kubelet[2609]: I0904 23:53:30.792513 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-cilium-run\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792752 kubelet[2609]: I0904 23:53:30.792585 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-lib-modules\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792752 kubelet[2609]: I0904 23:53:30.792639 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-xtables-lock\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792752 kubelet[2609]: I0904 23:53:30.792678 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-bpf-maps\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792752 kubelet[2609]: I0904 23:53:30.792714 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-cilium-cgroup\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792917 kubelet[2609]: I0904 23:53:30.792758 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-cilium-ipsec-secrets\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792917 kubelet[2609]: I0904 23:53:30.792796 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-host-proc-sys-net\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.792917 kubelet[2609]: I0904 23:53:30.792828 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f90e8ad-4a7e-4c79-a8db-6596bf9e1263-host-proc-sys-kernel\") pod \"cilium-r7bs9\" (UID: \"9f90e8ad-4a7e-4c79-a8db-6596bf9e1263\") " pod="kube-system/cilium-r7bs9"
Sep 4 23:53:30.832563 sshd[4450]: Connection closed by 10.0.0.1 port 53070
Sep 4 23:53:30.832998 sshd-session[4447]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:30.843480 systemd[1]: sshd@25-10.0.0.118:22-10.0.0.1:53070.service: Deactivated successfully.
Sep 4 23:53:30.846033 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 23:53:30.848413 systemd-logind[1499]: Session 26 logged out. Waiting for processes to exit.
Sep 4 23:53:30.857017 systemd[1]: Started sshd@26-10.0.0.118:22-10.0.0.1:53072.service - OpenSSH per-connection server daemon (10.0.0.1:53072).
Sep 4 23:53:30.858535 systemd-logind[1499]: Removed session 26.
Sep 4 23:53:30.892960 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 53072 ssh2: RSA SHA256:xyH1eYx5XTWQGc6GjNBOPeKQwD+5jNG8y5eQ+nrecEM
Sep 4 23:53:30.894901 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:30.916647 systemd-logind[1499]: New session 27 of user core.
Sep 4 23:53:30.926486 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 23:53:31.021368 kubelet[2609]: E0904 23:53:31.020789 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:31.041789 kubelet[2609]: E0904 23:53:31.041734 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:31.042449 containerd[1516]: time="2025-09-04T23:53:31.042382521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r7bs9,Uid:9f90e8ad-4a7e-4c79-a8db-6596bf9e1263,Namespace:kube-system,Attempt:0,}"
Sep 4 23:53:31.068562 containerd[1516]: time="2025-09-04T23:53:31.068450249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:53:31.068562 containerd[1516]: time="2025-09-04T23:53:31.068531044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:53:31.068727 containerd[1516]: time="2025-09-04T23:53:31.068545993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:53:31.069667 containerd[1516]: time="2025-09-04T23:53:31.069597321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:53:31.090533 systemd[1]: Started cri-containerd-ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b.scope - libcontainer container ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b.
Sep 4 23:53:31.117823 containerd[1516]: time="2025-09-04T23:53:31.117728009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r7bs9,Uid:9f90e8ad-4a7e-4c79-a8db-6596bf9e1263,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\""
Sep 4 23:53:31.118864 kubelet[2609]: E0904 23:53:31.118820 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:31.121655 containerd[1516]: time="2025-09-04T23:53:31.121318770Z" level=info msg="CreateContainer within sandbox \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:53:31.135956 containerd[1516]: time="2025-09-04T23:53:31.135887305Z" level=info msg="CreateContainer within sandbox \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"917281b41dee68cfc5537a89a21562676e61b059147cca4458e066c440784027\""
Sep 4 23:53:31.136479 containerd[1516]: time="2025-09-04T23:53:31.136454103Z" level=info msg="StartContainer for \"917281b41dee68cfc5537a89a21562676e61b059147cca4458e066c440784027\""
Sep 4 23:53:31.169565 systemd[1]: Started cri-containerd-917281b41dee68cfc5537a89a21562676e61b059147cca4458e066c440784027.scope - libcontainer container 917281b41dee68cfc5537a89a21562676e61b059147cca4458e066c440784027.
Sep 4 23:53:31.197960 containerd[1516]: time="2025-09-04T23:53:31.197916371Z" level=info msg="StartContainer for \"917281b41dee68cfc5537a89a21562676e61b059147cca4458e066c440784027\" returns successfully"
Sep 4 23:53:31.208252 systemd[1]: cri-containerd-917281b41dee68cfc5537a89a21562676e61b059147cca4458e066c440784027.scope: Deactivated successfully.
Sep 4 23:53:31.241462 containerd[1516]: time="2025-09-04T23:53:31.241393215Z" level=info msg="shim disconnected" id=917281b41dee68cfc5537a89a21562676e61b059147cca4458e066c440784027 namespace=k8s.io
Sep 4 23:53:31.241462 containerd[1516]: time="2025-09-04T23:53:31.241454242Z" level=warning msg="cleaning up after shim disconnected" id=917281b41dee68cfc5537a89a21562676e61b059147cca4458e066c440784027 namespace=k8s.io
Sep 4 23:53:31.241462 containerd[1516]: time="2025-09-04T23:53:31.241463600Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:53:31.387203 kubelet[2609]: E0904 23:53:31.387042 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:31.389595 containerd[1516]: time="2025-09-04T23:53:31.389546447Z" level=info msg="CreateContainer within sandbox \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:53:31.401859 containerd[1516]: time="2025-09-04T23:53:31.401801971Z" level=info msg="CreateContainer within sandbox \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b847deb6e2a9528ea939f710218c7536bcb98390cb83685625b3ed6f34fb379f\""
Sep 4 23:53:31.403751 containerd[1516]: time="2025-09-04T23:53:31.402626605Z" level=info msg="StartContainer for \"b847deb6e2a9528ea939f710218c7536bcb98390cb83685625b3ed6f34fb379f\""
Sep 4 23:53:31.444584 systemd[1]: Started cri-containerd-b847deb6e2a9528ea939f710218c7536bcb98390cb83685625b3ed6f34fb379f.scope - libcontainer container b847deb6e2a9528ea939f710218c7536bcb98390cb83685625b3ed6f34fb379f.
Sep 4 23:53:31.474928 containerd[1516]: time="2025-09-04T23:53:31.474869104Z" level=info msg="StartContainer for \"b847deb6e2a9528ea939f710218c7536bcb98390cb83685625b3ed6f34fb379f\" returns successfully"
Sep 4 23:53:31.484848 systemd[1]: cri-containerd-b847deb6e2a9528ea939f710218c7536bcb98390cb83685625b3ed6f34fb379f.scope: Deactivated successfully.
Sep 4 23:53:31.516996 containerd[1516]: time="2025-09-04T23:53:31.516920273Z" level=info msg="shim disconnected" id=b847deb6e2a9528ea939f710218c7536bcb98390cb83685625b3ed6f34fb379f namespace=k8s.io
Sep 4 23:53:31.516996 containerd[1516]: time="2025-09-04T23:53:31.516972493Z" level=warning msg="cleaning up after shim disconnected" id=b847deb6e2a9528ea939f710218c7536bcb98390cb83685625b3ed6f34fb379f namespace=k8s.io
Sep 4 23:53:31.516996 containerd[1516]: time="2025-09-04T23:53:31.516984164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:53:32.089227 kubelet[2609]: E0904 23:53:32.089152 2609 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 23:53:32.390842 kubelet[2609]: E0904 23:53:32.390699 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:32.392803 containerd[1516]: time="2025-09-04T23:53:32.392768413Z" level=info msg="CreateContainer within sandbox \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:53:32.409408 containerd[1516]: time="2025-09-04T23:53:32.409356325Z" level=info msg="CreateContainer within sandbox \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d15f9906638b7f3a6ce3143880d18e3c4721a8b884732b2d2df22e5e5bc0594\""
Sep 4 23:53:32.410076 containerd[1516]: time="2025-09-04T23:53:32.410031289Z" level=info msg="StartContainer for \"4d15f9906638b7f3a6ce3143880d18e3c4721a8b884732b2d2df22e5e5bc0594\""
Sep 4 23:53:32.446553 systemd[1]: Started cri-containerd-4d15f9906638b7f3a6ce3143880d18e3c4721a8b884732b2d2df22e5e5bc0594.scope - libcontainer container 4d15f9906638b7f3a6ce3143880d18e3c4721a8b884732b2d2df22e5e5bc0594.
Sep 4 23:53:32.482129 containerd[1516]: time="2025-09-04T23:53:32.482066389Z" level=info msg="StartContainer for \"4d15f9906638b7f3a6ce3143880d18e3c4721a8b884732b2d2df22e5e5bc0594\" returns successfully"
Sep 4 23:53:32.484880 systemd[1]: cri-containerd-4d15f9906638b7f3a6ce3143880d18e3c4721a8b884732b2d2df22e5e5bc0594.scope: Deactivated successfully.
Sep 4 23:53:32.516202 containerd[1516]: time="2025-09-04T23:53:32.516119926Z" level=info msg="shim disconnected" id=4d15f9906638b7f3a6ce3143880d18e3c4721a8b884732b2d2df22e5e5bc0594 namespace=k8s.io
Sep 4 23:53:32.516202 containerd[1516]: time="2025-09-04T23:53:32.516195511Z" level=warning msg="cleaning up after shim disconnected" id=4d15f9906638b7f3a6ce3143880d18e3c4721a8b884732b2d2df22e5e5bc0594 namespace=k8s.io
Sep 4 23:53:32.516202 containerd[1516]: time="2025-09-04T23:53:32.516209388Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:53:32.899782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d15f9906638b7f3a6ce3143880d18e3c4721a8b884732b2d2df22e5e5bc0594-rootfs.mount: Deactivated successfully.
Sep 4 23:53:33.395060 kubelet[2609]: E0904 23:53:33.395016 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:33.396912 containerd[1516]: time="2025-09-04T23:53:33.396790963Z" level=info msg="CreateContainer within sandbox \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:53:33.412789 containerd[1516]: time="2025-09-04T23:53:33.412724662Z" level=info msg="CreateContainer within sandbox \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f952c21bf97100dd22d213022e2c9637017d5047c38d1c0a6b8bcf6749fae366\""
Sep 4 23:53:33.413861 containerd[1516]: time="2025-09-04T23:53:33.413408153Z" level=info msg="StartContainer for \"f952c21bf97100dd22d213022e2c9637017d5047c38d1c0a6b8bcf6749fae366\""
Sep 4 23:53:33.445499 systemd[1]: Started cri-containerd-f952c21bf97100dd22d213022e2c9637017d5047c38d1c0a6b8bcf6749fae366.scope - libcontainer container f952c21bf97100dd22d213022e2c9637017d5047c38d1c0a6b8bcf6749fae366.
Sep 4 23:53:33.473078 systemd[1]: cri-containerd-f952c21bf97100dd22d213022e2c9637017d5047c38d1c0a6b8bcf6749fae366.scope: Deactivated successfully.
Sep 4 23:53:33.475192 containerd[1516]: time="2025-09-04T23:53:33.475126711Z" level=info msg="StartContainer for \"f952c21bf97100dd22d213022e2c9637017d5047c38d1c0a6b8bcf6749fae366\" returns successfully"
Sep 4 23:53:33.499609 containerd[1516]: time="2025-09-04T23:53:33.499526101Z" level=info msg="shim disconnected" id=f952c21bf97100dd22d213022e2c9637017d5047c38d1c0a6b8bcf6749fae366 namespace=k8s.io
Sep 4 23:53:33.499609 containerd[1516]: time="2025-09-04T23:53:33.499581287Z" level=warning msg="cleaning up after shim disconnected" id=f952c21bf97100dd22d213022e2c9637017d5047c38d1c0a6b8bcf6749fae366 namespace=k8s.io
Sep 4 23:53:33.499609 containerd[1516]: time="2025-09-04T23:53:33.499589974Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:53:33.899700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f952c21bf97100dd22d213022e2c9637017d5047c38d1c0a6b8bcf6749fae366-rootfs.mount: Deactivated successfully.
Sep 4 23:53:34.192380 kubelet[2609]: I0904 23:53:34.192182 2609 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T23:53:34Z","lastTransitionTime":"2025-09-04T23:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 4 23:53:34.398578 kubelet[2609]: E0904 23:53:34.398541 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:34.400038 containerd[1516]: time="2025-09-04T23:53:34.399985531Z" level=info msg="CreateContainer within sandbox \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:53:34.419409 containerd[1516]: time="2025-09-04T23:53:34.419367870Z" level=info msg="CreateContainer within sandbox \"ba21da5e6183c72ba4bfba8b47d9246f04c1af4e6d98db26f955fbe47ab2268b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4da95f088154cf54b3321dd9b38ae528cd2273432bc14cd8d30bbc98718e48d7\""
Sep 4 23:53:34.419880 containerd[1516]: time="2025-09-04T23:53:34.419845025Z" level=info msg="StartContainer for \"4da95f088154cf54b3321dd9b38ae528cd2273432bc14cd8d30bbc98718e48d7\""
Sep 4 23:53:34.452475 systemd[1]: Started cri-containerd-4da95f088154cf54b3321dd9b38ae528cd2273432bc14cd8d30bbc98718e48d7.scope - libcontainer container 4da95f088154cf54b3321dd9b38ae528cd2273432bc14cd8d30bbc98718e48d7.
Sep 4 23:53:34.487243 containerd[1516]: time="2025-09-04T23:53:34.487186582Z" level=info msg="StartContainer for \"4da95f088154cf54b3321dd9b38ae528cd2273432bc14cd8d30bbc98718e48d7\" returns successfully"
Sep 4 23:53:34.943394 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 4 23:53:35.020725 kubelet[2609]: E0904 23:53:35.020682 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:35.020851 kubelet[2609]: E0904 23:53:35.020741 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:35.402854 kubelet[2609]: E0904 23:53:35.402817 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:37.042760 kubelet[2609]: E0904 23:53:37.042673 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:38.056861 systemd-networkd[1431]: lxc_health: Link UP
Sep 4 23:53:38.065572 systemd-networkd[1431]: lxc_health: Gained carrier
Sep 4 23:53:39.043666 kubelet[2609]: E0904 23:53:39.043619 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:39.059771 kubelet[2609]: I0904 23:53:39.059676 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r7bs9" podStartSLOduration=9.059653455 podStartE2EDuration="9.059653455s" podCreationTimestamp="2025-09-04 23:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:53:35.416907873 +0000 UTC m=+83.492270811" watchObservedRunningTime="2025-09-04 23:53:39.059653455 +0000 UTC m=+87.135016373"
Sep 4 23:53:39.409768 kubelet[2609]: E0904 23:53:39.409627 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:39.803632 systemd-networkd[1431]: lxc_health: Gained IPv6LL
Sep 4 23:53:40.022102 kubelet[2609]: E0904 23:53:40.020988 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:40.411321 kubelet[2609]: E0904 23:53:40.411289 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:45.677011 sshd[4463]: Connection closed by 10.0.0.1 port 53072
Sep 4 23:53:45.678056 sshd-session[4456]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:45.681889 systemd[1]: sshd@26-10.0.0.118:22-10.0.0.1:53072.service: Deactivated successfully.
Sep 4 23:53:45.684135 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 23:53:45.684861 systemd-logind[1499]: Session 27 logged out. Waiting for processes to exit.
Sep 4 23:53:45.685707 systemd-logind[1499]: Removed session 27.