Sep 11 00:18:18.483771 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025
Sep 11 00:18:18.483818 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:18:18.483834 kernel: BIOS-provided physical RAM map:
Sep 11 00:18:18.483843 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 11 00:18:18.483852 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 11 00:18:18.483861 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 11 00:18:18.483872 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 11 00:18:18.483882 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 11 00:18:18.483901 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 11 00:18:18.483911 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 11 00:18:18.483921 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 11 00:18:18.483931 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 11 00:18:18.483941 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 11 00:18:18.483953 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 11 00:18:18.484081 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 11 00:18:18.484092 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 11 00:18:18.484105 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 11 00:18:18.484114 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 11 00:18:18.484124 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 11 00:18:18.484133 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 11 00:18:18.484143 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 11 00:18:18.484152 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 11 00:18:18.484162 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 11 00:18:18.484172 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:18:18.484182 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 11 00:18:18.484197 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:18:18.484207 kernel: NX (Execute Disable) protection: active
Sep 11 00:18:18.484217 kernel: APIC: Static calls initialized
Sep 11 00:18:18.484228 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 11 00:18:18.484239 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 11 00:18:18.484249 kernel: extended physical RAM map:
Sep 11 00:18:18.484260 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 11 00:18:18.484270 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 11 00:18:18.484280 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 11 00:18:18.484291 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 11 00:18:18.484301 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 11 00:18:18.484317 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 11 00:18:18.484327 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 11 00:18:18.484338 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 11 00:18:18.484349 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 11 00:18:18.484366 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 11 00:18:18.484377 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 11 00:18:18.484392 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 11 00:18:18.484404 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 11 00:18:18.484415 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 11 00:18:18.484426 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 11 00:18:18.486809 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 11 00:18:18.486829 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 11 00:18:18.486839 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 11 00:18:18.486849 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 11 00:18:18.486859 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 11 00:18:18.486877 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 11 00:18:18.486887 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 11 00:18:18.486897 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 11 00:18:18.486906 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 11 00:18:18.486916 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:18:18.486926 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 11 00:18:18.486936 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:18:18.486953 kernel: efi: EFI v2.7 by EDK II
Sep 11 00:18:18.486980 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 11 00:18:18.486990 kernel: random: crng init done
Sep 11 00:18:18.487004 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 11 00:18:18.487015 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 11 00:18:18.487032 kernel: secureboot: Secure boot disabled
Sep 11 00:18:18.487042 kernel: SMBIOS 2.8 present.
Sep 11 00:18:18.487053 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 11 00:18:18.487063 kernel: DMI: Memory slots populated: 1/1
Sep 11 00:18:18.487073 kernel: Hypervisor detected: KVM
Sep 11 00:18:18.487083 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 11 00:18:18.487093 kernel: kvm-clock: using sched offset of 11981691835 cycles
Sep 11 00:18:18.487104 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 11 00:18:18.487115 kernel: tsc: Detected 2794.748 MHz processor
Sep 11 00:18:18.487126 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 11 00:18:18.487140 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 11 00:18:18.487151 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 11 00:18:18.487162 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 11 00:18:18.487173 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 11 00:18:18.487182 kernel: Using GB pages for direct mapping
Sep 11 00:18:18.487192 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:18:18.487202 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 11 00:18:18.487214 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 00:18:18.487224 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:18:18.487238 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:18:18.487248 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 11 00:18:18.487259 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:18:18.487270 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:18:18.487286 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:18:18.487296 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:18:18.487307 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 11 00:18:18.487318 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 11 00:18:18.487329 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 11 00:18:18.487343 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 11 00:18:18.487353 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 11 00:18:18.487364 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 11 00:18:18.487375 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 11 00:18:18.487385 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 11 00:18:18.487396 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 11 00:18:18.487406 kernel: No NUMA configuration found
Sep 11 00:18:18.487417 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 11 00:18:18.487427 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 11 00:18:18.487461 kernel: Zone ranges:
Sep 11 00:18:18.487472 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 11 00:18:18.487483 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 11 00:18:18.487493 kernel: Normal empty
Sep 11 00:18:18.487504 kernel: Device empty
Sep 11 00:18:18.487514 kernel: Movable zone start for each node
Sep 11 00:18:18.487525 kernel: Early memory node ranges
Sep 11 00:18:18.487536 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 11 00:18:18.487546 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 11 00:18:18.487560 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 11 00:18:18.487574 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 11 00:18:18.487585 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 11 00:18:18.487596 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 11 00:18:18.487606 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 11 00:18:18.487617 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 11 00:18:18.487628 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 11 00:18:18.487641 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:18:18.487652 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 11 00:18:18.487675 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 11 00:18:18.487686 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:18:18.487697 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 11 00:18:18.487708 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 11 00:18:18.487723 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 11 00:18:18.487734 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 11 00:18:18.487745 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 11 00:18:18.487756 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 11 00:18:18.487767 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 11 00:18:18.487781 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 11 00:18:18.487792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 11 00:18:18.487804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 11 00:18:18.487815 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 11 00:18:18.487826 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 11 00:18:18.487837 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 11 00:18:18.487848 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 11 00:18:18.487860 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 11 00:18:18.487874 kernel: TSC deadline timer available
Sep 11 00:18:18.487885 kernel: CPU topo: Max. logical packages: 1
Sep 11 00:18:18.487896 kernel: CPU topo: Max. logical dies: 1
Sep 11 00:18:18.487907 kernel: CPU topo: Max. dies per package: 1
Sep 11 00:18:18.487918 kernel: CPU topo: Max. threads per core: 1
Sep 11 00:18:18.487928 kernel: CPU topo: Num. cores per package: 4
Sep 11 00:18:18.487939 kernel: CPU topo: Num. threads per package: 4
Sep 11 00:18:18.487950 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 11 00:18:18.487978 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 11 00:18:18.487990 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 11 00:18:18.488005 kernel: kvm-guest: setup PV sched yield
Sep 11 00:18:18.488017 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 11 00:18:18.488028 kernel: Booting paravirtualized kernel on KVM
Sep 11 00:18:18.488039 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 11 00:18:18.488051 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 11 00:18:18.488062 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 11 00:18:18.488074 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 11 00:18:18.488085 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 11 00:18:18.488096 kernel: kvm-guest: PV spinlocks enabled
Sep 11 00:18:18.488110 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 11 00:18:18.488123 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:18:18.488139 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:18:18.488150 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 00:18:18.488162 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 00:18:18.488173 kernel: Fallback order for Node 0: 0
Sep 11 00:18:18.488184 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 11 00:18:18.488195 kernel: Policy zone: DMA32
Sep 11 00:18:18.488210 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:18:18.488221 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 00:18:18.488232 kernel: ftrace: allocating 40103 entries in 157 pages
Sep 11 00:18:18.488243 kernel: ftrace: allocated 157 pages with 5 groups
Sep 11 00:18:18.488254 kernel: Dynamic Preempt: voluntary
Sep 11 00:18:18.488265 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:18:18.488278 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:18:18.488290 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 00:18:18.488302 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:18:18.488316 kernel: Rude variant of Tasks RCU enabled.
Sep 11 00:18:18.488327 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:18:18.488338 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:18:18.488353 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 00:18:18.488365 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:18:18.488376 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:18:18.488387 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:18:18.488397 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 11 00:18:18.488408 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 00:18:18.488423 kernel: Console: colour dummy device 80x25
Sep 11 00:18:18.488445 kernel: printk: legacy console [ttyS0] enabled
Sep 11 00:18:18.488456 kernel: ACPI: Core revision 20240827
Sep 11 00:18:18.488467 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 11 00:18:18.488477 kernel: APIC: Switch to symmetric I/O mode setup
Sep 11 00:18:18.488488 kernel: x2apic enabled
Sep 11 00:18:18.488499 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 11 00:18:18.488510 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 11 00:18:18.488522 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 11 00:18:18.488538 kernel: kvm-guest: setup PV IPIs
Sep 11 00:18:18.488550 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 11 00:18:18.488562 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 11 00:18:18.488574 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 11 00:18:18.488586 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 11 00:18:18.488598 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 11 00:18:18.488610 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 11 00:18:18.488622 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 11 00:18:18.488633 kernel: Spectre V2 : Mitigation: Retpolines
Sep 11 00:18:18.488650 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 11 00:18:18.488662 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 11 00:18:18.488673 kernel: active return thunk: retbleed_return_thunk
Sep 11 00:18:18.488684 kernel: RETBleed: Mitigation: untrained return thunk
Sep 11 00:18:18.488701 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 11 00:18:18.488713 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 11 00:18:18.488724 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 11 00:18:18.488743 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 11 00:18:18.488761 kernel: active return thunk: srso_return_thunk
Sep 11 00:18:18.488773 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 11 00:18:18.488785 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 11 00:18:18.488796 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 11 00:18:18.488808 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 11 00:18:18.488820 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 11 00:18:18.488831 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 11 00:18:18.488843 kernel: Freeing SMP alternatives memory: 32K
Sep 11 00:18:18.488855 kernel: pid_max: default: 32768 minimum: 301
Sep 11 00:18:18.488872 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 00:18:18.488884 kernel: landlock: Up and running.
Sep 11 00:18:18.488894 kernel: SELinux: Initializing.
Sep 11 00:18:18.488904 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:18:18.488915 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:18:18.488927 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 11 00:18:18.488939 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 11 00:18:18.488952 kernel: ... version: 0
Sep 11 00:18:18.489009 kernel: ... bit width: 48
Sep 11 00:18:18.489020 kernel: ... generic registers: 6
Sep 11 00:18:18.489032 kernel: ... value mask: 0000ffffffffffff
Sep 11 00:18:18.489043 kernel: ... max period: 00007fffffffffff
Sep 11 00:18:18.489054 kernel: ... fixed-purpose events: 0
Sep 11 00:18:18.489064 kernel: ... event mask: 000000000000003f
Sep 11 00:18:18.489076 kernel: signal: max sigframe size: 1776
Sep 11 00:18:18.489094 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 00:18:18.489107 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 00:18:18.489123 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 00:18:18.489135 kernel: smp: Bringing up secondary CPUs ...
Sep 11 00:18:18.489146 kernel: smpboot: x86: Booting SMP configuration:
Sep 11 00:18:18.489158 kernel: .... node #0, CPUs: #1 #2 #3
Sep 11 00:18:18.489170 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 00:18:18.489182 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 11 00:18:18.489194 kernel: Memory: 2424728K/2565800K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 135148K reserved, 0K cma-reserved)
Sep 11 00:18:18.489206 kernel: devtmpfs: initialized
Sep 11 00:18:18.489218 kernel: x86/mm: Memory block size: 128MB
Sep 11 00:18:18.489235 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 11 00:18:18.489248 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 11 00:18:18.489261 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 11 00:18:18.489273 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 11 00:18:18.489285 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 11 00:18:18.489296 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 11 00:18:18.489308 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 00:18:18.489319 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 00:18:18.489331 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 00:18:18.489348 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 00:18:18.489360 kernel: audit: initializing netlink subsys (disabled)
Sep 11 00:18:18.489373 kernel: audit: type=2000 audit(1757549891.486:1): state=initialized audit_enabled=0 res=1
Sep 11 00:18:18.489385 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 00:18:18.489397 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 11 00:18:18.489409 kernel: cpuidle: using governor menu
Sep 11 00:18:18.489421 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 00:18:18.489443 kernel: dca service started, version 1.12.1
Sep 11 00:18:18.489455 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 11 00:18:18.489471 kernel: PCI: Using configuration type 1 for base access
Sep 11 00:18:18.489482 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 11 00:18:18.489492 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 00:18:18.489503 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 00:18:18.489512 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:18:18.489523 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:18:18.489533 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:18:18.489543 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:18:18.489553 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:18:18.489567 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:18:18.489577 kernel: ACPI: Interpreter enabled
Sep 11 00:18:18.489587 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 11 00:18:18.489598 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 11 00:18:18.489608 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 11 00:18:18.489619 kernel: PCI: Using E820 reservations for host bridge windows
Sep 11 00:18:18.489629 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 11 00:18:18.490011 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 00:18:18.490207 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 00:18:18.490380 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 11 00:18:18.490399 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 11 00:18:18.490597 kernel: PCI host bridge to bus 0000:00
Sep 11 00:18:18.490747 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 11 00:18:18.490905 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 11 00:18:18.491071 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 11 00:18:18.491221 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 11 00:18:18.491371 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 11 00:18:18.491529 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 11 00:18:18.491727 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 00:18:18.491912 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 11 00:18:18.492097 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 11 00:18:18.492262 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 11 00:18:18.492419 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 11 00:18:18.492589 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 11 00:18:18.492780 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 11 00:18:18.492942 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 00:18:18.494044 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 11 00:18:18.494251 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 11 00:18:18.494474 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 11 00:18:18.494617 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 11 00:18:18.494747 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 11 00:18:18.494883 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 11 00:18:18.495076 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 11 00:18:18.495206 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 11 00:18:18.495343 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 11 00:18:18.495485 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 11 00:18:18.495637 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 11 00:18:18.495815 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 11 00:18:18.496007 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 11 00:18:18.496210 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 11 00:18:18.496365 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 11 00:18:18.496529 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 11 00:18:18.496677 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 11 00:18:18.496818 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 11 00:18:18.496833 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 11 00:18:18.496845 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 11 00:18:18.496856 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 11 00:18:18.496867 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 11 00:18:18.496878 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 11 00:18:18.496893 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 11 00:18:18.496904 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 11 00:18:18.496915 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 11 00:18:18.496926 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 11 00:18:18.496937 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 11 00:18:18.496948 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 11 00:18:18.496976 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 11 00:18:18.496988 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 11 00:18:18.496998 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 11 00:18:18.497013 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 11 00:18:18.497023 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 11 00:18:18.497034 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 11 00:18:18.497044 kernel: iommu: Default domain type: Translated
Sep 11 00:18:18.497055 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 11 00:18:18.497065 kernel: efivars: Registered efivars operations
Sep 11 00:18:18.497076 kernel: PCI: Using ACPI for IRQ routing
Sep 11 00:18:18.497088 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 11 00:18:18.497098 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 11 00:18:18.497111 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 11 00:18:18.497122 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 11 00:18:18.497132 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 11 00:18:18.497142 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 11 00:18:18.497153 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 11 00:18:18.497164 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 11 00:18:18.497321 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 11 00:18:18.497489 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 11 00:18:18.498400 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 11 00:18:18.498424 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 11 00:18:18.498448 kernel: vgaarb: loaded
Sep 11 00:18:18.498460 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 11 00:18:18.498470 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 11 00:18:18.498481 kernel: clocksource: Switched to clocksource kvm-clock
Sep 11 00:18:18.498492 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 00:18:18.498502 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 00:18:18.498729 kernel: pnp: PnP ACPI init
Sep 11 00:18:18.498755 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 11 00:18:18.498768 kernel: pnp: PnP ACPI: found 6 devices
Sep 11 00:18:18.498780 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 11 00:18:18.498792 kernel: NET: Registered PF_INET protocol family
Sep 11 00:18:18.498804 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 00:18:18.498816 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 00:18:18.498828 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 00:18:18.498840 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 00:18:18.498857 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 00:18:18.498872 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 00:18:18.498887 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:18:18.498902 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:18:18.498916 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 00:18:18.499136 kernel: NET: Registered PF_XDP protocol family
Sep 11 00:18:18.499291 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 11 00:18:18.499450 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 11 00:18:18.499595 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 11 00:18:18.499729 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 11 00:18:18.499862 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 11 00:18:18.500024 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 11 00:18:18.500187 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 11 00:18:18.500202 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 11 00:18:18.500215 kernel: PCI: CLS 0 bytes, default 64
Sep 11 00:18:18.500227 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 11 00:18:18.500243 kernel: Initialise system trusted keyrings
Sep 11 00:18:18.500255 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 00:18:18.500267 kernel: Key type asymmetric registered
Sep 11 00:18:18.500279 kernel: Asymmetric key parser 'x509' registered
Sep 11 00:18:18.500291 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 11 00:18:18.500303 kernel: io scheduler mq-deadline registered
Sep 11 00:18:18.500317 kernel: io scheduler kyber registered
Sep 11 00:18:18.500329 kernel: io scheduler bfq registered
Sep 11 00:18:18.500341 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 11 00:18:18.500354 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 11 00:18:18.500366 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 11 00:18:18.500378 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 11 00:18:18.500390 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:18:18.500402 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:18:18.500414 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 11 00:18:18.500428 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 11 00:18:18.500607 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 11 00:18:18.500625 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 11 00:18:18.500767 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 11 00:18:18.500907 kernel: rtc_cmos 00:04: registered as rtc0
Sep 11 00:18:18.501066 kernel: rtc_cmos 00:04: setting system clock to 2025-09-11T00:18:17 UTC (1757549897)
Sep 11 00:18:18.501083 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 11 00:18:18.501095 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 11 00:18:18.501111 kernel: efifb: probing for efifb
Sep 11 00:18:18.501141 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 11 00:18:18.501153 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 11 00:18:18.501165 kernel: efifb: scrolling: redraw
Sep 11 00:18:18.501177 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 11 00:18:18.501189 kernel: Console: switching to colour frame buffer device 160x50
Sep 11 00:18:18.501201 kernel: fb0: EFI VGA frame buffer device
Sep 11 00:18:18.501213 kernel: pstore: Using crash dump compression: deflate
Sep 11 00:18:18.501225 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 11 00:18:18.501239 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:18:18.501251 kernel: Segment Routing with IPv6
Sep 11 00:18:18.501262 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:18:18.501274 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:18:18.501286 kernel: Key type dns_resolver registered
Sep 11 00:18:18.501298 kernel: IPI shorthand broadcast: enabled
Sep 11 00:18:18.501309 kernel: sched_clock: Marking stable (7959007629, 202769207)->(8317459425, -155682589)
Sep 11 00:18:18.501321 kernel: registered taskstats version 1
Sep 11 00:18:18.501333 kernel: Loading compiled-in X.509 certificates
Sep 11 00:18:18.501347 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f'
Sep 11 00:18:18.501359 kernel: Demotion targets for Node 0: null
Sep 11 00:18:18.501371 kernel: Key type .fscrypt registered
Sep 11 00:18:18.501382 kernel: Key type fscrypt-provisioning registered
Sep 11 00:18:18.501395 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 00:18:18.501407 kernel: ima: Allocated hash algorithm: sha1
Sep 11 00:18:18.501418 kernel: ima: No architecture policies found
Sep 11 00:18:18.501430 kernel: clk: Disabling unused clocks
Sep 11 00:18:18.501457 kernel: Warning: unable to open an initial console.
Sep 11 00:18:18.501473 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 11 00:18:18.501485 kernel: Write protecting the kernel read-only data: 24576k
Sep 11 00:18:18.501497 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 11 00:18:18.501511 kernel: Run /init as init process
Sep 11 00:18:18.501523 kernel: with arguments:
Sep 11 00:18:18.501535 kernel: /init
Sep 11 00:18:18.501547 kernel: with environment:
Sep 11 00:18:18.501561 kernel: HOME=/
Sep 11 00:18:18.501573 kernel: TERM=linux
Sep 11 00:18:18.501593 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 00:18:18.501609 systemd[1]: Successfully made /usr/ read-only.
Sep 11 00:18:18.501623 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:18:18.501635 systemd[1]: Detected virtualization kvm.
Sep 11 00:18:18.501647 systemd[1]: Detected architecture x86-64.
Sep 11 00:18:18.501659 systemd[1]: Running in initrd.
Sep 11 00:18:18.501672 systemd[1]: No hostname configured, using default hostname.
Sep 11 00:18:18.501687 systemd[1]: Hostname set to .
Sep 11 00:18:18.501699 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:18:18.501711 systemd[1]: Queued start job for default target initrd.target.
Sep 11 00:18:18.501724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:18:18.501738 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:18:18.501751 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 00:18:18.501764 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:18:18.501780 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 00:18:18.501794 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 00:18:18.501807 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 00:18:18.501820 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 00:18:18.501833 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:18:18.501846 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:18:18.501858 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:18:18.501871 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:18:18.501885 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:18:18.501898 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:18:18.501911 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:18:18.501924 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:18:18.501936 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 00:18:18.501949 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 00:18:18.501989 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:18:18.502003 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:18:18.502016 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:18:18.502031 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:18:18.502044 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 00:18:18.502057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:18:18.502070 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 00:18:18.502083 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 00:18:18.502095 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 00:18:18.502108 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:18:18.502120 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:18:18.502135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:18:18.502149 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 00:18:18.502162 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:18:18.502174 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 00:18:18.502225 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:18:18.502259 systemd-journald[224]: Collecting audit messages is disabled.
Sep 11 00:18:18.502272 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:18:18.502304 systemd-journald[224]: Journal started
Sep 11 00:18:18.502304 systemd-journald[224]: Runtime Journal (/run/log/journal/3ce07945723646b0857e2d9f68259f3c) is 6M, max 48.5M, 42.4M free.
Sep 11 00:18:18.479383 systemd-modules-load[225]: Inserted module 'overlay'
Sep 11 00:18:18.507086 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:18:18.509169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:18:18.521719 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 00:18:18.523012 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:18:18.533499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 00:18:18.536921 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:18:18.541792 kernel: Bridge firewalling registered
Sep 11 00:18:18.537482 systemd-modules-load[225]: Inserted module 'br_netfilter'
Sep 11 00:18:18.539286 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:18:18.548134 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:18:18.557933 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:18:18.566372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:18:18.566452 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 00:18:18.572790 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:18:18.575390 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:18:18.582409 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 00:18:18.585764 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:18:18.619017 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:18:18.656554 systemd-resolved[262]: Positive Trust Anchors:
Sep 11 00:18:18.656591 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:18:18.656631 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:18:18.660297 systemd-resolved[262]: Defaulting to hostname 'linux'.
Sep 11 00:18:18.662215 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:18:18.670598 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:18:18.788030 kernel: SCSI subsystem initialized
Sep 11 00:18:18.803010 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 00:18:18.834603 kernel: iscsi: registered transport (tcp)
Sep 11 00:18:18.869049 kernel: iscsi: registered transport (qla4xxx)
Sep 11 00:18:18.869168 kernel: QLogic iSCSI HBA Driver
Sep 11 00:18:18.930809 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:18:18.983397 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:18:18.989789 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:18:19.134857 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:18:19.150183 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 00:18:19.253469 kernel: raid6: avx2x4 gen() 18868 MB/s
Sep 11 00:18:19.273463 kernel: raid6: avx2x2 gen() 19079 MB/s
Sep 11 00:18:19.290469 kernel: raid6: avx2x1 gen() 14737 MB/s
Sep 11 00:18:19.290565 kernel: raid6: using algorithm avx2x2 gen() 19079 MB/s
Sep 11 00:18:19.314498 kernel: raid6: .... xor() 13406 MB/s, rmw enabled
Sep 11 00:18:19.314621 kernel: raid6: using avx2x2 recovery algorithm
Sep 11 00:18:19.361457 kernel: xor: automatically using best checksumming function avx
Sep 11 00:18:19.812419 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 00:18:19.838715 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:18:19.845569 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:18:19.906732 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Sep 11 00:18:19.920357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:18:19.940557 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 00:18:19.975806 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Sep 11 00:18:20.036511 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:18:20.045243 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:18:20.211098 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:18:20.220179 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 00:18:20.340021 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:18:20.340260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:18:20.344646 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:18:20.348858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:18:20.351195 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:18:20.389022 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 11 00:18:20.394008 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 11 00:18:20.399298 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 00:18:20.399364 kernel: GPT:9289727 != 19775487
Sep 11 00:18:20.399388 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 00:18:20.399404 kernel: GPT:9289727 != 19775487
Sep 11 00:18:20.399417 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 00:18:20.400983 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:18:20.404472 kernel: cryptd: max_cpu_qlen set to 1000
Sep 11 00:18:20.407451 kernel: libata version 3.00 loaded.
Sep 11 00:18:20.424989 kernel: ahci 0000:00:1f.2: version 3.0
Sep 11 00:18:20.434822 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 11 00:18:20.434892 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 11 00:18:20.435198 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 11 00:18:20.435735 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 11 00:18:20.438674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:18:20.440640 kernel: scsi host0: ahci
Sep 11 00:18:20.454999 kernel: scsi host1: ahci
Sep 11 00:18:20.487005 kernel: scsi host2: ahci
Sep 11 00:18:20.497018 kernel: AES CTR mode by8 optimization enabled
Sep 11 00:18:20.499225 kernel: scsi host3: ahci
Sep 11 00:18:20.504470 kernel: scsi host4: ahci
Sep 11 00:18:20.510797 kernel: scsi host5: ahci
Sep 11 00:18:20.511133 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1
Sep 11 00:18:20.511155 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1
Sep 11 00:18:20.511172 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1
Sep 11 00:18:20.511188 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1
Sep 11 00:18:20.517072 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1
Sep 11 00:18:20.517143 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1
Sep 11 00:18:20.541614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 00:18:20.562949 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 00:18:20.592315 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 00:18:20.613582 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 11 00:18:20.603630 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 00:18:20.623513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:18:20.628883 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 00:18:20.825039 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 11 00:18:20.833907 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 11 00:18:20.834036 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 11 00:18:20.834057 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 11 00:18:20.835129 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 11 00:18:20.837782 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 11 00:18:20.837828 kernel: ata3.00: LPM support broken, forcing max_power
Sep 11 00:18:20.837843 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 11 00:18:20.838632 kernel: ata3.00: applying bridge limits
Sep 11 00:18:20.841459 kernel: ata3.00: LPM support broken, forcing max_power
Sep 11 00:18:20.841533 kernel: ata3.00: configured for UDMA/100
Sep 11 00:18:20.846029 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 11 00:18:20.971561 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 11 00:18:20.972087 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 11 00:18:20.990019 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 11 00:18:21.029138 disk-uuid[630]: Primary Header is updated.
Sep 11 00:18:21.029138 disk-uuid[630]: Secondary Entries is updated.
Sep 11 00:18:21.029138 disk-uuid[630]: Secondary Header is updated.
Sep 11 00:18:21.038574 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:18:21.051524 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:18:21.666298 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:18:21.671691 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:18:21.683738 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:18:21.700033 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:18:21.713033 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 00:18:21.757581 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:18:22.057956 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:18:22.062396 disk-uuid[643]: The operation has completed successfully.
Sep 11 00:18:22.152165 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 00:18:22.153438 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 00:18:22.203955 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 00:18:22.247834 sh[671]: Success
Sep 11 00:18:22.296154 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 00:18:22.296271 kernel: device-mapper: uevent: version 1.0.3
Sep 11 00:18:22.296291 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 00:18:22.350118 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 11 00:18:22.431003 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 00:18:22.453211 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 00:18:22.461802 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 00:18:22.487070 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (683)
Sep 11 00:18:22.492577 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66
Sep 11 00:18:22.492674 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:18:22.507358 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 00:18:22.507459 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 00:18:22.514726 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 00:18:22.516616 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:18:22.518456 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 11 00:18:22.519648 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 00:18:22.543232 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 00:18:22.590041 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707)
Sep 11 00:18:22.600916 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:18:22.601020 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:18:22.630032 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:18:22.630116 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:18:22.648188 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:18:22.681838 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 00:18:22.691562 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 11 00:18:23.051730 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:18:23.059157 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:18:23.166007 ignition[769]: Ignition 2.21.0 Sep 11 00:18:23.166026 ignition[769]: Stage: fetch-offline Sep 11 00:18:23.166076 ignition[769]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:18:23.166089 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:18:23.166207 ignition[769]: parsed url from cmdline: "" Sep 11 00:18:23.166212 ignition[769]: no config URL provided Sep 11 00:18:23.166219 ignition[769]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:18:23.166235 ignition[769]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:18:23.166270 ignition[769]: op(1): [started] loading QEMU firmware config module Sep 11 00:18:23.168077 ignition[769]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 11 00:18:23.208079 ignition[769]: op(1): [finished] loading QEMU firmware config module Sep 11 00:18:23.222855 systemd-networkd[859]: lo: Link UP Sep 11 00:18:23.222874 systemd-networkd[859]: lo: Gained carrier Sep 11 00:18:23.226108 systemd-networkd[859]: Enumeration completed Sep 11 00:18:23.226692 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:18:23.226698 systemd-networkd[859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:18:23.227206 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:18:23.231033 systemd-networkd[859]: eth0: Link UP Sep 11 00:18:23.231257 systemd-networkd[859]: eth0: Gained carrier Sep 11 00:18:23.231276 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:18:23.276124 systemd-networkd[859]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:18:23.300555 systemd[1]: Reached target network.target - Network. Sep 11 00:18:23.311815 unknown[769]: fetched base config from "system" Sep 11 00:18:23.304170 ignition[769]: parsing config with SHA512: 7ab57db702e8b9d3404d73d38bdd089c2f6008eefe0671316f602f5d9df97cd2164f7ef36d3ed094fc72526092af3cfd08e573ba75fe067a54233051bf672d1a Sep 11 00:18:23.311827 unknown[769]: fetched user config from "qemu" Sep 11 00:18:23.312442 ignition[769]: fetch-offline: fetch-offline passed Sep 11 00:18:23.312536 ignition[769]: Ignition finished successfully Sep 11 00:18:23.317797 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:18:23.325937 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 11 00:18:23.328485 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
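For reference: the fetch-offline stage above finds no config URL on the command line, falls back to the QEMU firmware config module, and logs the SHA512 of whatever it parsed. The sketch below mimics that flow under two assumptions: the base path /usr/lib/ignition/user.ign is the one named in the log, while the qemu_fw_cfg sysfs path is how that module usually exposes the Ignition key and is not confirmed here.

```python
# Minimal sketch of an Ignition-style offline fetch: prefer a baked-in user
# config, otherwise read the QEMU fw_cfg blob, and report its SHA512 the way
# the "parsing config with SHA512: ..." line does.
import hashlib
import os

BASE_CONFIG = "/usr/lib/ignition/user.ign"
FW_CFG_BLOB = "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"  # assumed path

def fetch_offline():
    for path in (BASE_CONFIG, FW_CFG_BLOB):
        if os.path.exists(path):
            with open(path, "rb") as f:
                data = f.read()
            print(f"read config from {path}, "
                  f"sha512={hashlib.sha512(data).hexdigest()}")
            return data
    print("no config provided, continuing with an empty config")
    return b"{}"

if __name__ == "__main__":
    fetch_offline()
```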
Sep 11 00:18:23.825677 ignition[866]: Ignition 2.21.0 Sep 11 00:18:23.825699 ignition[866]: Stage: kargs Sep 11 00:18:23.825896 ignition[866]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:18:23.825915 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:18:23.867623 ignition[866]: kargs: kargs passed Sep 11 00:18:23.867756 ignition[866]: Ignition finished successfully Sep 11 00:18:23.885084 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 00:18:23.890001 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 11 00:18:24.231512 ignition[874]: Ignition 2.21.0 Sep 11 00:18:24.231540 ignition[874]: Stage: disks Sep 11 00:18:24.237518 ignition[874]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:18:24.237544 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:18:24.239102 ignition[874]: disks: disks passed Sep 11 00:18:24.239183 ignition[874]: Ignition finished successfully Sep 11 00:18:24.266936 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 11 00:18:24.274131 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 00:18:24.276746 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 00:18:24.295511 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:18:24.303496 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:18:24.307146 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:18:24.317510 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 11 00:18:24.366313 systemd-fsck[884]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 11 00:18:24.377326 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 00:18:24.388431 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 11 00:18:24.728013 kernel: EXT4-fs (vda9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none. Sep 11 00:18:24.731598 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 00:18:24.733706 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 00:18:24.799098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:18:24.810047 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 00:18:24.812343 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 11 00:18:24.812400 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 00:18:24.832749 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892) Sep 11 00:18:24.832780 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:18:24.832793 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:18:24.812432 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:18:24.844969 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:18:24.845037 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:18:24.846544 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:18:24.855090 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
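For reference: by the end of the lines above the initrd has /sysroot on the EXT4 ROOT partition (vda9) and /sysroot/oem on the BTRFS OEM partition (vda6). A quick way to confirm that layout from a shell in the initrd would be to scan the kernel mount table, as in the hypothetical check below.

```python
# Illustrative check: list everything mounted at or under /sysroot by parsing
# /proc/self/mounts, whose whitespace-separated fields are device,
# mountpoint, fstype, options, plus two dump/pass columns.
def sysroot_mounts(path="/proc/self/mounts"):
    rows = []
    with open(path) as f:
        for line in f:
            device, mountpoint, fstype, options, *_ = line.split()
            if mountpoint == "/sysroot" or mountpoint.startswith("/sysroot/"):
                rows.append((device, mountpoint, fstype, options))
    return rows

if __name__ == "__main__":
    for device, mountpoint, fstype, options in sysroot_mounts():
        print(f"{device:<12} {mountpoint:<16} {fstype:<6} {options}")
```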
Sep 11 00:18:24.858542 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 11 00:18:25.039025 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 00:18:25.049993 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory Sep 11 00:18:25.061856 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 00:18:25.072921 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 00:18:25.215182 systemd-networkd[859]: eth0: Gained IPv6LL Sep 11 00:18:25.333569 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 00:18:25.346620 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 00:18:25.375951 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 11 00:18:25.393094 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:18:25.393033 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 11 00:18:25.714908 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 11 00:18:25.723691 ignition[1004]: INFO : Ignition 2.21.0 Sep 11 00:18:25.723691 ignition[1004]: INFO : Stage: mount Sep 11 00:18:25.725812 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:18:25.725812 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:18:25.735475 ignition[1004]: INFO : mount: mount passed Sep 11 00:18:25.736550 ignition[1004]: INFO : Ignition finished successfully Sep 11 00:18:25.743900 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 11 00:18:25.749842 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 11 00:18:25.781462 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:18:25.810590 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1019) Sep 11 00:18:25.816236 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:18:25.816314 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:18:25.837291 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:18:25.837381 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:18:25.841004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
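For reference: the "cut: /sysroot/etc/passwd: No such file or directory" lines above are harmless; initrd-setup-root tries to pull the first colon-separated field out of account databases that do not exist yet on a freshly created root filesystem. The sketch below shows an equivalent extraction with the same tolerant behaviour; the file paths are the ones named in the log.

```python
# Sketch of a "cut -d: -f1"-style extraction over the account files named in
# the log, simply reporting files that are missing instead of failing.
import os

FILES = ["/sysroot/etc/passwd", "/sysroot/etc/group",
         "/sysroot/etc/shadow", "/sysroot/etc/gshadow"]

def first_fields(path):
    if not os.path.exists(path):
        print(f"cut: {path}: No such file or directory")
        return []
    with open(path) as f:
        return [line.split(":", 1)[0] for line in f if line.strip()]

if __name__ == "__main__":
    for path in FILES:
        names = first_fields(path)
        if names:
            print(path, "->", ", ".join(names))
```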
Sep 11 00:18:25.902461 ignition[1036]: INFO : Ignition 2.21.0 Sep 11 00:18:25.902461 ignition[1036]: INFO : Stage: files Sep 11 00:18:25.904729 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:18:25.904729 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:18:25.907582 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping Sep 11 00:18:25.911902 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 11 00:18:25.911902 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 11 00:18:25.922426 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 11 00:18:25.924212 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 11 00:18:25.926281 unknown[1036]: wrote ssh authorized keys file for user: core Sep 11 00:18:25.929358 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 11 00:18:25.935518 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 11 00:18:25.938047 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 11 00:18:26.010371 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 11 00:18:26.364637 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 11 00:18:26.364637 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 11 00:18:26.369243 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 11 00:18:26.491931 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 11 00:18:27.301814 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 11 00:18:27.301814 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 11 00:18:27.311915 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 11 00:18:27.311915 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:18:27.311915 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:18:27.311915 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:18:27.311915 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:18:27.311915 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:18:27.311915 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 
00:18:27.337880 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:18:27.337880 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:18:27.337880 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 11 00:18:27.337880 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 11 00:18:27.337880 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 11 00:18:27.337880 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 11 00:18:27.705324 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 11 00:18:30.598060 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 11 00:18:30.598060 ignition[1036]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 11 00:18:30.604370 ignition[1036]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:18:30.620193 ignition[1036]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:18:30.620193 ignition[1036]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 11 00:18:30.620193 ignition[1036]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 11 00:18:30.620193 ignition[1036]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 11 00:18:30.631473 ignition[1036]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 11 00:18:30.631473 ignition[1036]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 11 00:18:30.631473 ignition[1036]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 11 00:18:30.685304 ignition[1036]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 00:18:30.696461 ignition[1036]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 00:18:30.699557 ignition[1036]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 11 00:18:30.699557 ignition[1036]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 11 00:18:30.699557 ignition[1036]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 11 00:18:30.699557 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:18:30.699557 ignition[1036]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:18:30.699557 ignition[1036]: INFO : files: files passed Sep 11 00:18:30.699557 ignition[1036]: INFO : Ignition finished successfully Sep 11 00:18:30.709397 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 11 00:18:30.715217 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 11 00:18:30.718461 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 11 00:18:30.747322 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 11 00:18:30.747535 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 11 00:18:30.754640 initrd-setup-root-after-ignition[1065]: grep: /sysroot/oem/oem-release: No such file or directory Sep 11 00:18:30.765285 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:18:30.768058 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:18:30.768058 initrd-setup-root-after-ignition[1067]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:18:30.781602 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:18:30.782390 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 11 00:18:30.789468 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 11 00:18:30.884453 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 11 00:18:30.884664 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 11 00:18:30.890786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 11 00:18:30.893448 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 11 00:18:30.893664 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 11 00:18:30.896315 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 11 00:18:30.940985 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 00:18:30.953383 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 11 00:18:30.993715 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:18:31.000214 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:18:31.001941 systemd[1]: Stopped target timers.target - Timer Units. Sep 11 00:18:31.011391 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 11 00:18:31.013691 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 00:18:31.023932 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 11 00:18:31.025552 systemd[1]: Stopped target basic.target - Basic System. Sep 11 00:18:31.033735 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 11 00:18:31.036194 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:18:31.039052 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 11 00:18:31.043280 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
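For reference: the files stage recorded above was driven by a user config that added an SSH key for core, fetched archives such as the Helm tarball into /opt, installed prepare-helm.service, and flipped unit presets. A hypothetical Ignition config producing a subset of those actions might look like the JSON emitted below; it is hand-written for illustration, not recovered from this machine, and the spec version 3.4.0 plus every literal value are assumptions (the Helm URL is the one the log shows being fetched).

```python
# Hypothetical Ignition config fragment, assembled in Python only so the
# structure is easy to see; none of these values come from the logged config.
import json

config = {
    "ignition": {"version": "3.4.0"},          # assumed spec version
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"]},  # placeholder key
        ]
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "contents": {"source":
                 "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
        ]
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm\n"},  # truncated example unit
        ]
    },
}

if __name__ == "__main__":
    print(json.dumps(config, indent=2))
```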
Sep 11 00:18:31.044564 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 11 00:18:31.044947 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:18:31.047023 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 11 00:18:31.047570 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 11 00:18:31.048039 systemd[1]: Stopped target swap.target - Swaps. Sep 11 00:18:31.050397 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 11 00:18:31.050631 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:18:31.051637 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:18:31.052104 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:18:31.052917 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 11 00:18:31.053253 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:18:31.067085 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 11 00:18:31.067316 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 11 00:18:31.073906 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 11 00:18:31.074173 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:18:31.078608 systemd[1]: Stopped target paths.target - Path Units. Sep 11 00:18:31.086301 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 11 00:18:31.086581 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:18:31.090229 systemd[1]: Stopped target slices.target - Slice Units. Sep 11 00:18:31.092830 systemd[1]: Stopped target sockets.target - Socket Units. Sep 11 00:18:31.096703 systemd[1]: iscsid.socket: Deactivated successfully. Sep 11 00:18:31.096893 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:18:31.101837 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 11 00:18:31.102027 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:18:31.107563 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 11 00:18:31.107794 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:18:31.109908 systemd[1]: ignition-files.service: Deactivated successfully. Sep 11 00:18:31.110126 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 11 00:18:31.119702 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 11 00:18:31.126258 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 11 00:18:31.126534 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:18:31.152574 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 11 00:18:31.159369 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 11 00:18:31.159681 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:18:31.161490 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 11 00:18:31.161667 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:18:31.181397 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 11 00:18:31.181620 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 11 00:18:31.250161 ignition[1091]: INFO : Ignition 2.21.0 Sep 11 00:18:31.250161 ignition[1091]: INFO : Stage: umount Sep 11 00:18:31.252504 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:18:31.252504 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:18:31.255945 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 11 00:18:31.259378 ignition[1091]: INFO : umount: umount passed Sep 11 00:18:31.259378 ignition[1091]: INFO : Ignition finished successfully Sep 11 00:18:31.263555 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 11 00:18:31.263878 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 11 00:18:31.269624 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 11 00:18:31.269828 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 11 00:18:31.275030 systemd[1]: Stopped target network.target - Network. Sep 11 00:18:31.277085 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 11 00:18:31.277223 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 11 00:18:31.282710 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 11 00:18:31.282833 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 11 00:18:31.290924 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 11 00:18:31.291091 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 11 00:18:31.291785 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 11 00:18:31.291856 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 11 00:18:31.296019 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 11 00:18:31.296157 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 11 00:18:31.297648 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 11 00:18:31.304003 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 11 00:18:31.316806 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 11 00:18:31.317077 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 11 00:18:31.326133 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 11 00:18:31.326542 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 11 00:18:31.326709 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 11 00:18:31.340940 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 11 00:18:31.343248 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 11 00:18:31.348241 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 11 00:18:31.348347 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:18:31.359249 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 11 00:18:31.360599 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 11 00:18:31.360707 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:18:31.364512 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 11 00:18:31.364607 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:18:31.370466 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 11 00:18:31.370815 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 11 00:18:31.380657 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 11 00:18:31.380813 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:18:31.385359 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:18:31.388946 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 11 00:18:31.389369 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:18:31.394160 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 11 00:18:31.394708 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:18:31.397768 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 11 00:18:31.397892 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 11 00:18:31.404508 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 11 00:18:31.405755 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:18:31.410406 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 11 00:18:31.410539 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:18:31.417249 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 11 00:18:31.417389 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 11 00:18:31.421885 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 11 00:18:31.422037 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:18:31.436390 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 11 00:18:31.437810 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 11 00:18:31.437917 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:18:31.447360 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 11 00:18:31.447480 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:18:31.459276 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:18:31.459401 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:18:31.477728 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 11 00:18:31.477838 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 11 00:18:31.477912 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:18:31.479033 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 11 00:18:31.480443 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 11 00:18:31.494069 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 11 00:18:31.494228 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 11 00:18:31.503635 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 11 00:18:31.516361 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 11 00:18:31.565187 systemd[1]: Switching root. Sep 11 00:18:31.625356 systemd-journald[224]: Journal stopped Sep 11 00:18:34.053858 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). 
Sep 11 00:18:34.053973 kernel: SELinux: policy capability network_peer_controls=1 Sep 11 00:18:34.054002 kernel: SELinux: policy capability open_perms=1 Sep 11 00:18:34.054027 kernel: SELinux: policy capability extended_socket_class=1 Sep 11 00:18:34.054044 kernel: SELinux: policy capability always_check_network=0 Sep 11 00:18:34.054059 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 11 00:18:34.054082 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 11 00:18:34.054098 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 11 00:18:34.054120 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 11 00:18:34.054136 kernel: SELinux: policy capability userspace_initial_context=0 Sep 11 00:18:34.054152 kernel: audit: type=1403 audit(1757549912.164:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 11 00:18:34.054170 systemd[1]: Successfully loaded SELinux policy in 74.290ms. Sep 11 00:18:34.054207 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 26.412ms. Sep 11 00:18:34.054230 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:18:34.054253 systemd[1]: Detected virtualization kvm. Sep 11 00:18:34.054278 systemd[1]: Detected architecture x86-64. Sep 11 00:18:34.054296 systemd[1]: Detected first boot. Sep 11 00:18:34.054316 systemd[1]: Initializing machine ID from VM UUID. Sep 11 00:18:34.054334 zram_generator::config[1142]: No configuration found. Sep 11 00:18:34.054353 kernel: Guest personality initialized and is inactive Sep 11 00:18:34.054369 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 11 00:18:34.054391 kernel: Initialized host personality Sep 11 00:18:34.054416 kernel: NET: Registered PF_VSOCK protocol family Sep 11 00:18:34.054434 systemd[1]: Populated /etc with preset unit settings. Sep 11 00:18:34.054455 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 11 00:18:34.054473 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 11 00:18:34.054490 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 11 00:18:34.054507 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 11 00:18:34.054524 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 11 00:18:34.054554 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 11 00:18:34.054572 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 11 00:18:34.054589 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 11 00:18:34.054607 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 11 00:18:34.054625 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 11 00:18:34.054643 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 11 00:18:34.054660 systemd[1]: Created slice user.slice - User and Session Slice. Sep 11 00:18:34.054677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
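For reference: the SELinux lines above list the policy capabilities compiled into the policy that was just loaded. After boot the same flags are visible under selinuxfs, and the small sketch below reads them back; the /sys/fs/selinux mount point is the kernel default but still an assumption about this image.

```python
# Read SELinux policy capabilities back from selinuxfs, mirroring the
# "SELinux: policy capability <name>=<0|1>" lines in the kernel log.
import os

CAP_DIR = "/sys/fs/selinux/policy_capabilities"  # assumes selinuxfs is mounted here

def policy_capabilities():
    caps = {}
    if not os.path.isdir(CAP_DIR):
        return caps
    for name in sorted(os.listdir(CAP_DIR)):
        with open(os.path.join(CAP_DIR, name)) as f:
            caps[name] = f.read().strip()
    return caps

if __name__ == "__main__":
    for name, value in policy_capabilities().items():
        print(f"policy capability {name}={value}")
```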
Sep 11 00:18:34.054695 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:18:34.054718 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 11 00:18:34.054736 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 11 00:18:34.054754 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 11 00:18:34.054776 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:18:34.054794 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 11 00:18:34.054812 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:18:34.054829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:18:34.054846 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 11 00:18:34.054867 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 11 00:18:34.054884 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 11 00:18:34.054904 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 11 00:18:34.054941 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:18:34.054976 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:18:34.055004 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:18:34.055021 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:18:34.055037 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 11 00:18:34.055053 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 11 00:18:34.055075 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 11 00:18:34.055090 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:18:34.055105 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:18:34.055120 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:18:34.055135 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 11 00:18:34.055150 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 11 00:18:34.055166 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 11 00:18:34.055181 systemd[1]: Mounting media.mount - External Media Directory... Sep 11 00:18:34.055197 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:18:34.055217 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 11 00:18:34.055232 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 11 00:18:34.055247 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 11 00:18:34.055263 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 11 00:18:34.055279 systemd[1]: Reached target machines.target - Containers. Sep 11 00:18:34.055296 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 11 00:18:34.055330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:18:34.055351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:18:34.055373 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 11 00:18:34.055391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:18:34.055408 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:18:34.055426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:18:34.055443 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 11 00:18:34.055459 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:18:34.055476 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 11 00:18:34.055493 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 11 00:18:34.055509 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 11 00:18:34.055531 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 11 00:18:34.055548 systemd[1]: Stopped systemd-fsck-usr.service. Sep 11 00:18:34.055565 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:18:34.055582 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:18:34.055598 kernel: loop: module loaded Sep 11 00:18:34.055614 kernel: fuse: init (API version 7.41) Sep 11 00:18:34.055630 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:18:34.055648 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 00:18:34.055665 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 11 00:18:34.055687 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 11 00:18:34.055719 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:18:34.055737 systemd[1]: verity-setup.service: Deactivated successfully. Sep 11 00:18:34.055752 systemd[1]: Stopped verity-setup.service. Sep 11 00:18:34.055774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:18:34.055791 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 11 00:18:34.055807 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 11 00:18:34.055824 systemd[1]: Mounted media.mount - External Media Directory. Sep 11 00:18:34.055845 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 11 00:18:34.055865 kernel: ACPI: bus type drm_connector registered Sep 11 00:18:34.055881 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 11 00:18:34.055897 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 11 00:18:34.056122 systemd-journald[1206]: Collecting audit messages is disabled. Sep 11 00:18:34.056171 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 11 00:18:34.056189 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 11 00:18:34.056206 systemd-journald[1206]: Journal started Sep 11 00:18:34.056241 systemd-journald[1206]: Runtime Journal (/run/log/journal/3ce07945723646b0857e2d9f68259f3c) is 6M, max 48.5M, 42.4M free. Sep 11 00:18:33.521420 systemd[1]: Queued start job for default target multi-user.target. Sep 11 00:18:33.551827 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 11 00:18:33.557616 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 11 00:18:34.059928 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 11 00:18:34.062194 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:18:34.063851 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:18:34.065417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:18:34.067658 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:18:34.068140 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:18:34.072277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:18:34.072622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:18:34.080820 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 11 00:18:34.081426 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 11 00:18:34.085621 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:18:34.086060 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:18:34.095562 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:18:34.099918 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:18:34.106820 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 11 00:18:34.109569 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 11 00:18:34.114286 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 11 00:18:34.138738 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:18:34.145665 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 11 00:18:34.154132 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 11 00:18:34.154290 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 11 00:18:34.154336 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:18:34.158728 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 11 00:18:34.166936 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 11 00:18:34.171027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:18:34.174165 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 11 00:18:34.180561 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 11 00:18:34.184195 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
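For reference: journald starts here with a 6M runtime journal in /run/log/journal and, a few lines further on, flushes it into the persistent journal under /var/log/journal. A quick way to see how much space either copy occupies is to walk the journal directories, as in the sketch below (the paths are the systemd defaults shown in the log).

```python
# Sum the on-disk size of the runtime and persistent journals, the two
# locations journald reports in the surrounding log entries.
import os

def dir_size(root):
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # files can vanish while journald rotates them
    return total

if __name__ == "__main__":
    for root in ("/run/log/journal", "/var/log/journal"):
        if os.path.isdir(root):
            print(f"{root}: {dir_size(root) / 2**20:.1f} MiB")
```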
Sep 11 00:18:34.192753 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 11 00:18:34.196206 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:18:34.198275 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:18:34.203313 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 11 00:18:34.216037 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 11 00:18:34.223706 systemd-journald[1206]: Time spent on flushing to /var/log/journal/3ce07945723646b0857e2d9f68259f3c is 263.047ms for 1072 entries. Sep 11 00:18:34.223706 systemd-journald[1206]: System Journal (/var/log/journal/3ce07945723646b0857e2d9f68259f3c) is 8M, max 195.6M, 187.6M free. Sep 11 00:18:34.541224 systemd-journald[1206]: Received client request to flush runtime journal. Sep 11 00:18:34.541377 kernel: loop0: detected capacity change from 0 to 146240 Sep 11 00:18:34.225089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:18:34.229543 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 11 00:18:34.233953 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 11 00:18:34.310468 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 11 00:18:34.313590 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 11 00:18:34.452064 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 11 00:18:34.548121 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 11 00:18:34.564312 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 11 00:18:34.570527 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 11 00:18:34.573066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:18:34.578741 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 11 00:18:34.624188 kernel: loop1: detected capacity change from 0 to 221472 Sep 11 00:18:34.668867 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 11 00:18:34.681275 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:18:34.750014 kernel: loop2: detected capacity change from 0 to 113872 Sep 11 00:18:34.760614 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Sep 11 00:18:34.760645 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Sep 11 00:18:34.776759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:18:35.021240 kernel: loop3: detected capacity change from 0 to 146240 Sep 11 00:18:35.064089 kernel: loop4: detected capacity change from 0 to 221472 Sep 11 00:18:35.101532 kernel: loop5: detected capacity change from 0 to 113872 Sep 11 00:18:35.118786 (sd-merge)[1284]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 11 00:18:35.119730 (sd-merge)[1284]: Merged extensions into '/usr'. Sep 11 00:18:35.131182 systemd[1]: Reload requested from client PID 1261 ('systemd-sysext') (unit systemd-sysext.service)... Sep 11 00:18:35.131212 systemd[1]: Reloading... Sep 11 00:18:35.278247 zram_generator::config[1313]: No configuration found. 
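For reference: the (sd-merge) lines above show systemd-sysext overlaying three extension images onto /usr, including the kubernetes image that the Ignition files stage linked into /etc/extensions earlier. The sketch below enumerates extension images the way an operator might before such a merge; the three search directories are the standard systemd-sysext locations and are an assumption about this image.

```python
# List sysext extension images (raw files or symlinks to them) in the usual
# systemd-sysext search directories and show where any symlinks point,
# e.g. /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/....
import os

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]  # assumed defaults

def list_extensions():
    found = []
    for directory in SEARCH_DIRS:
        if not os.path.isdir(directory):
            continue
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            target = os.path.realpath(path) if os.path.islink(path) else path
            found.append((path, target))
    return found

if __name__ == "__main__":
    for path, target in list_extensions():
        print(f"{path} -> {target}" if path != target else path)
```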
Sep 11 00:18:35.484215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:18:35.613831 systemd[1]: Reloading finished in 481 ms. Sep 11 00:18:35.856824 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 00:18:35.891690 systemd[1]: Starting ensure-sysext.service... Sep 11 00:18:35.901529 ldconfig[1256]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 00:18:35.904281 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:18:36.200131 systemd[1]: Reload requested from client PID 1346 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:18:36.200159 systemd[1]: Reloading... Sep 11 00:18:36.231407 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:18:36.231525 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 00:18:36.232131 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 00:18:36.232629 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 00:18:36.234221 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:18:36.234646 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Sep 11 00:18:36.234750 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Sep 11 00:18:36.244314 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:18:36.244338 systemd-tmpfiles[1347]: Skipping /boot Sep 11 00:18:36.270553 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:18:36.270580 systemd-tmpfiles[1347]: Skipping /boot Sep 11 00:18:36.340160 zram_generator::config[1378]: No configuration found. Sep 11 00:18:36.513104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:18:36.656523 systemd[1]: Reloading finished in 455 ms. Sep 11 00:18:36.692045 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 11 00:18:36.694634 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 11 00:18:36.726544 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:18:36.744887 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:18:36.756324 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 00:18:36.763566 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 11 00:18:36.779438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:18:36.797353 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:18:36.805202 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
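For reference: the systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") mean two tmpfiles.d fragments declare the same path, and only the first declaration is kept. The sketch below scans the same configuration directories for such collisions; the directory list and the "second whitespace-separated field is the path" parsing are simplifying assumptions rather than a full reimplementation of tmpfiles.d syntax or its file-shadowing precedence.

```python
# Naive duplicate-path scan over tmpfiles.d fragments. Real tmpfiles.d
# parsing handles quoting, specifiers, and /etc > /run > /usr/lib shadowing;
# this only splits on whitespace and reports repeated paths.
import glob
import os

CONF_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

def find_duplicates():
    seen = {}   # path -> "file:lineno" of the first declaration
    dups = []
    for directory in CONF_DIRS:
        for conf in sorted(glob.glob(os.path.join(directory, "*.conf"))):
            with open(conf) as f:
                for lineno, line in enumerate(f, 1):
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    fields = line.split()
                    if len(fields) < 2:
                        continue
                    path, where = fields[1], f"{conf}:{lineno}"
                    if path in seen:
                        dups.append((where, path, seen[path]))
                    else:
                        seen[path] = where
    return dups

if __name__ == "__main__":
    for where, path, first in find_duplicates():
        print(f'{where}: Duplicate line for path "{path}", '
              f"first declared at {first}, ignoring.")
```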
Sep 11 00:18:36.815737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:18:36.819987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:18:36.824165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:18:36.832572 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:18:36.843080 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:18:36.844598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:18:36.844743 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:18:36.857206 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 11 00:18:36.859733 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:18:36.864805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:18:36.865928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:18:36.872833 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:18:36.878700 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:18:36.879617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:18:36.885050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:18:36.885435 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:18:36.912582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:18:36.913373 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:18:36.916042 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:18:36.923358 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:18:36.930288 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:18:36.934678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:18:36.934885 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:18:36.942699 systemd-udevd[1425]: Using default interface naming scheme 'v255'. Sep 11 00:18:36.948348 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 00:18:36.952077 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:18:36.954375 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 11 00:18:36.960181 augenrules[1452]: No rules Sep 11 00:18:36.963286 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:18:36.963649 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:18:36.965991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:18:36.966305 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:18:36.970909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:18:36.971753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:18:36.976059 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:18:36.976571 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:18:37.005395 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:18:37.012305 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:18:37.014007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:18:37.016389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:18:37.026546 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:18:37.030312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:18:37.036410 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:18:37.037873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:18:37.038104 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:18:37.038293 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:18:37.042290 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:18:37.047316 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 00:18:37.050990 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 00:18:37.052892 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:18:37.053464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:18:37.055865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:18:37.064926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:18:37.069137 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:18:37.069447 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:18:37.072093 augenrules[1462]: /sbin/augenrules: No change Sep 11 00:18:37.082043 systemd[1]: Finished ensure-sysext.service. Sep 11 00:18:37.087618 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 00:18:37.092915 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:18:37.093794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:18:37.097292 augenrules[1517]: No rules Sep 11 00:18:37.105031 systemd[1]: audit-rules.service: Deactivated successfully. 
Sep 11 00:18:37.105391 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:18:37.123377 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:18:37.128386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:18:37.128514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:18:37.137179 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 11 00:18:37.139519 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:18:37.190170 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 11 00:18:37.287993 kernel: mousedev: PS/2 mouse device common for all mice Sep 11 00:18:37.326023 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 11 00:18:37.332988 kernel: ACPI: button: Power Button [PWRF] Sep 11 00:18:37.344678 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 00:18:37.348226 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:18:37.374548 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 11 00:18:37.375011 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 11 00:18:37.375258 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 11 00:18:37.391633 systemd-resolved[1418]: Positive Trust Anchors: Sep 11 00:18:37.391657 systemd-resolved[1418]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:18:37.391811 systemd-resolved[1418]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:18:37.398162 systemd-resolved[1418]: Defaulting to hostname 'linux'. Sep 11 00:18:37.400444 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:18:37.400650 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:18:37.462883 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:18:37.640486 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 11 00:18:37.646095 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:18:37.647949 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 00:18:37.650649 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:18:37.653833 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
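The "Positive Trust Anchors" entry above carries the DNSSEC root trust anchor as a plain DS record. A minimal Python sketch, with the record string copied verbatim from that entry (the field names are standard DS-record terminology, not something the log states), that splits it into its parts:

# Parse the DNSSEC trust anchor exactly as systemd-resolved logged it.
ds_record = (". IN DS 20326 8 2 "
             "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = ds_record.split()

print(f"owner={owner} class={rr_class} type={rr_type}")
print(f"key tag={key_tag} algorithm={algorithm} digest type={digest_type}")
print(f"digest length: {len(digest) * 4} bits")   # 64 hex characters, i.e. a 256-bit digest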
Sep 11 00:18:37.658558 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 00:18:37.664253 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 00:18:37.664505 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:18:37.666092 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 00:18:37.670641 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 00:18:37.672326 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 00:18:37.675349 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:18:37.683017 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 00:18:37.684835 systemd-networkd[1527]: lo: Link UP Sep 11 00:18:37.685934 systemd-networkd[1527]: lo: Gained carrier Sep 11 00:18:37.688868 systemd-networkd[1527]: Enumeration completed Sep 11 00:18:37.698310 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:18:37.700848 systemd-networkd[1527]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:18:37.700865 systemd-networkd[1527]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:18:37.703257 systemd-networkd[1527]: eth0: Link UP Sep 11 00:18:37.703520 systemd-networkd[1527]: eth0: Gained carrier Sep 11 00:18:37.703558 systemd-networkd[1527]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:18:37.718441 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:18:37.730676 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:18:37.732952 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:18:37.754892 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 00:18:37.757076 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 00:18:37.758165 systemd-networkd[1527]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:18:37.759905 systemd-timesyncd[1529]: Network configuration changed, trying to establish connection. Sep 11 00:18:37.761597 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:18:37.765041 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:18:39.266916 systemd-resolved[1418]: Clock change detected. Flushing caches. Sep 11 00:18:39.267132 systemd-timesyncd[1529]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 11 00:18:39.267213 systemd-timesyncd[1529]: Initial clock synchronization to Thu 2025-09-11 00:18:39.264083 UTC. Sep 11 00:18:39.269337 kernel: kvm_amd: TSC scaling supported Sep 11 00:18:39.269386 kernel: kvm_amd: Nested Virtualization enabled Sep 11 00:18:39.269422 kernel: kvm_amd: Nested Paging enabled Sep 11 00:18:39.269443 kernel: kvm_amd: LBR virtualization supported Sep 11 00:18:39.271366 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 11 00:18:39.271416 kernel: kvm_amd: Virtual GIF supported Sep 11 00:18:39.288691 systemd[1]: Reached target network.target - Network. Sep 11 00:18:39.290994 systemd[1]: Reached target sockets.target - Socket Units. 
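The jump in journal timestamps above, from 00:18:37.765041 straight to 00:18:39.266916 where systemd-resolved reports "Clock change detected", is systemd-timesyncd stepping the clock after its first exchange with the time server 10.0.0.1. A rough sketch that bounds the size of the step using only the two timestamps visible in the log (the exact offset is not logged; the 2025-09-11 date is taken from the log itself):

from datetime import datetime

# Last entry stamped with the pre-sync clock and first entry stamped after the step.
before_step = datetime.fromisoformat("2025-09-11 00:18:37.765041")
after_step = datetime.fromisoformat("2025-09-11 00:18:39.266916")

print(f"apparent clock step: about {(after_step - before_step).total_seconds():.3f} s")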
Sep 11 00:18:39.292324 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:18:39.293798 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:18:39.293838 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:18:39.298267 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 00:18:39.305249 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 00:18:39.309093 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:18:39.314953 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:18:39.319326 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:18:39.320774 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:18:39.325333 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 11 00:18:39.336982 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:18:39.338402 jq[1566]: false Sep 11 00:18:39.340354 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 00:18:39.344955 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:18:39.348825 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing passwd entry cache Sep 11 00:18:39.348782 oslogin_cache_refresh[1568]: Refreshing passwd entry cache Sep 11 00:18:39.355184 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 00:18:39.357876 kernel: EDAC MC: Ver: 3.0.0 Sep 11 00:18:39.369159 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 00:18:39.374681 oslogin_cache_refresh[1568]: Failure getting users, quitting Sep 11 00:18:39.377068 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting users, quitting Sep 11 00:18:39.377068 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:18:39.377068 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing group entry cache Sep 11 00:18:39.374712 oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:18:39.374811 oslogin_cache_refresh[1568]: Refreshing group entry cache Sep 11 00:18:39.378062 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:18:39.384352 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:18:39.391463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:18:39.397460 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 00:18:39.399374 oslogin_cache_refresh[1568]: Failure getting groups, quitting Sep 11 00:18:39.401151 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting groups, quitting Sep 11 00:18:39.401151 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Sep 11 00:18:39.398349 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 00:18:39.399396 oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:18:39.416081 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:18:39.427209 extend-filesystems[1567]: Found /dev/vda6 Sep 11 00:18:39.428868 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:18:39.435315 extend-filesystems[1567]: Found /dev/vda9 Sep 11 00:18:39.439697 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:18:39.442074 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:18:39.442584 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 00:18:39.443100 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:18:39.443478 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:18:39.448075 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:18:39.461494 extend-filesystems[1567]: Checking size of /dev/vda9 Sep 11 00:18:39.449127 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 11 00:18:39.459210 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 00:18:39.549898 extend-filesystems[1567]: Resized partition /dev/vda9 Sep 11 00:18:39.561790 extend-filesystems[1602]: resize2fs 1.47.2 (1-Jan-2025) Sep 11 00:18:39.570872 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 11 00:18:39.583328 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:18:39.597746 jq[1588]: true Sep 11 00:18:39.625975 tar[1593]: linux-amd64/helm Sep 11 00:18:39.626381 update_engine[1586]: I20250911 00:18:39.625463 1586 main.cc:92] Flatcar Update Engine starting Sep 11 00:18:39.665290 dbus-daemon[1564]: [system] SELinux support is enabled Sep 11 00:18:39.665700 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 00:18:39.681438 update_engine[1586]: I20250911 00:18:39.679939 1586 update_check_scheduler.cc:74] Next update check in 2m30s Sep 11 00:18:39.677261 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:18:39.677289 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:18:39.677330 (ntainerd)[1608]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:18:39.679252 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 00:18:39.679277 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 00:18:39.702678 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:18:39.711444 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 11 00:18:39.725770 jq[1611]: true Sep 11 00:18:39.737267 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 11 00:18:39.753071 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 00:18:39.789808 extend-filesystems[1602]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 11 00:18:39.789808 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 11 00:18:39.789808 extend-filesystems[1602]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 11 00:18:39.790478 extend-filesystems[1567]: Resized filesystem in /dev/vda9 Sep 11 00:18:39.792642 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:18:39.793080 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 00:18:39.836179 systemd-logind[1579]: Watching system buttons on /dev/input/event2 (Power Button) Sep 11 00:18:39.836224 systemd-logind[1579]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:18:39.839104 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:18:39.840150 systemd-logind[1579]: New seat seat0. Sep 11 00:18:39.852355 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 00:18:39.858341 sshd_keygen[1596]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:18:39.882201 bash[1636]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:18:39.897739 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:18:39.900333 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 00:18:39.948638 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:18:39.971168 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:18:39.983044 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 00:18:40.235136 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:18:40.235570 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:18:40.247666 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 00:18:40.278655 systemd-networkd[1527]: eth0: Gained IPv6LL Sep 11 00:18:40.317401 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:18:40.327954 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:18:40.330461 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:18:40.338281 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 00:18:40.351990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:18:40.356703 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:18:40.365218 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:53770.service - OpenSSH per-connection server daemon (10.0.0.1:53770). Sep 11 00:18:40.369655 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:18:40.405330 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:18:40.409129 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:18:40.413237 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:18:40.458198 systemd[1]: coreos-metadata.service: Deactivated successfully. 
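For scale, the resize activity above grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB. Converting the block counts logged by the kernel and extend-filesystems into sizes:

# Block counts from the EXT4-fs and extend-filesystems messages above;
# block size is 4 KiB per the "(4k) blocks long" wording.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699

old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB, "
      f"grown by {(new_blocks - old_blocks) * BLOCK / 2**30:.2f} GiB")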
Sep 11 00:18:40.458632 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 11 00:18:40.471702 containerd[1608]: time="2025-09-11T00:18:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:18:40.471702 containerd[1608]: time="2025-09-11T00:18:40.469649224Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 11 00:18:40.494426 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:18:40.499490 containerd[1608]: time="2025-09-11T00:18:40.499414885Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.677µs" Sep 11 00:18:40.499490 containerd[1608]: time="2025-09-11T00:18:40.499482632Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:18:40.499574 containerd[1608]: time="2025-09-11T00:18:40.499511446Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:18:40.500081 containerd[1608]: time="2025-09-11T00:18:40.500039747Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:18:40.500081 containerd[1608]: time="2025-09-11T00:18:40.500077568Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:18:40.500158 containerd[1608]: time="2025-09-11T00:18:40.500122713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:18:40.500268 containerd[1608]: time="2025-09-11T00:18:40.500230775Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:18:40.500268 containerd[1608]: time="2025-09-11T00:18:40.500260601Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:18:40.500773 containerd[1608]: time="2025-09-11T00:18:40.500714372Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:18:40.500773 containerd[1608]: time="2025-09-11T00:18:40.500762172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:18:40.500896 containerd[1608]: time="2025-09-11T00:18:40.500779705Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:18:40.500896 containerd[1608]: time="2025-09-11T00:18:40.500791567Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:18:40.501026 containerd[1608]: time="2025-09-11T00:18:40.500988326Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 00:18:40.501387 containerd[1608]: time="2025-09-11T00:18:40.501346728Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:18:40.501446 containerd[1608]: time="2025-09-11T00:18:40.501400639Z" 
level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:18:40.501446 containerd[1608]: time="2025-09-11T00:18:40.501417711Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:18:40.501518 containerd[1608]: time="2025-09-11T00:18:40.501484296Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:18:40.502169 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 00:18:40.502270 containerd[1608]: time="2025-09-11T00:18:40.502197153Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:18:40.502347 containerd[1608]: time="2025-09-11T00:18:40.502318430Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:18:40.948005 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 53770 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:18:40.952179 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302605124Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302757830Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302788037Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302807383Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302834113Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302882243Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302919934Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302939230Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302955521Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302974406Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.302989595Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.303009031Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:18:41.307939 containerd[1608]: 
time="2025-09-11T00:18:41.303303513Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:18:41.307939 containerd[1608]: time="2025-09-11T00:18:41.303373975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.303410554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.303429600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.303447473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.303463403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.303478061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.303492919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.303507586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.303521111Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.303538915Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.303651396Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.306885361Z" level=info msg="Start snapshots syncer" Sep 11 00:18:41.308377 containerd[1608]: time="2025-09-11T00:18:41.306979086Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:18:41.308695 containerd[1608]: time="2025-09-11T00:18:41.307346165Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:18:41.308695 containerd[1608]: time="2025-09-11T00:18:41.307456762Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.307623705Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308013927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308092845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308116169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308131678Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308171693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308205406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308223270Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308295996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:18:41.308882 containerd[1608]: 
time="2025-09-11T00:18:41.308312788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308368492Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308433855Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308460324Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:18:41.308882 containerd[1608]: time="2025-09-11T00:18:41.308473309Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:18:41.314631 containerd[1608]: time="2025-09-11T00:18:41.308485852Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:18:41.314631 containerd[1608]: time="2025-09-11T00:18:41.308499117Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:18:41.314631 containerd[1608]: time="2025-09-11T00:18:41.308530185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:18:41.314631 containerd[1608]: time="2025-09-11T00:18:41.308552217Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:18:41.314631 containerd[1608]: time="2025-09-11T00:18:41.308580580Z" level=info msg="runtime interface created" Sep 11 00:18:41.314631 containerd[1608]: time="2025-09-11T00:18:41.308589687Z" level=info msg="created NRI interface" Sep 11 00:18:41.314631 containerd[1608]: time="2025-09-11T00:18:41.308601569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:18:41.314631 containerd[1608]: time="2025-09-11T00:18:41.308625193Z" level=info msg="Connect containerd service" Sep 11 00:18:41.314631 containerd[1608]: time="2025-09-11T00:18:41.308661341Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:18:41.314631 containerd[1608]: time="2025-09-11T00:18:41.309918910Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:18:41.318298 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:18:41.335599 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:18:41.364950 systemd-logind[1579]: New session 1 of user core. Sep 11 00:18:41.450839 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:18:41.459161 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 00:18:41.579612 (systemd)[1700]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:18:41.584937 systemd-logind[1579]: New session c1 of user core. 
Sep 11 00:18:41.730210 containerd[1608]: time="2025-09-11T00:18:41.727541451Z" level=info msg="Start subscribing containerd event" Sep 11 00:18:41.730210 containerd[1608]: time="2025-09-11T00:18:41.730213021Z" level=info msg="Start recovering state" Sep 11 00:18:41.730760 containerd[1608]: time="2025-09-11T00:18:41.730398058Z" level=info msg="Start event monitor" Sep 11 00:18:41.730760 containerd[1608]: time="2025-09-11T00:18:41.730430509Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:18:41.730760 containerd[1608]: time="2025-09-11T00:18:41.730442071Z" level=info msg="Start streaming server" Sep 11 00:18:41.730760 containerd[1608]: time="2025-09-11T00:18:41.730561324Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:18:41.730760 containerd[1608]: time="2025-09-11T00:18:41.730580089Z" level=info msg="runtime interface starting up..." Sep 11 00:18:41.734101 containerd[1608]: time="2025-09-11T00:18:41.731286985Z" level=info msg="starting plugins..." Sep 11 00:18:41.734101 containerd[1608]: time="2025-09-11T00:18:41.731329765Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:18:41.734101 containerd[1608]: time="2025-09-11T00:18:41.731473815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:18:41.734101 containerd[1608]: time="2025-09-11T00:18:41.731534199Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:18:41.731926 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:18:41.735180 containerd[1608]: time="2025-09-11T00:18:41.735087553Z" level=info msg="containerd successfully booted in 1.267673s" Sep 11 00:18:41.952593 systemd[1700]: Queued start job for default target default.target. Sep 11 00:18:42.008673 tar[1593]: linux-amd64/LICENSE Sep 11 00:18:42.009300 tar[1593]: linux-amd64/README.md Sep 11 00:18:42.016146 systemd[1700]: Created slice app.slice - User Application Slice. Sep 11 00:18:42.016191 systemd[1700]: Reached target paths.target - Paths. Sep 11 00:18:42.016262 systemd[1700]: Reached target timers.target - Timers. Sep 11 00:18:42.019013 systemd[1700]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:18:42.034344 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:18:42.044058 systemd[1700]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:18:42.044274 systemd[1700]: Reached target sockets.target - Sockets. Sep 11 00:18:42.044360 systemd[1700]: Reached target basic.target - Basic System. Sep 11 00:18:42.044428 systemd[1700]: Reached target default.target - Main User Target. Sep 11 00:18:42.044479 systemd[1700]: Startup finished in 423ms. Sep 11 00:18:42.066899 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:18:42.106142 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:18:42.409149 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:53780.service - OpenSSH per-connection server daemon (10.0.0.1:53780). Sep 11 00:18:43.096082 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 53780 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:18:43.099616 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:43.112538 systemd-logind[1579]: New session 2 of user core. Sep 11 00:18:43.126266 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 11 00:18:43.210490 sshd[1725]: Connection closed by 10.0.0.1 port 53780 Sep 11 00:18:43.213147 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:43.371824 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:53780.service: Deactivated successfully. Sep 11 00:18:43.378299 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 00:18:43.384487 systemd-logind[1579]: Session 2 logged out. Waiting for processes to exit. Sep 11 00:18:43.389790 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:53792.service - OpenSSH per-connection server daemon (10.0.0.1:53792). Sep 11 00:18:43.402215 systemd-logind[1579]: Removed session 2. Sep 11 00:18:43.475899 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 53792 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:18:43.478105 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:43.486459 systemd-logind[1579]: New session 3 of user core. Sep 11 00:18:43.530002 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:18:43.616065 sshd[1733]: Connection closed by 10.0.0.1 port 53792 Sep 11 00:18:43.616881 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:43.630739 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:53792.service: Deactivated successfully. Sep 11 00:18:43.636816 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 00:18:43.639550 systemd-logind[1579]: Session 3 logged out. Waiting for processes to exit. Sep 11 00:18:43.642219 systemd-logind[1579]: Removed session 3. Sep 11 00:18:45.918695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:18:45.919462 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:18:45.920685 systemd[1]: Startup finished in 8.098s (kernel) + 14.300s (initrd) + 12.332s (userspace) = 34.731s. Sep 11 00:18:45.972698 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:18:48.538077 kubelet[1747]: E0911 00:18:48.537923 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:18:48.545116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:18:48.545411 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:18:48.546007 systemd[1]: kubelet.service: Consumed 5.770s CPU time, 267.2M memory peak. Sep 11 00:18:53.647475 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:50030.service - OpenSSH per-connection server daemon (10.0.0.1:50030). Sep 11 00:18:53.762286 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 50030 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:18:53.766441 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:53.781308 systemd-logind[1579]: New session 4 of user core. Sep 11 00:18:53.792266 systemd[1]: Started session-4.scope - Session 4 of User core. 
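A quick arithmetic check of the "Startup finished" summary above. The printed per-phase values only sum to 34.730 s, presumably because each phase is rounded to milliseconds for display, so a 1 ms discrepancy against the printed total is expected:

# Figures copied from the "Startup finished" line above.
kernel, initrd, userspace = 8.098, 14.300, 12.332
total_printed = 34.731

print(f"sum of parts: {kernel + initrd + userspace:.3f} s (logged total: {total_printed} s)")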
Sep 11 00:18:53.879049 sshd[1758]: Connection closed by 10.0.0.1 port 50030 Sep 11 00:18:53.879641 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:53.938944 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:50030.service: Deactivated successfully. Sep 11 00:18:53.941264 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:18:53.948424 systemd-logind[1579]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:18:53.949555 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:50044.service - OpenSSH per-connection server daemon (10.0.0.1:50044). Sep 11 00:18:53.954093 systemd-logind[1579]: Removed session 4. Sep 11 00:18:54.051408 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 50044 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:18:54.054385 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:54.076660 systemd-logind[1579]: New session 5 of user core. Sep 11 00:18:54.095095 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 11 00:18:54.179279 sshd[1766]: Connection closed by 10.0.0.1 port 50044 Sep 11 00:18:54.180098 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:54.197983 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:50044.service: Deactivated successfully. Sep 11 00:18:54.204905 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:18:54.213521 systemd-logind[1579]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:18:54.229964 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:50050.service - OpenSSH per-connection server daemon (10.0.0.1:50050). Sep 11 00:18:54.232110 systemd-logind[1579]: Removed session 5. Sep 11 00:18:54.304659 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 50050 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:18:54.308801 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:54.323536 systemd-logind[1579]: New session 6 of user core. Sep 11 00:18:54.336371 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 11 00:18:54.441074 sshd[1774]: Connection closed by 10.0.0.1 port 50050 Sep 11 00:18:54.438674 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:54.475056 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:50050.service: Deactivated successfully. Sep 11 00:18:54.479541 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:18:54.483930 systemd-logind[1579]: Session 6 logged out. Waiting for processes to exit. Sep 11 00:18:54.492815 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:50058.service - OpenSSH per-connection server daemon (10.0.0.1:50058). Sep 11 00:18:54.497181 systemd-logind[1579]: Removed session 6. Sep 11 00:18:54.586014 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 50058 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:18:54.590638 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:54.602309 systemd-logind[1579]: New session 7 of user core. Sep 11 00:18:54.617230 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 11 00:18:54.702447 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:18:54.703008 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:18:54.741136 sudo[1783]: pam_unix(sudo:session): session closed for user root Sep 11 00:18:54.746568 sshd[1782]: Connection closed by 10.0.0.1 port 50058 Sep 11 00:18:54.746953 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:54.773102 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:50058.service: Deactivated successfully. Sep 11 00:18:54.779773 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:18:54.783443 systemd-logind[1579]: Session 7 logged out. Waiting for processes to exit. Sep 11 00:18:54.794916 systemd[1]: Started sshd@7-10.0.0.39:22-10.0.0.1:50060.service - OpenSSH per-connection server daemon (10.0.0.1:50060). Sep 11 00:18:54.799160 systemd-logind[1579]: Removed session 7. Sep 11 00:18:54.902765 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 50060 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:18:54.903703 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:54.923700 systemd-logind[1579]: New session 8 of user core. Sep 11 00:18:54.938093 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 00:18:55.015349 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:18:55.015803 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:18:55.031752 sudo[1793]: pam_unix(sudo:session): session closed for user root Sep 11 00:18:55.045478 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:18:55.045985 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:18:55.072854 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:18:55.191368 augenrules[1815]: No rules Sep 11 00:18:55.193168 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:18:55.193564 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:18:55.197400 sudo[1792]: pam_unix(sudo:session): session closed for user root Sep 11 00:18:55.205743 sshd[1791]: Connection closed by 10.0.0.1 port 50060 Sep 11 00:18:55.206294 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:55.218610 systemd[1]: sshd@7-10.0.0.39:22-10.0.0.1:50060.service: Deactivated successfully. Sep 11 00:18:55.223639 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:18:55.225042 systemd-logind[1579]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:18:55.230675 systemd[1]: Started sshd@8-10.0.0.39:22-10.0.0.1:50064.service - OpenSSH per-connection server daemon (10.0.0.1:50064). Sep 11 00:18:55.231712 systemd-logind[1579]: Removed session 8. Sep 11 00:18:55.332123 sshd[1824]: Accepted publickey for core from 10.0.0.1 port 50064 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:18:55.336771 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:55.350712 systemd-logind[1579]: New session 9 of user core. Sep 11 00:18:55.363365 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 11 00:18:55.433687 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:18:55.436081 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:18:58.796156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:18:58.803560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:18:58.941096 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:18:58.959612 (dockerd)[1851]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:18:59.327048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:18:59.346677 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:18:59.712211 kubelet[1857]: E0911 00:18:59.712004 1857 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:18:59.722899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:18:59.723147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:18:59.723703 systemd[1]: kubelet.service: Consumed 581ms CPU time, 110.9M memory peak. Sep 11 00:19:01.632459 dockerd[1851]: time="2025-09-11T00:19:01.632331938Z" level=info msg="Starting up" Sep 11 00:19:01.642904 dockerd[1851]: time="2025-09-11T00:19:01.642730880Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:19:01.853595 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3731890415-merged.mount: Deactivated successfully. Sep 11 00:19:01.947407 dockerd[1851]: time="2025-09-11T00:19:01.947189508Z" level=info msg="Loading containers: start." Sep 11 00:19:02.235132 kernel: Initializing XFRM netlink socket Sep 11 00:19:03.155683 systemd-networkd[1527]: docker0: Link UP Sep 11 00:19:03.173020 dockerd[1851]: time="2025-09-11T00:19:03.172929358Z" level=info msg="Loading containers: done." Sep 11 00:19:03.508111 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3174817875-merged.mount: Deactivated successfully. 
Sep 11 00:19:03.523575 dockerd[1851]: time="2025-09-11T00:19:03.521995772Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:19:03.523575 dockerd[1851]: time="2025-09-11T00:19:03.523227933Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 00:19:03.526424 dockerd[1851]: time="2025-09-11T00:19:03.524251022Z" level=info msg="Initializing buildkit" Sep 11 00:19:03.654658 dockerd[1851]: time="2025-09-11T00:19:03.654576519Z" level=info msg="Completed buildkit initialization" Sep 11 00:19:03.674401 dockerd[1851]: time="2025-09-11T00:19:03.673120154Z" level=info msg="Daemon has completed initialization" Sep 11 00:19:03.676605 dockerd[1851]: time="2025-09-11T00:19:03.676475797Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:19:03.677292 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:19:05.874782 containerd[1608]: time="2025-09-11T00:19:05.874683401Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 11 00:19:07.252239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739267574.mount: Deactivated successfully. Sep 11 00:19:10.218307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:19:10.229321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:19:10.825383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:19:10.850214 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:19:11.072484 kubelet[2144]: E0911 00:19:11.072355 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:19:11.088781 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:19:11.089312 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:19:11.089987 systemd[1]: kubelet.service: Consumed 501ms CPU time, 110.3M memory peak. 
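With "API listen on /run/docker.sock" above, the daemon can be queried over its Unix socket. A minimal sketch, not part of the boot flow and meant to be run manually by a user with access to that socket, that issues the Engine API's GET /version request with nothing but the standard library; the Version field in the JSON reply should match the 28.0.1 reported in the log:

import socket

SOCKET_PATH = "/run/docker.sock"   # path taken from the "API listen on" entry above

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCKET_PATH)
    # A plain HTTP/1.0 request keeps this dependency-free; the daemon closes
    # the connection after replying, so reading until EOF is enough.
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    response = b""
    while chunk := s.recv(4096):
        response += chunk

headers, _, body = response.partition(b"\r\n\r\n")
print(body.decode())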
Sep 11 00:19:13.371431 containerd[1608]: time="2025-09-11T00:19:13.371339897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:13.376747 containerd[1608]: time="2025-09-11T00:19:13.376617370Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 11 00:19:13.378994 containerd[1608]: time="2025-09-11T00:19:13.378788175Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:13.386448 containerd[1608]: time="2025-09-11T00:19:13.385277946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:13.390906 containerd[1608]: time="2025-09-11T00:19:13.387764071Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 7.513014385s" Sep 11 00:19:13.390906 containerd[1608]: time="2025-09-11T00:19:13.389823965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 11 00:19:13.400727 containerd[1608]: time="2025-09-11T00:19:13.394126871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 11 00:19:17.849738 containerd[1608]: time="2025-09-11T00:19:17.847799776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:17.849738 containerd[1608]: time="2025-09-11T00:19:17.849364767Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 11 00:19:17.863553 containerd[1608]: time="2025-09-11T00:19:17.861139356Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:17.865971 containerd[1608]: time="2025-09-11T00:19:17.865788820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:17.881591 containerd[1608]: time="2025-09-11T00:19:17.879995135Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 4.48109973s" Sep 11 00:19:17.881591 containerd[1608]: time="2025-09-11T00:19:17.880093762Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 11 
00:19:17.881591 containerd[1608]: time="2025-09-11T00:19:17.880889291Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 11 00:19:21.129366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 11 00:19:21.142691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:19:22.307730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:19:22.318464 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:19:23.034238 containerd[1608]: time="2025-09-11T00:19:23.033250205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:23.040195 containerd[1608]: time="2025-09-11T00:19:23.036882082Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 11 00:19:23.050526 containerd[1608]: time="2025-09-11T00:19:23.049727991Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:23.067548 containerd[1608]: time="2025-09-11T00:19:23.062638071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:23.067548 containerd[1608]: time="2025-09-11T00:19:23.064715348Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 5.183787474s" Sep 11 00:19:23.067548 containerd[1608]: time="2025-09-11T00:19:23.065826750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 11 00:19:23.067548 containerd[1608]: time="2025-09-11T00:19:23.066471228Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 11 00:19:23.091899 kubelet[2169]: E0911 00:19:23.091641 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:19:23.099455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:19:23.099781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:19:23.100387 systemd[1]: kubelet.service: Consumed 1.048s CPU time, 110.8M memory peak. Sep 11 00:19:25.209411 update_engine[1586]: I20250911 00:19:25.209254 1586 update_attempter.cc:509] Updating boot flags... Sep 11 00:19:25.739949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount826860333.mount: Deactivated successfully. 
Sep 11 00:19:28.553946 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1336250894 wd_nsec: 1336250416 Sep 11 00:19:29.413884 containerd[1608]: time="2025-09-11T00:19:29.413771238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:29.415811 containerd[1608]: time="2025-09-11T00:19:29.415442840Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 11 00:19:29.418001 containerd[1608]: time="2025-09-11T00:19:29.417641196Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:29.422640 containerd[1608]: time="2025-09-11T00:19:29.422547769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:29.424209 containerd[1608]: time="2025-09-11T00:19:29.424140052Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 6.35758166s" Sep 11 00:19:29.424209 containerd[1608]: time="2025-09-11T00:19:29.424199624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 11 00:19:29.426603 containerd[1608]: time="2025-09-11T00:19:29.426094027Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 11 00:19:30.334581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913562518.mount: Deactivated successfully. 
Sep 11 00:19:32.906596 containerd[1608]: time="2025-09-11T00:19:32.906493073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:32.910933 containerd[1608]: time="2025-09-11T00:19:32.910829272Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 11 00:19:32.913734 containerd[1608]: time="2025-09-11T00:19:32.913660605Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:32.917944 containerd[1608]: time="2025-09-11T00:19:32.917456286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:32.919534 containerd[1608]: time="2025-09-11T00:19:32.919471994Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.493322854s" Sep 11 00:19:32.919534 containerd[1608]: time="2025-09-11T00:19:32.919523261Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 11 00:19:32.928231 containerd[1608]: time="2025-09-11T00:19:32.927826203Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:19:33.129209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 11 00:19:33.137069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:19:33.553530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:19:33.570542 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:19:33.740793 kubelet[2259]: E0911 00:19:33.740457 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:19:33.751078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:19:33.751381 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:19:33.754128 systemd[1]: kubelet.service: Consumed 356ms CPU time, 111.1M memory peak. Sep 11 00:19:34.012742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904769595.mount: Deactivated successfully. 
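The kubelet failures above (restart counters 3 and 4) all trace back to the same cause: /var/lib/kubelet/config.yaml does not exist yet, which is expected on a node whose control plane has not finished bootstrapping, so systemd keeps rescheduling the unit. A minimal sketch of the corresponding check, assuming only the path taken from the error message:

    # Check whether the kubelet config the unit is failing to load exists yet.
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error above

    if not KUBELET_CONFIG.exists():
        print("kubelet config not written yet; restarts are expected until bootstrap finishes")
    else:
        print(f"found {KUBELET_CONFIG} ({KUBELET_CONFIG.stat().st_size} bytes)")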
Sep 11 00:19:34.030322 containerd[1608]: time="2025-09-11T00:19:34.030149740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:19:34.033077 containerd[1608]: time="2025-09-11T00:19:34.032936227Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:19:34.037540 containerd[1608]: time="2025-09-11T00:19:34.037456287Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:19:34.048810 containerd[1608]: time="2025-09-11T00:19:34.042529921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:19:34.048810 containerd[1608]: time="2025-09-11T00:19:34.045187144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.117240565s" Sep 11 00:19:34.048810 containerd[1608]: time="2025-09-11T00:19:34.046245317Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:19:34.048810 containerd[1608]: time="2025-09-11T00:19:34.047270727Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 11 00:19:34.804784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount377732366.mount: Deactivated successfully. 
Sep 11 00:19:42.505888 containerd[1608]: time="2025-09-11T00:19:42.505682810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:42.508275 containerd[1608]: time="2025-09-11T00:19:42.508163374Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 11 00:19:42.510005 containerd[1608]: time="2025-09-11T00:19:42.509913012Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:42.517298 containerd[1608]: time="2025-09-11T00:19:42.515774902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:42.519320 containerd[1608]: time="2025-09-11T00:19:42.519219739Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 8.47191124s" Sep 11 00:19:42.519320 containerd[1608]: time="2025-09-11T00:19:42.519292034Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 11 00:19:43.966681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 11 00:19:43.978136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:19:44.461042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:19:44.479499 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:19:44.578555 kubelet[2358]: E0911 00:19:44.578448 2358 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:19:44.590489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:19:44.590810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:19:44.594389 systemd[1]: kubelet.service: Consumed 395ms CPU time, 109M memory peak. Sep 11 00:19:46.164159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:19:46.164457 systemd[1]: kubelet.service: Consumed 395ms CPU time, 109M memory peak. Sep 11 00:19:46.170801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:19:46.228661 systemd[1]: Reload requested from client PID 2375 ('systemctl') (unit session-9.scope)... Sep 11 00:19:46.230338 systemd[1]: Reloading... Sep 11 00:19:46.454962 zram_generator::config[2423]: No configuration found. Sep 11 00:19:47.193469 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:19:47.405636 systemd[1]: Reloading finished in 1173 ms. 
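Between the image pulls, systemd logs each rescheduled kubelet start with an incrementing restart counter (3, 4, 5 above), followed by a daemon reload requested from session-9. A minimal sketch, using a hypothetical sample rather than any real journal API, of pulling those counters out of lines shaped like the ones above:

    import re

    # Hypothetical sample lines; in practice these would come from journal output.
    sample = [
        "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.",
        "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.",
        "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.",
    ]

    counters = [int(m.group(1)) for line in sample
                if (m := re.search(r"restart counter is at (\d+)", line))]
    print(counters)  # [3, 4, 5]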
Sep 11 00:19:47.491507 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:19:47.491667 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:19:47.492166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:19:47.492259 systemd[1]: kubelet.service: Consumed 707ms CPU time, 98.3M memory peak. Sep 11 00:19:47.499755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:19:48.233538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:19:48.259545 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:19:48.421456 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:19:48.421456 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 11 00:19:48.421456 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:19:48.422110 kubelet[2465]: I0911 00:19:48.421460 2465 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:19:48.822985 kubelet[2465]: I0911 00:19:48.822822 2465 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 00:19:48.822985 kubelet[2465]: I0911 00:19:48.822968 2465 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:19:48.825444 kubelet[2465]: I0911 00:19:48.825374 2465 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 00:19:48.865330 kubelet[2465]: E0911 00:19:48.865248 2465 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:19:48.869885 kubelet[2465]: I0911 00:19:48.869782 2465 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:19:48.885375 kubelet[2465]: I0911 00:19:48.885316 2465 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:19:48.899802 kubelet[2465]: I0911 00:19:48.899723 2465 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:19:48.901258 kubelet[2465]: I0911 00:19:48.901193 2465 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 00:19:48.901559 kubelet[2465]: I0911 00:19:48.901486 2465 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:19:48.901813 kubelet[2465]: I0911 00:19:48.901543 2465 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:19:48.902024 kubelet[2465]: I0911 00:19:48.901826 2465 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:19:48.902024 kubelet[2465]: I0911 00:19:48.901860 2465 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 00:19:48.902098 kubelet[2465]: I0911 00:19:48.902061 2465 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:19:48.907256 kubelet[2465]: I0911 00:19:48.907131 2465 kubelet.go:408] "Attempting to sync node with API server" Sep 11 00:19:48.907256 kubelet[2465]: I0911 00:19:48.907195 2465 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:19:48.907467 kubelet[2465]: I0911 00:19:48.907286 2465 kubelet.go:314] "Adding apiserver pod source" Sep 11 00:19:48.907467 kubelet[2465]: I0911 00:19:48.907321 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:19:48.912734 kubelet[2465]: W0911 00:19:48.911949 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Sep 11 00:19:48.912734 kubelet[2465]: E0911 00:19:48.912049 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:19:48.912734 kubelet[2465]: W0911 00:19:48.912159 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Sep 11 00:19:48.914357 kubelet[2465]: E0911 00:19:48.914299 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:19:48.914357 kubelet[2465]: I0911 00:19:48.912192 2465 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:19:48.914969 kubelet[2465]: I0911 00:19:48.914916 2465 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:19:48.915950 kubelet[2465]: W0911 00:19:48.915914 2465 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:19:48.920959 kubelet[2465]: I0911 00:19:48.920893 2465 server.go:1274] "Started kubelet" Sep 11 00:19:48.921964 kubelet[2465]: I0911 00:19:48.921419 2465 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:19:48.922321 kubelet[2465]: I0911 00:19:48.922252 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:19:48.922736 kubelet[2465]: I0911 00:19:48.922708 2465 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:19:48.924334 kubelet[2465]: I0911 00:19:48.924294 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:19:48.927300 kubelet[2465]: I0911 00:19:48.926551 2465 server.go:449] "Adding debug handlers to kubelet server" Sep 11 00:19:48.931418 kubelet[2465]: I0911 00:19:48.930351 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:19:48.933726 kubelet[2465]: E0911 00:19:48.933674 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:19:48.935324 kubelet[2465]: I0911 00:19:48.935219 2465 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 00:19:48.935498 kubelet[2465]: I0911 00:19:48.935219 2465 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 00:19:48.935588 kubelet[2465]: I0911 00:19:48.935568 2465 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:19:48.936119 kubelet[2465]: E0911 00:19:48.936055 2465 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:19:48.938935 kubelet[2465]: I0911 00:19:48.938894 2465 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:19:48.938935 kubelet[2465]: E0911 00:19:48.939215 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="200ms" Sep 11 00:19:48.938935 kubelet[2465]: W0911 00:19:48.939133 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Sep 11 00:19:48.938935 kubelet[2465]: I0911 00:19:48.939255 2465 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:19:48.938935 kubelet[2465]: E0911 00:19:48.939290 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:19:48.942884 kubelet[2465]: E0911 00:19:48.939621 2465 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186412672c9dcb6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:19:48.92080625 +0000 UTC m=+0.651719366,LastTimestamp:2025-09-11 00:19:48.92080625 +0000 UTC m=+0.651719366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:19:48.944027 kubelet[2465]: I0911 00:19:48.943998 2465 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:19:48.974913 kubelet[2465]: I0911 00:19:48.973417 2465 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 00:19:48.974913 kubelet[2465]: I0911 00:19:48.973456 2465 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 00:19:48.974913 kubelet[2465]: I0911 00:19:48.973494 2465 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:19:49.034969 kubelet[2465]: E0911 00:19:49.034840 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:19:49.091811 kubelet[2465]: I0911 00:19:49.090484 2465 policy_none.go:49] "None policy: Start" Sep 11 00:19:49.094505 kubelet[2465]: I0911 00:19:49.094470 2465 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 00:19:49.094688 kubelet[2465]: I0911 00:19:49.094676 2465 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:19:49.135569 kubelet[2465]: E0911 00:19:49.135467 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Sep 11 00:19:49.157888 kubelet[2465]: E0911 00:19:49.147543 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="400ms" Sep 11 00:19:49.163974 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:19:49.177266 kubelet[2465]: I0911 00:19:49.174913 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:19:49.178653 kubelet[2465]: I0911 00:19:49.178033 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 11 00:19:49.178653 kubelet[2465]: I0911 00:19:49.178073 2465 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 00:19:49.178653 kubelet[2465]: I0911 00:19:49.178116 2465 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 00:19:49.180936 kubelet[2465]: E0911 00:19:49.178182 2465 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:19:49.180936 kubelet[2465]: W0911 00:19:49.179832 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Sep 11 00:19:49.180936 kubelet[2465]: E0911 00:19:49.179932 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:19:49.198505 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:19:49.206189 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 00:19:49.220239 kubelet[2465]: I0911 00:19:49.218709 2465 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:19:49.220239 kubelet[2465]: I0911 00:19:49.219576 2465 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:19:49.222646 kubelet[2465]: I0911 00:19:49.222289 2465 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:19:49.223344 kubelet[2465]: I0911 00:19:49.222810 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:19:49.232539 kubelet[2465]: E0911 00:19:49.232359 2465 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:19:49.298511 systemd[1]: Created slice kubepods-burstable-pod9341c4b560676b6356445bf5ac3702f6.slice - libcontainer container kubepods-burstable-pod9341c4b560676b6356445bf5ac3702f6.slice. 
Sep 11 00:19:49.325610 kubelet[2465]: I0911 00:19:49.325079 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:19:49.325610 kubelet[2465]: E0911 00:19:49.325521 2465 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Sep 11 00:19:49.337738 kubelet[2465]: I0911 00:19:49.336985 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9341c4b560676b6356445bf5ac3702f6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9341c4b560676b6356445bf5ac3702f6\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:19:49.337738 kubelet[2465]: I0911 00:19:49.337029 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:19:49.337738 kubelet[2465]: I0911 00:19:49.337057 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:19:49.337738 kubelet[2465]: I0911 00:19:49.337079 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:19:49.337738 kubelet[2465]: I0911 00:19:49.337098 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9341c4b560676b6356445bf5ac3702f6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9341c4b560676b6356445bf5ac3702f6\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:19:49.338086 kubelet[2465]: I0911 00:19:49.337119 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9341c4b560676b6356445bf5ac3702f6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9341c4b560676b6356445bf5ac3702f6\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:19:49.338086 kubelet[2465]: I0911 00:19:49.337905 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:19:49.338086 kubelet[2465]: I0911 00:19:49.337928 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 11 00:19:49.338086 kubelet[2465]: I0911 00:19:49.337974 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:19:49.347150 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 11 00:19:49.391566 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 11 00:19:49.530252 kubelet[2465]: I0911 00:19:49.529349 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:19:49.530252 kubelet[2465]: E0911 00:19:49.529803 2465 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Sep 11 00:19:49.563477 kubelet[2465]: E0911 00:19:49.563397 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="800ms" Sep 11 00:19:49.637956 kubelet[2465]: E0911 00:19:49.637237 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:49.641554 containerd[1608]: time="2025-09-11T00:19:49.641468686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9341c4b560676b6356445bf5ac3702f6,Namespace:kube-system,Attempt:0,}" Sep 11 00:19:49.682028 kubelet[2465]: E0911 00:19:49.681954 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:49.682784 containerd[1608]: time="2025-09-11T00:19:49.682729024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 11 00:19:49.699623 kubelet[2465]: E0911 00:19:49.699543 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:49.701016 containerd[1608]: time="2025-09-11T00:19:49.700963732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 11 00:19:49.880198 containerd[1608]: time="2025-09-11T00:19:49.878284521Z" level=info msg="connecting to shim 9aeb1ba131531cf790a2d769c10d12243121216e9f67a20797b6aefea0b1a124" address="unix:///run/containerd/s/9fa66f134d0eae4be0a76674d288361379dc9d1b3dee866493b805c94dbf0487" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:19:49.909479 containerd[1608]: time="2025-09-11T00:19:49.908611553Z" level=info msg="connecting to shim 017ff0820144b9876660d5d0ef408d3daaf4811c870f80a235a5acdb1e0292a4" 
address="unix:///run/containerd/s/9aa121cbfc4a5733efa62be2b92bc5a108b6c8afb565ce85f5e7ee338e1b8d6d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:19:49.932239 kubelet[2465]: I0911 00:19:49.931908 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:19:49.932430 kubelet[2465]: E0911 00:19:49.932314 2465 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Sep 11 00:19:49.964306 containerd[1608]: time="2025-09-11T00:19:49.963564227Z" level=info msg="connecting to shim 95541481c9f0e8c321a7e0950141e9000a1419837436903671e7d03b8572c9cb" address="unix:///run/containerd/s/d1b13d7155e546ece0c9199de674138eba3a0eaacc7c849e8f84ef5cb3106cc0" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:19:49.974122 systemd[1]: Started cri-containerd-9aeb1ba131531cf790a2d769c10d12243121216e9f67a20797b6aefea0b1a124.scope - libcontainer container 9aeb1ba131531cf790a2d769c10d12243121216e9f67a20797b6aefea0b1a124. Sep 11 00:19:49.982309 systemd[1]: Started cri-containerd-017ff0820144b9876660d5d0ef408d3daaf4811c870f80a235a5acdb1e0292a4.scope - libcontainer container 017ff0820144b9876660d5d0ef408d3daaf4811c870f80a235a5acdb1e0292a4. Sep 11 00:19:50.104217 kubelet[2465]: W0911 00:19:50.102529 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Sep 11 00:19:50.104217 kubelet[2465]: E0911 00:19:50.102620 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:19:50.161407 kubelet[2465]: W0911 00:19:50.160625 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Sep 11 00:19:50.161407 kubelet[2465]: E0911 00:19:50.160698 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:19:50.200943 systemd[1]: Started cri-containerd-95541481c9f0e8c321a7e0950141e9000a1419837436903671e7d03b8572c9cb.scope - libcontainer container 95541481c9f0e8c321a7e0950141e9000a1419837436903671e7d03b8572c9cb. 
Sep 11 00:19:50.229829 containerd[1608]: time="2025-09-11T00:19:50.229770382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9341c4b560676b6356445bf5ac3702f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aeb1ba131531cf790a2d769c10d12243121216e9f67a20797b6aefea0b1a124\"" Sep 11 00:19:50.233016 kubelet[2465]: E0911 00:19:50.232332 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:50.240134 containerd[1608]: time="2025-09-11T00:19:50.238495999Z" level=info msg="CreateContainer within sandbox \"9aeb1ba131531cf790a2d769c10d12243121216e9f67a20797b6aefea0b1a124\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:19:50.246297 containerd[1608]: time="2025-09-11T00:19:50.244825637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"017ff0820144b9876660d5d0ef408d3daaf4811c870f80a235a5acdb1e0292a4\"" Sep 11 00:19:50.250089 kubelet[2465]: E0911 00:19:50.250052 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:50.256705 containerd[1608]: time="2025-09-11T00:19:50.256654322Z" level=info msg="CreateContainer within sandbox \"017ff0820144b9876660d5d0ef408d3daaf4811c870f80a235a5acdb1e0292a4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:19:50.279222 containerd[1608]: time="2025-09-11T00:19:50.279153747Z" level=info msg="Container 78f71a0024bd9709e22afc5c575f1c1011a7403debfe2a1c7f64dc06d8558503: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:19:50.284919 containerd[1608]: time="2025-09-11T00:19:50.284815520Z" level=info msg="Container 4d12411bce2efabfaf9193853c9d14ff723bc1b31f606dc7ee64199add45b9e8: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:19:50.310935 containerd[1608]: time="2025-09-11T00:19:50.310868799Z" level=info msg="CreateContainer within sandbox \"9aeb1ba131531cf790a2d769c10d12243121216e9f67a20797b6aefea0b1a124\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"78f71a0024bd9709e22afc5c575f1c1011a7403debfe2a1c7f64dc06d8558503\"" Sep 11 00:19:50.312068 containerd[1608]: time="2025-09-11T00:19:50.311997569Z" level=info msg="StartContainer for \"78f71a0024bd9709e22afc5c575f1c1011a7403debfe2a1c7f64dc06d8558503\"" Sep 11 00:19:50.314061 containerd[1608]: time="2025-09-11T00:19:50.313951830Z" level=info msg="connecting to shim 78f71a0024bd9709e22afc5c575f1c1011a7403debfe2a1c7f64dc06d8558503" address="unix:///run/containerd/s/9fa66f134d0eae4be0a76674d288361379dc9d1b3dee866493b805c94dbf0487" protocol=ttrpc version=3 Sep 11 00:19:50.330246 containerd[1608]: time="2025-09-11T00:19:50.328868634Z" level=info msg="CreateContainer within sandbox \"017ff0820144b9876660d5d0ef408d3daaf4811c870f80a235a5acdb1e0292a4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d12411bce2efabfaf9193853c9d14ff723bc1b31f606dc7ee64199add45b9e8\"" Sep 11 00:19:50.330246 containerd[1608]: time="2025-09-11T00:19:50.329623131Z" level=info msg="StartContainer for \"4d12411bce2efabfaf9193853c9d14ff723bc1b31f606dc7ee64199add45b9e8\"" Sep 11 00:19:50.330501 containerd[1608]: time="2025-09-11T00:19:50.330436668Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"95541481c9f0e8c321a7e0950141e9000a1419837436903671e7d03b8572c9cb\"" Sep 11 00:19:50.333397 kubelet[2465]: E0911 00:19:50.333341 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:50.334711 containerd[1608]: time="2025-09-11T00:19:50.334650001Z" level=info msg="connecting to shim 4d12411bce2efabfaf9193853c9d14ff723bc1b31f606dc7ee64199add45b9e8" address="unix:///run/containerd/s/9aa121cbfc4a5733efa62be2b92bc5a108b6c8afb565ce85f5e7ee338e1b8d6d" protocol=ttrpc version=3 Sep 11 00:19:50.336151 containerd[1608]: time="2025-09-11T00:19:50.335503985Z" level=info msg="CreateContainer within sandbox \"95541481c9f0e8c321a7e0950141e9000a1419837436903671e7d03b8572c9cb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:19:50.362971 kubelet[2465]: W0911 00:19:50.362882 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Sep 11 00:19:50.362971 kubelet[2465]: E0911 00:19:50.362981 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:19:50.367256 kubelet[2465]: E0911 00:19:50.365818 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="1.6s" Sep 11 00:19:50.368109 containerd[1608]: time="2025-09-11T00:19:50.368059917Z" level=info msg="Container c23b9e3672b1a46afb022bf59ece65e9a055a5678cfabc9f197c14c2037d83a2: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:19:50.371230 systemd[1]: Started cri-containerd-78f71a0024bd9709e22afc5c575f1c1011a7403debfe2a1c7f64dc06d8558503.scope - libcontainer container 78f71a0024bd9709e22afc5c575f1c1011a7403debfe2a1c7f64dc06d8558503. 
Sep 11 00:19:50.406872 containerd[1608]: time="2025-09-11T00:19:50.406789633Z" level=info msg="CreateContainer within sandbox \"95541481c9f0e8c321a7e0950141e9000a1419837436903671e7d03b8572c9cb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c23b9e3672b1a46afb022bf59ece65e9a055a5678cfabc9f197c14c2037d83a2\"" Sep 11 00:19:50.408378 containerd[1608]: time="2025-09-11T00:19:50.408116886Z" level=info msg="StartContainer for \"c23b9e3672b1a46afb022bf59ece65e9a055a5678cfabc9f197c14c2037d83a2\"" Sep 11 00:19:50.411267 containerd[1608]: time="2025-09-11T00:19:50.411208893Z" level=info msg="connecting to shim c23b9e3672b1a46afb022bf59ece65e9a055a5678cfabc9f197c14c2037d83a2" address="unix:///run/containerd/s/d1b13d7155e546ece0c9199de674138eba3a0eaacc7c849e8f84ef5cb3106cc0" protocol=ttrpc version=3 Sep 11 00:19:50.412678 kubelet[2465]: W0911 00:19:50.412501 2465 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Sep 11 00:19:50.412678 kubelet[2465]: E0911 00:19:50.412621 2465 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:19:50.416318 systemd[1]: Started cri-containerd-4d12411bce2efabfaf9193853c9d14ff723bc1b31f606dc7ee64199add45b9e8.scope - libcontainer container 4d12411bce2efabfaf9193853c9d14ff723bc1b31f606dc7ee64199add45b9e8. Sep 11 00:19:50.456227 systemd[1]: Started cri-containerd-c23b9e3672b1a46afb022bf59ece65e9a055a5678cfabc9f197c14c2037d83a2.scope - libcontainer container c23b9e3672b1a46afb022bf59ece65e9a055a5678cfabc9f197c14c2037d83a2. 
Sep 11 00:19:50.551137 containerd[1608]: time="2025-09-11T00:19:50.550292279Z" level=info msg="StartContainer for \"78f71a0024bd9709e22afc5c575f1c1011a7403debfe2a1c7f64dc06d8558503\" returns successfully" Sep 11 00:19:50.562974 containerd[1608]: time="2025-09-11T00:19:50.562918101Z" level=info msg="StartContainer for \"c23b9e3672b1a46afb022bf59ece65e9a055a5678cfabc9f197c14c2037d83a2\" returns successfully" Sep 11 00:19:50.636943 kubelet[2465]: E0911 00:19:50.636727 2465 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186412672c9dcb6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:19:48.92080625 +0000 UTC m=+0.651719366,LastTimestamp:2025-09-11 00:19:48.92080625 +0000 UTC m=+0.651719366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:19:50.710076 containerd[1608]: time="2025-09-11T00:19:50.709862731Z" level=info msg="StartContainer for \"4d12411bce2efabfaf9193853c9d14ff723bc1b31f606dc7ee64199add45b9e8\" returns successfully" Sep 11 00:19:50.740354 kubelet[2465]: I0911 00:19:50.738062 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:19:50.740354 kubelet[2465]: E0911 00:19:50.738505 2465 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Sep 11 00:19:51.195869 kubelet[2465]: E0911 00:19:51.195812 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:51.199957 kubelet[2465]: E0911 00:19:51.199930 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:51.204554 kubelet[2465]: E0911 00:19:51.204518 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:52.211299 kubelet[2465]: E0911 00:19:52.209773 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:52.340869 kubelet[2465]: I0911 00:19:52.340780 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:19:53.212586 kubelet[2465]: E0911 00:19:53.211659 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:55.293159 kubelet[2465]: E0911 00:19:55.293101 2465 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:19:55.410345 kubelet[2465]: I0911 00:19:55.409760 2465 kubelet_node_status.go:75] "Successfully registered node" node="localhost" 
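For each static pod the sequence above is RunPodSandbox (returns a sandbox id), then CreateContainer within that sandbox (returns a container id), then StartContainer. Collecting the ids makes the later entries easier to follow; the mapping below is copied from the log messages above, not produced by any tool:

    # Sandbox and container ids for the three static pods, taken from the
    # RunPodSandbox / CreateContainer entries above.
    static_pods = {
        "kube-apiserver-localhost": {
            "sandbox": "9aeb1ba131531cf790a2d769c10d12243121216e9f67a20797b6aefea0b1a124",
            "container": "78f71a0024bd9709e22afc5c575f1c1011a7403debfe2a1c7f64dc06d8558503",
        },
        "kube-controller-manager-localhost": {
            "sandbox": "017ff0820144b9876660d5d0ef408d3daaf4811c870f80a235a5acdb1e0292a4",
            "container": "4d12411bce2efabfaf9193853c9d14ff723bc1b31f606dc7ee64199add45b9e8",
        },
        "kube-scheduler-localhost": {
            "sandbox": "95541481c9f0e8c321a7e0950141e9000a1419837436903671e7d03b8572c9cb",
            "container": "c23b9e3672b1a46afb022bf59ece65e9a055a5678cfabc9f197c14c2037d83a2",
        },
    }

    for pod, ids in static_pods.items():
        print(pod, ids["sandbox"][:12], ids["container"][:12])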
Sep 11 00:19:55.916838 kubelet[2465]: I0911 00:19:55.916377 2465 apiserver.go:52] "Watching apiserver" Sep 11 00:19:55.936798 kubelet[2465]: I0911 00:19:55.936691 2465 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 00:19:58.879234 systemd[1]: Reload requested from client PID 2742 ('systemctl') (unit session-9.scope)... Sep 11 00:19:58.880353 systemd[1]: Reloading... Sep 11 00:19:59.115909 zram_generator::config[2781]: No configuration found. Sep 11 00:19:59.117225 kubelet[2465]: E0911 00:19:59.117119 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:59.244137 kubelet[2465]: E0911 00:19:59.243558 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:59.493341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:19:59.800105 systemd[1]: Reloading finished in 915 ms. Sep 11 00:19:59.847016 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:19:59.881082 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:19:59.881500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:19:59.881575 systemd[1]: kubelet.service: Consumed 1.499s CPU time, 131.9M memory peak. Sep 11 00:19:59.894572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:20:00.310657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:20:00.324404 (kubelet)[2832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:20:00.422830 kubelet[2832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:20:00.422830 kubelet[2832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 11 00:20:00.422830 kubelet[2832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:20:00.423448 kubelet[2832]: I0911 00:20:00.422976 2832 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:20:00.442380 kubelet[2832]: I0911 00:20:00.441713 2832 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 00:20:00.442380 kubelet[2832]: I0911 00:20:00.441804 2832 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:20:00.445142 kubelet[2832]: I0911 00:20:00.445076 2832 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 00:20:00.450116 kubelet[2832]: I0911 00:20:00.449743 2832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 11 00:20:00.485215 kubelet[2832]: I0911 00:20:00.454741 2832 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:20:00.516239 kubelet[2832]: I0911 00:20:00.515619 2832 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:20:00.527938 kubelet[2832]: I0911 00:20:00.525678 2832 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 11 00:20:00.527938 kubelet[2832]: I0911 00:20:00.525980 2832 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 00:20:00.527938 kubelet[2832]: I0911 00:20:00.526148 2832 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:20:00.527938 kubelet[2832]: I0911 00:20:00.526192 2832 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:20:00.528411 kubelet[2832]: I0911 00:20:00.526466 2832 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:20:00.528411 kubelet[2832]: I0911 00:20:00.526480 2832 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 00:20:00.528411 kubelet[2832]: I0911 00:20:00.526531 2832 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:20:00.528411 kubelet[2832]: I0911 00:20:00.526690 2832 kubelet.go:408] "Attempting to sync node with API server" Sep 11 00:20:00.528411 kubelet[2832]: I0911 00:20:00.526710 2832 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:20:00.528411 kubelet[2832]: I0911 00:20:00.526766 2832 kubelet.go:314] "Adding apiserver pod source" Sep 11 00:20:00.528411 kubelet[2832]: I0911 00:20:00.526783 2832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:20:00.532259 kubelet[2832]: I0911 00:20:00.530807 2832 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:20:00.532259 kubelet[2832]: I0911 00:20:00.531677 2832 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:20:00.532817 kubelet[2832]: I0911 00:20:00.532797 2832 server.go:1274] "Started kubelet" Sep 11 00:20:00.535693 kubelet[2832]: I0911 00:20:00.535663 2832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:20:00.547374 kubelet[2832]: I0911 00:20:00.547285 2832 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:20:00.552001 kubelet[2832]: I0911 00:20:00.551951 2832 server.go:449] "Adding debug handlers to kubelet server" Sep 11 00:20:00.554280 kubelet[2832]: I0911 00:20:00.554241 2832 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 00:20:00.555143 kubelet[2832]: E0911 00:20:00.555115 2832 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:20:00.558491 kubelet[2832]: I0911 00:20:00.556633 2832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:20:00.559979 kubelet[2832]: I0911 00:20:00.559636 2832 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:20:00.569836 kubelet[2832]: I0911 00:20:00.559869 2832 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:20:00.570006 kubelet[2832]: I0911 00:20:00.560422 2832 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 00:20:00.571415 kubelet[2832]: I0911 00:20:00.560704 2832 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:20:00.580466 kubelet[2832]: I0911 00:20:00.580414 2832 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:20:00.585297 kubelet[2832]: I0911 00:20:00.585245 2832 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:20:00.585297 kubelet[2832]: I0911 00:20:00.585277 2832 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:20:00.591050 kubelet[2832]: E0911 00:20:00.590836 2832 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:20:00.591504 kubelet[2832]: I0911 00:20:00.591470 2832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:20:00.593567 kubelet[2832]: I0911 00:20:00.593531 2832 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:20:00.593716 kubelet[2832]: I0911 00:20:00.593700 2832 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 00:20:00.593818 kubelet[2832]: I0911 00:20:00.593801 2832 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 00:20:00.594053 kubelet[2832]: E0911 00:20:00.594018 2832 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:20:00.598097 sudo[2855]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 11 00:20:00.598635 sudo[2855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 11 00:20:00.670985 kubelet[2832]: I0911 00:20:00.670927 2832 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 00:20:00.670985 kubelet[2832]: I0911 00:20:00.670951 2832 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 00:20:00.670985 kubelet[2832]: I0911 00:20:00.670974 2832 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:20:00.671274 kubelet[2832]: I0911 00:20:00.671164 2832 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:20:00.671274 kubelet[2832]: I0911 00:20:00.671177 2832 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:20:00.671274 kubelet[2832]: I0911 00:20:00.671200 2832 policy_none.go:49] "None policy: Start" Sep 11 00:20:00.672967 kubelet[2832]: I0911 00:20:00.672237 2832 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 00:20:00.672967 kubelet[2832]: I0911 00:20:00.672283 2832 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:20:00.672967 kubelet[2832]: I0911 00:20:00.672702 2832 state_mem.go:75] "Updated machine memory state" Sep 11 00:20:00.682607 kubelet[2832]: I0911 00:20:00.682552 2832 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:20:00.682837 kubelet[2832]: I0911 00:20:00.682809 2832 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:20:00.682936 kubelet[2832]: I0911 00:20:00.682824 2832 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:20:00.683191 kubelet[2832]: I0911 00:20:00.683160 2832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:20:00.723450 kubelet[2832]: E0911 00:20:00.723311 2832 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:20:00.771874 kubelet[2832]: I0911 00:20:00.771507 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9341c4b560676b6356445bf5ac3702f6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9341c4b560676b6356445bf5ac3702f6\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:20:00.771874 kubelet[2832]: I0911 00:20:00.771571 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:20:00.771874 kubelet[2832]: I0911 00:20:00.771607 2832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:20:00.771874 kubelet[2832]: I0911 00:20:00.771625 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:20:00.771874 kubelet[2832]: I0911 00:20:00.771647 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:20:00.772206 kubelet[2832]: I0911 00:20:00.771665 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9341c4b560676b6356445bf5ac3702f6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9341c4b560676b6356445bf5ac3702f6\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:20:00.772206 kubelet[2832]: I0911 00:20:00.771683 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9341c4b560676b6356445bf5ac3702f6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9341c4b560676b6356445bf5ac3702f6\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:20:00.772206 kubelet[2832]: I0911 00:20:00.771703 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:20:00.772206 kubelet[2832]: I0911 00:20:00.771725 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:20:00.800973 kubelet[2832]: I0911 00:20:00.799209 2832 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:20:00.826936 kubelet[2832]: I0911 00:20:00.823626 2832 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 11 00:20:00.826936 kubelet[2832]: I0911 00:20:00.823829 2832 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 00:20:01.012269 kubelet[2832]: E0911 00:20:01.012174 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:01.028972 kubelet[2832]: E0911 00:20:01.028448 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:01.028972 kubelet[2832]: E0911 00:20:01.028792 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:01.530744 kubelet[2832]: I0911 00:20:01.530349 2832 apiserver.go:52] "Watching apiserver" Sep 11 00:20:01.571148 kubelet[2832]: I0911 00:20:01.571056 2832 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 00:20:01.638171 kubelet[2832]: E0911 00:20:01.638099 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:01.640477 kubelet[2832]: E0911 00:20:01.640396 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:01.677679 kubelet[2832]: E0911 00:20:01.677302 2832 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:20:01.677679 kubelet[2832]: E0911 00:20:01.677578 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:01.700611 kubelet[2832]: I0911 00:20:01.699377 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6993510280000002 podStartE2EDuration="1.699351028s" podCreationTimestamp="2025-09-11 00:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:20:01.699091921 +0000 UTC m=+1.366159196" watchObservedRunningTime="2025-09-11 00:20:01.699351028 +0000 UTC m=+1.366418293" Sep 11 00:20:02.295565 kubelet[2832]: I0911 00:20:02.293085 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.293026293 podStartE2EDuration="2.293026293s" podCreationTimestamp="2025-09-11 00:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:20:01.733990215 +0000 UTC m=+1.401057470" watchObservedRunningTime="2025-09-11 00:20:02.293026293 +0000 UTC m=+1.960093548" Sep 11 00:20:02.643886 kubelet[2832]: E0911 00:20:02.640541 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:02.945939 sudo[2855]: pam_unix(sudo:session): session closed for user root Sep 11 00:20:03.738576 kubelet[2832]: I0911 00:20:03.738501 2832 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:20:03.739383 containerd[1608]: time="2025-09-11T00:20:03.739326080Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
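The repeated dns.go "Nameserver limits exceeded" errors above come from the kubelet capping the nameserver list it hands to pods at three entries; the host's /etc/resolv.conf evidently listed more than the three that were applied (1.1.1.1, 1.0.0.1, 8.8.8.8). A minimal Python sketch of that truncation, using a hypothetical resolv.conf in which the fourth nameserver (8.8.4.4 here, not shown in the log) is the one that gets dropped:

    # Sketch of the kubelet's nameserver cap, not kubelet code: only the first
    # three nameservers from resolv.conf are applied, the rest are omitted.
    RESOLV_CONF = """\
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4
    """

    MAX_NAMESERVERS = 3  # limit enforced by the kubelet's DNS handling

    def applied_nameservers(resolv_conf):
        servers = [line.split()[1] for line in resolv_conf.splitlines()
                   if line.strip().startswith("nameserver")]
        if len(servers) > MAX_NAMESERVERS:
            print(f"Nameserver limits exceeded, omitting {len(servers) - MAX_NAMESERVERS}")
        return servers[:MAX_NAMESERVERS]

    print(applied_nameservers(RESOLV_CONF))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']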
Sep 11 00:20:03.739752 kubelet[2832]: I0911 00:20:03.739620 2832 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:20:04.491591 systemd[1]: Created slice kubepods-besteffort-pod48fef8c2_1c84_44be_a959_23e186d07e3a.slice - libcontainer container kubepods-besteffort-pod48fef8c2_1c84_44be_a959_23e186d07e3a.slice. Sep 11 00:20:04.538641 kubelet[2832]: I0911 00:20:04.537806 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cni-path\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.538641 kubelet[2832]: I0911 00:20:04.537895 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-host-proc-sys-net\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.538641 kubelet[2832]: I0911 00:20:04.537927 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-hostproc\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.538641 kubelet[2832]: I0911 00:20:04.537951 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48fef8c2-1c84-44be-a959-23e186d07e3a-xtables-lock\") pod \"kube-proxy-wj47g\" (UID: \"48fef8c2-1c84-44be-a959-23e186d07e3a\") " pod="kube-system/kube-proxy-wj47g" Sep 11 00:20:04.538641 kubelet[2832]: I0911 00:20:04.537972 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-cgroup\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.538641 kubelet[2832]: I0911 00:20:04.537998 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-etc-cni-netd\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.539025 kubelet[2832]: I0911 00:20:04.538022 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-config-path\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.539025 kubelet[2832]: I0911 00:20:04.538047 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/48fef8c2-1c84-44be-a959-23e186d07e3a-kube-proxy\") pod \"kube-proxy-wj47g\" (UID: \"48fef8c2-1c84-44be-a959-23e186d07e3a\") " pod="kube-system/kube-proxy-wj47g" Sep 11 00:20:04.539025 kubelet[2832]: I0911 00:20:04.538088 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6hj\" (UniqueName: 
\"kubernetes.io/projected/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-kube-api-access-4f6hj\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.539025 kubelet[2832]: I0911 00:20:04.538124 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-xtables-lock\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.539025 kubelet[2832]: I0911 00:20:04.538146 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48fef8c2-1c84-44be-a959-23e186d07e3a-lib-modules\") pod \"kube-proxy-wj47g\" (UID: \"48fef8c2-1c84-44be-a959-23e186d07e3a\") " pod="kube-system/kube-proxy-wj47g" Sep 11 00:20:04.539159 kubelet[2832]: I0911 00:20:04.538172 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dqnd\" (UniqueName: \"kubernetes.io/projected/48fef8c2-1c84-44be-a959-23e186d07e3a-kube-api-access-4dqnd\") pod \"kube-proxy-wj47g\" (UID: \"48fef8c2-1c84-44be-a959-23e186d07e3a\") " pod="kube-system/kube-proxy-wj47g" Sep 11 00:20:04.539159 kubelet[2832]: I0911 00:20:04.538208 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-host-proc-sys-kernel\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.539159 kubelet[2832]: I0911 00:20:04.538237 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-hubble-tls\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.539159 kubelet[2832]: I0911 00:20:04.538276 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-run\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.539159 kubelet[2832]: I0911 00:20:04.538302 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-bpf-maps\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.539159 kubelet[2832]: I0911 00:20:04.538325 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-lib-modules\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " pod="kube-system/cilium-59sj6" Sep 11 00:20:04.539322 kubelet[2832]: I0911 00:20:04.538368 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-clustermesh-secrets\") pod \"cilium-59sj6\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " 
pod="kube-system/cilium-59sj6" Sep 11 00:20:04.602717 systemd[1]: Created slice kubepods-burstable-pod9afb27d6_9ab7_45c3_a0a0_8dd014761ad2.slice - libcontainer container kubepods-burstable-pod9afb27d6_9ab7_45c3_a0a0_8dd014761ad2.slice. Sep 11 00:20:04.815080 systemd[1]: Created slice kubepods-besteffort-podee927f32_ee9a_4e76_9740_f0a984d3929f.slice - libcontainer container kubepods-besteffort-podee927f32_ee9a_4e76_9740_f0a984d3929f.slice. Sep 11 00:20:04.841931 kubelet[2832]: I0911 00:20:04.841310 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee927f32-ee9a-4e76-9740-f0a984d3929f-cilium-config-path\") pod \"cilium-operator-5d85765b45-pwvsq\" (UID: \"ee927f32-ee9a-4e76-9740-f0a984d3929f\") " pod="kube-system/cilium-operator-5d85765b45-pwvsq" Sep 11 00:20:04.841931 kubelet[2832]: I0911 00:20:04.841374 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxx6d\" (UniqueName: \"kubernetes.io/projected/ee927f32-ee9a-4e76-9740-f0a984d3929f-kube-api-access-sxx6d\") pod \"cilium-operator-5d85765b45-pwvsq\" (UID: \"ee927f32-ee9a-4e76-9740-f0a984d3929f\") " pod="kube-system/cilium-operator-5d85765b45-pwvsq" Sep 11 00:20:04.890614 kubelet[2832]: E0911 00:20:04.890551 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:04.895613 containerd[1608]: time="2025-09-11T00:20:04.892272224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj47g,Uid:48fef8c2-1c84-44be-a959-23e186d07e3a,Namespace:kube-system,Attempt:0,}" Sep 11 00:20:04.910834 kubelet[2832]: E0911 00:20:04.909116 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:04.911060 containerd[1608]: time="2025-09-11T00:20:04.910026439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-59sj6,Uid:9afb27d6-9ab7-45c3-a0a0-8dd014761ad2,Namespace:kube-system,Attempt:0,}" Sep 11 00:20:05.202639 containerd[1608]: time="2025-09-11T00:20:05.202136955Z" level=info msg="connecting to shim d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead" address="unix:///run/containerd/s/69b9858642cb6cb45027362a35ddbf0ce867813352d3ba96cdc28fdf7132916a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:20:05.216241 containerd[1608]: time="2025-09-11T00:20:05.216143035Z" level=info msg="connecting to shim ac3a57183303ad10560ecf0326a53a3b66f5f56d19728b50b12f883e2c524227" address="unix:///run/containerd/s/caf784a433c84e341b2a859a01639df98f6d58b22371763eac4fb16e6692414a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:20:05.274202 systemd[1]: Started cri-containerd-d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead.scope - libcontainer container d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead. Sep 11 00:20:05.303877 systemd[1]: Started cri-containerd-ac3a57183303ad10560ecf0326a53a3b66f5f56d19728b50b12f883e2c524227.scope - libcontainer container ac3a57183303ad10560ecf0326a53a3b66f5f56d19728b50b12f883e2c524227. 
Sep 11 00:20:05.389345 containerd[1608]: time="2025-09-11T00:20:05.389151786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj47g,Uid:48fef8c2-1c84-44be-a959-23e186d07e3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac3a57183303ad10560ecf0326a53a3b66f5f56d19728b50b12f883e2c524227\"" Sep 11 00:20:05.390740 kubelet[2832]: E0911 00:20:05.390699 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:05.395009 containerd[1608]: time="2025-09-11T00:20:05.394883742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-59sj6,Uid:9afb27d6-9ab7-45c3-a0a0-8dd014761ad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\"" Sep 11 00:20:05.398720 kubelet[2832]: E0911 00:20:05.398657 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:05.400457 containerd[1608]: time="2025-09-11T00:20:05.400402556Z" level=info msg="CreateContainer within sandbox \"ac3a57183303ad10560ecf0326a53a3b66f5f56d19728b50b12f883e2c524227\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 00:20:05.401239 containerd[1608]: time="2025-09-11T00:20:05.401154648Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 11 00:20:05.427563 kubelet[2832]: E0911 00:20:05.427491 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:05.430519 containerd[1608]: time="2025-09-11T00:20:05.430417897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pwvsq,Uid:ee927f32-ee9a-4e76-9740-f0a984d3929f,Namespace:kube-system,Attempt:0,}" Sep 11 00:20:05.431251 containerd[1608]: time="2025-09-11T00:20:05.431191929Z" level=info msg="Container 73dcac644cdb1fb299ca427ab94790d458c5ce83562be5614bd491b074127d67: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:05.474971 containerd[1608]: time="2025-09-11T00:20:05.474620417Z" level=info msg="CreateContainer within sandbox \"ac3a57183303ad10560ecf0326a53a3b66f5f56d19728b50b12f883e2c524227\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73dcac644cdb1fb299ca427ab94790d458c5ce83562be5614bd491b074127d67\"" Sep 11 00:20:05.475975 containerd[1608]: time="2025-09-11T00:20:05.475943891Z" level=info msg="StartContainer for \"73dcac644cdb1fb299ca427ab94790d458c5ce83562be5614bd491b074127d67\"" Sep 11 00:20:05.478874 containerd[1608]: time="2025-09-11T00:20:05.478506098Z" level=info msg="connecting to shim 73dcac644cdb1fb299ca427ab94790d458c5ce83562be5614bd491b074127d67" address="unix:///run/containerd/s/caf784a433c84e341b2a859a01639df98f6d58b22371763eac4fb16e6692414a" protocol=ttrpc version=3 Sep 11 00:20:05.573930 containerd[1608]: time="2025-09-11T00:20:05.569350737Z" level=info msg="connecting to shim cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b" address="unix:///run/containerd/s/ce8acc58a7cf5d9d0b59afc00854273afd2be02fe86c6f6f42d194e82c4ede86" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:20:05.575282 systemd[1]: Started cri-containerd-73dcac644cdb1fb299ca427ab94790d458c5ce83562be5614bd491b074127d67.scope - 
libcontainer container 73dcac644cdb1fb299ca427ab94790d458c5ce83562be5614bd491b074127d67. Sep 11 00:20:05.670496 systemd[1]: Started cri-containerd-cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b.scope - libcontainer container cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b. Sep 11 00:20:05.762328 containerd[1608]: time="2025-09-11T00:20:05.762113263Z" level=info msg="StartContainer for \"73dcac644cdb1fb299ca427ab94790d458c5ce83562be5614bd491b074127d67\" returns successfully" Sep 11 00:20:05.786594 containerd[1608]: time="2025-09-11T00:20:05.786528836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pwvsq,Uid:ee927f32-ee9a-4e76-9740-f0a984d3929f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b\"" Sep 11 00:20:05.792882 kubelet[2832]: E0911 00:20:05.792338 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:06.520611 kubelet[2832]: E0911 00:20:06.520541 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:06.686323 kubelet[2832]: E0911 00:20:06.686000 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:06.686323 kubelet[2832]: E0911 00:20:06.686164 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:07.330511 kubelet[2832]: E0911 00:20:07.330250 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:07.386152 kubelet[2832]: I0911 00:20:07.385066 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wj47g" podStartSLOduration=3.385038778 podStartE2EDuration="3.385038778s" podCreationTimestamp="2025-09-11 00:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:20:06.74630261 +0000 UTC m=+6.413369865" watchObservedRunningTime="2025-09-11 00:20:07.385038778 +0000 UTC m=+7.052106033" Sep 11 00:20:07.688892 kubelet[2832]: E0911 00:20:07.688377 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:07.689458 kubelet[2832]: E0911 00:20:07.689251 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:10.675989 kubelet[2832]: E0911 00:20:10.674970 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:10.696358 kubelet[2832]: E0911 00:20:10.696298 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:14.328896 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3063228057.mount: Deactivated successfully. Sep 11 00:20:22.720760 containerd[1608]: time="2025-09-11T00:20:22.719603674Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:20:22.723094 containerd[1608]: time="2025-09-11T00:20:22.722998714Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 11 00:20:22.724692 containerd[1608]: time="2025-09-11T00:20:22.724645126Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:20:22.729220 containerd[1608]: time="2025-09-11T00:20:22.729129411Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.327860007s" Sep 11 00:20:22.729220 containerd[1608]: time="2025-09-11T00:20:22.729195689Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 11 00:20:22.740185 containerd[1608]: time="2025-09-11T00:20:22.740100491Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 11 00:20:22.760934 containerd[1608]: time="2025-09-11T00:20:22.760103585Z" level=info msg="CreateContainer within sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 11 00:20:22.796565 containerd[1608]: time="2025-09-11T00:20:22.795359854Z" level=info msg="Container aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:22.815087 containerd[1608]: time="2025-09-11T00:20:22.815001580Z" level=info msg="CreateContainer within sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\"" Sep 11 00:20:22.819287 containerd[1608]: time="2025-09-11T00:20:22.817555926Z" level=info msg="StartContainer for \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\"" Sep 11 00:20:22.819287 containerd[1608]: time="2025-09-11T00:20:22.818875184Z" level=info msg="connecting to shim aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3" address="unix:///run/containerd/s/69b9858642cb6cb45027362a35ddbf0ce867813352d3ba96cdc28fdf7132916a" protocol=ttrpc version=3 Sep 11 00:20:22.974217 systemd[1]: Started cri-containerd-aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3.scope - libcontainer container aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3. 
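The Cilium agent image pull above reports roughly 166.7 MB read over about 17.3 s. A quick back-of-the-envelope throughput check using the figures from those entries (an approximation, since "bytes read" counts the data actually fetched from the registry, not the unpacked image size):

    # Rough average pull throughput for the cilium image, from the values above.
    bytes_read = 166_730_503      # "stop pulling image ... bytes read=166730503"
    pull_seconds = 17.327860007   # "... in 17.327860007s"

    mib_per_s = bytes_read / pull_seconds / (1024 * 1024)
    print(f"{mib_per_s:.1f} MiB/s")   # ~9.2 MiB/s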
Sep 11 00:20:23.094473 containerd[1608]: time="2025-09-11T00:20:23.094346385Z" level=info msg="StartContainer for \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\" returns successfully" Sep 11 00:20:23.108118 systemd[1]: cri-containerd-aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3.scope: Deactivated successfully. Sep 11 00:20:23.110623 containerd[1608]: time="2025-09-11T00:20:23.110544210Z" level=info msg="received exit event container_id:\"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\" id:\"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\" pid:3230 exited_at:{seconds:1757550023 nanos:109128317}" Sep 11 00:20:23.110828 containerd[1608]: time="2025-09-11T00:20:23.110794283Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\" id:\"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\" pid:3230 exited_at:{seconds:1757550023 nanos:109128317}" Sep 11 00:20:23.787576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3-rootfs.mount: Deactivated successfully. Sep 11 00:20:23.798532 kubelet[2832]: E0911 00:20:23.795517 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:24.513157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4017356712.mount: Deactivated successfully. Sep 11 00:20:24.800874 kubelet[2832]: E0911 00:20:24.800570 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:24.809507 containerd[1608]: time="2025-09-11T00:20:24.809441406Z" level=info msg="CreateContainer within sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 11 00:20:24.927171 containerd[1608]: time="2025-09-11T00:20:24.925167690Z" level=info msg="Container eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:24.970016 containerd[1608]: time="2025-09-11T00:20:24.969936164Z" level=info msg="CreateContainer within sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\"" Sep 11 00:20:24.972472 containerd[1608]: time="2025-09-11T00:20:24.972430296Z" level=info msg="StartContainer for \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\"" Sep 11 00:20:24.974046 containerd[1608]: time="2025-09-11T00:20:24.974013080Z" level=info msg="connecting to shim eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74" address="unix:///run/containerd/s/69b9858642cb6cb45027362a35ddbf0ce867813352d3ba96cdc28fdf7132916a" protocol=ttrpc version=3 Sep 11 00:20:25.038283 systemd[1]: Started cri-containerd-eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74.scope - libcontainer container eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74. 
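The TaskExit events record the container's exit time as epoch seconds plus nanoseconds. Converting the mount-cgroup exit event above back to wall-clock UTC confirms it lines up with the surrounding 00:20:23 log timestamps:

    # Convert the containerd exit event's epoch timestamp back to UTC.
    from datetime import datetime, timezone

    # exited_at:{seconds:1757550023 nanos:109128317} from the event above
    seconds, nanos = 1757550023, 109128317
    ts = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
    print(ts.isoformat())  # 2025-09-11T00:20:23.109128+00:00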
Sep 11 00:20:25.170269 containerd[1608]: time="2025-09-11T00:20:25.170101815Z" level=info msg="StartContainer for \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\" returns successfully" Sep 11 00:20:25.208742 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 11 00:20:25.215192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:20:25.218027 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:20:25.224215 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:20:25.234882 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 11 00:20:25.243840 systemd[1]: cri-containerd-eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74.scope: Deactivated successfully. Sep 11 00:20:25.252411 containerd[1608]: time="2025-09-11T00:20:25.251835528Z" level=info msg="received exit event container_id:\"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\" id:\"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\" pid:3286 exited_at:{seconds:1757550025 nanos:251310906}" Sep 11 00:20:25.252411 containerd[1608]: time="2025-09-11T00:20:25.252095960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\" id:\"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\" pid:3286 exited_at:{seconds:1757550025 nanos:251310906}" Sep 11 00:20:25.343834 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:20:25.822053 kubelet[2832]: E0911 00:20:25.821473 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:25.828644 containerd[1608]: time="2025-09-11T00:20:25.828539034Z" level=info msg="CreateContainer within sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 11 00:20:25.861252 containerd[1608]: time="2025-09-11T00:20:25.861173690Z" level=info msg="Container 9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:25.914891 containerd[1608]: time="2025-09-11T00:20:25.910791177Z" level=info msg="CreateContainer within sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\"" Sep 11 00:20:25.914891 containerd[1608]: time="2025-09-11T00:20:25.913648094Z" level=info msg="StartContainer for \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\"" Sep 11 00:20:25.917940 containerd[1608]: time="2025-09-11T00:20:25.917870745Z" level=info msg="connecting to shim 9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3" address="unix:///run/containerd/s/69b9858642cb6cb45027362a35ddbf0ce867813352d3ba96cdc28fdf7132916a" protocol=ttrpc version=3 Sep 11 00:20:25.921557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74-rootfs.mount: Deactivated successfully. Sep 11 00:20:25.984940 systemd[1]: Started cri-containerd-9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3.scope - libcontainer container 9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3. 
Sep 11 00:20:26.092123 systemd[1]: cri-containerd-9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3.scope: Deactivated successfully. Sep 11 00:20:26.102394 containerd[1608]: time="2025-09-11T00:20:26.101597777Z" level=info msg="received exit event container_id:\"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\" id:\"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\" pid:3337 exited_at:{seconds:1757550026 nanos:101204419}" Sep 11 00:20:26.102394 containerd[1608]: time="2025-09-11T00:20:26.101809274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\" id:\"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\" pid:3337 exited_at:{seconds:1757550026 nanos:101204419}" Sep 11 00:20:26.103913 containerd[1608]: time="2025-09-11T00:20:26.103859955Z" level=info msg="StartContainer for \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\" returns successfully" Sep 11 00:20:26.141628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3-rootfs.mount: Deactivated successfully. Sep 11 00:20:26.480044 containerd[1608]: time="2025-09-11T00:20:26.479664954Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:20:26.483794 containerd[1608]: time="2025-09-11T00:20:26.482796938Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 11 00:20:26.486285 containerd[1608]: time="2025-09-11T00:20:26.486054794Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:20:26.488065 containerd[1608]: time="2025-09-11T00:20:26.487984752Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.747508014s" Sep 11 00:20:26.488065 containerd[1608]: time="2025-09-11T00:20:26.488026943Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 11 00:20:26.492893 containerd[1608]: time="2025-09-11T00:20:26.492724533Z" level=info msg="CreateContainer within sandbox \"cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 11 00:20:26.547456 containerd[1608]: time="2025-09-11T00:20:26.547362792Z" level=info msg="Container e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:26.563559 containerd[1608]: time="2025-09-11T00:20:26.563466044Z" level=info msg="CreateContainer within sandbox \"cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} 
returns container id \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\"" Sep 11 00:20:26.566209 containerd[1608]: time="2025-09-11T00:20:26.565098930Z" level=info msg="StartContainer for \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\"" Sep 11 00:20:26.567506 containerd[1608]: time="2025-09-11T00:20:26.567214175Z" level=info msg="connecting to shim e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465" address="unix:///run/containerd/s/ce8acc58a7cf5d9d0b59afc00854273afd2be02fe86c6f6f42d194e82c4ede86" protocol=ttrpc version=3 Sep 11 00:20:26.624052 systemd[1]: Started cri-containerd-e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465.scope - libcontainer container e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465. Sep 11 00:20:26.765471 containerd[1608]: time="2025-09-11T00:20:26.764914059Z" level=info msg="StartContainer for \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" returns successfully" Sep 11 00:20:26.848326 kubelet[2832]: E0911 00:20:26.848266 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:26.866682 kubelet[2832]: E0911 00:20:26.865736 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:26.875505 containerd[1608]: time="2025-09-11T00:20:26.875368527Z" level=info msg="CreateContainer within sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 11 00:20:26.909582 containerd[1608]: time="2025-09-11T00:20:26.909525729Z" level=info msg="Container 193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:26.941362 containerd[1608]: time="2025-09-11T00:20:26.940196807Z" level=info msg="CreateContainer within sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\"" Sep 11 00:20:26.943619 containerd[1608]: time="2025-09-11T00:20:26.941811808Z" level=info msg="StartContainer for \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\"" Sep 11 00:20:26.943619 containerd[1608]: time="2025-09-11T00:20:26.943080462Z" level=info msg="connecting to shim 193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781" address="unix:///run/containerd/s/69b9858642cb6cb45027362a35ddbf0ce867813352d3ba96cdc28fdf7132916a" protocol=ttrpc version=3 Sep 11 00:20:27.017627 systemd[1]: Started cri-containerd-193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781.scope - libcontainer container 193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781. Sep 11 00:20:27.127655 systemd[1]: cri-containerd-193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781.scope: Deactivated successfully. 
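Both Cilium images are requested by tag plus digest (name:tag@sha256:...), which is presumably why the pulled-image records above show an empty repo tag but a populated repo digest: when a digest is given it takes precedence over the tag. A small sketch splitting such a reference into its parts (plain string handling for illustration, not containerd's own reference parser):

    # Split an OCI image reference of the form name[:tag][@digest] into parts.
    def split_reference(ref):
        name, _, digest = ref.partition("@")
        # A ':' after the last '/' separates the tag from the repository name.
        repo, sep, tag = name.rpartition(":")
        if sep and "/" not in tag:
            name = repo
        else:
            tag = ""
        return name, tag or None, digest or None

    ref = ("quay.io/cilium/operator-generic:v1.12.5"
           "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
    print(split_reference(ref))
    # ('quay.io/cilium/operator-generic', 'v1.12.5', 'sha256:b296eb7f...')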
Sep 11 00:20:27.132140 containerd[1608]: time="2025-09-11T00:20:27.132071679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\" id:\"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\" pid:3413 exited_at:{seconds:1757550027 nanos:130496788}" Sep 11 00:20:27.141504 containerd[1608]: time="2025-09-11T00:20:27.141298071Z" level=info msg="received exit event container_id:\"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\" id:\"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\" pid:3413 exited_at:{seconds:1757550027 nanos:130496788}" Sep 11 00:20:27.151237 containerd[1608]: time="2025-09-11T00:20:27.151072248Z" level=info msg="StartContainer for \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\" returns successfully" Sep 11 00:20:27.221898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781-rootfs.mount: Deactivated successfully. Sep 11 00:20:27.894875 kubelet[2832]: E0911 00:20:27.894760 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:27.897493 kubelet[2832]: E0911 00:20:27.895802 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:27.903957 containerd[1608]: time="2025-09-11T00:20:27.902350512Z" level=info msg="CreateContainer within sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 11 00:20:27.950946 kubelet[2832]: I0911 00:20:27.949549 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pwvsq" podStartSLOduration=3.252987549 podStartE2EDuration="23.949511983s" podCreationTimestamp="2025-09-11 00:20:04 +0000 UTC" firstStartedPulling="2025-09-11 00:20:05.793090879 +0000 UTC m=+5.460158134" lastFinishedPulling="2025-09-11 00:20:26.489615303 +0000 UTC m=+26.156682568" observedRunningTime="2025-09-11 00:20:26.973000141 +0000 UTC m=+26.640067417" watchObservedRunningTime="2025-09-11 00:20:27.949511983 +0000 UTC m=+27.616579238" Sep 11 00:20:27.976064 containerd[1608]: time="2025-09-11T00:20:27.975990638Z" level=info msg="Container c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:28.000248 containerd[1608]: time="2025-09-11T00:20:28.000148936Z" level=info msg="CreateContainer within sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\"" Sep 11 00:20:28.006782 containerd[1608]: time="2025-09-11T00:20:28.006429102Z" level=info msg="StartContainer for \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\"" Sep 11 00:20:28.009949 containerd[1608]: time="2025-09-11T00:20:28.009870422Z" level=info msg="connecting to shim c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923" address="unix:///run/containerd/s/69b9858642cb6cb45027362a35ddbf0ce867813352d3ba96cdc28fdf7132916a" protocol=ttrpc version=3 Sep 11 00:20:28.061249 systemd[1]: Started 
cri-containerd-c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923.scope - libcontainer container c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923. Sep 11 00:20:28.138060 containerd[1608]: time="2025-09-11T00:20:28.137837784Z" level=info msg="StartContainer for \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" returns successfully" Sep 11 00:20:28.307359 containerd[1608]: time="2025-09-11T00:20:28.307197747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" id:\"b9584d472a2ef0ffbcd521b79ce3d218fe66306d94fa080b477a016997dc7357\" pid:3483 exited_at:{seconds:1757550028 nanos:306523089}" Sep 11 00:20:28.389137 kubelet[2832]: I0911 00:20:28.389046 2832 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 11 00:20:28.458481 kubelet[2832]: I0911 00:20:28.458437 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs6d2\" (UniqueName: \"kubernetes.io/projected/3da6d88f-1461-4fa7-aa8c-8e3ce26ac902-kube-api-access-gs6d2\") pod \"coredns-7c65d6cfc9-r4dkj\" (UID: \"3da6d88f-1461-4fa7-aa8c-8e3ce26ac902\") " pod="kube-system/coredns-7c65d6cfc9-r4dkj" Sep 11 00:20:28.458781 kubelet[2832]: I0911 00:20:28.458680 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07fca56c-ffef-4328-b9f5-5d1f5eb9e932-config-volume\") pod \"coredns-7c65d6cfc9-9dkmv\" (UID: \"07fca56c-ffef-4328-b9f5-5d1f5eb9e932\") " pod="kube-system/coredns-7c65d6cfc9-9dkmv" Sep 11 00:20:28.458781 kubelet[2832]: I0911 00:20:28.458702 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3da6d88f-1461-4fa7-aa8c-8e3ce26ac902-config-volume\") pod \"coredns-7c65d6cfc9-r4dkj\" (UID: \"3da6d88f-1461-4fa7-aa8c-8e3ce26ac902\") " pod="kube-system/coredns-7c65d6cfc9-r4dkj" Sep 11 00:20:28.458781 kubelet[2832]: I0911 00:20:28.458720 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gw4q\" (UniqueName: \"kubernetes.io/projected/07fca56c-ffef-4328-b9f5-5d1f5eb9e932-kube-api-access-2gw4q\") pod \"coredns-7c65d6cfc9-9dkmv\" (UID: \"07fca56c-ffef-4328-b9f5-5d1f5eb9e932\") " pod="kube-system/coredns-7c65d6cfc9-9dkmv" Sep 11 00:20:28.465610 systemd[1]: Created slice kubepods-burstable-pod3da6d88f_1461_4fa7_aa8c_8e3ce26ac902.slice - libcontainer container kubepods-burstable-pod3da6d88f_1461_4fa7_aa8c_8e3ce26ac902.slice. Sep 11 00:20:28.511245 systemd[1]: Created slice kubepods-burstable-pod07fca56c_ffef_4328_b9f5_5d1f5eb9e932.slice - libcontainer container kubepods-burstable-pod07fca56c_ffef_4328_b9f5_5d1f5eb9e932.slice. 
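Before the cilium-agent container above could start, the pod's init containers ran strictly in sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state. The exited_at timestamps in the preceding TaskExit events give a rough per-step timeline; these are exit-to-exit gaps, so each figure also includes the next container's creation and start-up overhead:

    # Exit-to-exit gaps between the Cilium init containers, taken from the
    # exited_at {seconds, nanos} fields in the TaskExit events logged above.
    exits = {
        "mount-cgroup":            1757550023 + 109128317e-9,
        "apply-sysctl-overwrites": 1757550025 + 251310906e-9,
        "mount-bpf-fs":            1757550026 + 101204419e-9,
        "clean-cilium-state":      1757550027 + 130496788e-9,
    }

    steps = list(exits.items())
    for (prev, t0), (name, t1) in zip(steps, steps[1:]):
        print(f"{prev} -> {name}: {t1 - t0:.3f}s")
    # mount-cgroup -> apply-sysctl-overwrites: 2.142s
    # apply-sysctl-overwrites -> mount-bpf-fs: 0.850s
    # mount-bpf-fs -> clean-cilium-state: 1.029s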
Sep 11 00:20:28.778460 kubelet[2832]: E0911 00:20:28.778053 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:28.780526 containerd[1608]: time="2025-09-11T00:20:28.780467809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r4dkj,Uid:3da6d88f-1461-4fa7-aa8c-8e3ce26ac902,Namespace:kube-system,Attempt:0,}" Sep 11 00:20:28.819865 kubelet[2832]: E0911 00:20:28.819766 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:28.820520 containerd[1608]: time="2025-09-11T00:20:28.820482019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9dkmv,Uid:07fca56c-ffef-4328-b9f5-5d1f5eb9e932,Namespace:kube-system,Attempt:0,}" Sep 11 00:20:29.083000 kubelet[2832]: E0911 00:20:29.079804 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:29.147614 kubelet[2832]: I0911 00:20:29.146311 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-59sj6" podStartSLOduration=7.813067928 podStartE2EDuration="25.14627351s" podCreationTimestamp="2025-09-11 00:20:04 +0000 UTC" firstStartedPulling="2025-09-11 00:20:05.4004416 +0000 UTC m=+5.067508855" lastFinishedPulling="2025-09-11 00:20:22.733647182 +0000 UTC m=+22.400714437" observedRunningTime="2025-09-11 00:20:29.135403872 +0000 UTC m=+28.802471127" watchObservedRunningTime="2025-09-11 00:20:29.14627351 +0000 UTC m=+28.813340765" Sep 11 00:20:29.889673 containerd[1608]: time="2025-09-11T00:20:29.889575797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" id:\"924c20837f7ceb152b10112177605d2d87e9a998c126a00ce4fbe0973e64fff6\" pid:3587 exit_status:1 exited_at:{seconds:1757550029 nanos:888513144}" Sep 11 00:20:30.090275 kubelet[2832]: E0911 00:20:30.089698 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:31.099606 kubelet[2832]: E0911 00:20:31.096889 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:31.361536 systemd-networkd[1527]: cilium_host: Link UP Sep 11 00:20:31.362835 systemd-networkd[1527]: cilium_net: Link UP Sep 11 00:20:31.363751 systemd-networkd[1527]: cilium_host: Gained carrier Sep 11 00:20:31.369107 systemd-networkd[1527]: cilium_net: Gained carrier Sep 11 00:20:31.660457 systemd-networkd[1527]: cilium_vxlan: Link UP Sep 11 00:20:31.660476 systemd-networkd[1527]: cilium_vxlan: Gained carrier Sep 11 00:20:31.726212 systemd-networkd[1527]: cilium_host: Gained IPv6LL Sep 11 00:20:32.075911 kernel: NET: Registered PF_ALG protocol family Sep 11 00:20:32.153946 containerd[1608]: time="2025-09-11T00:20:32.153852477Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" id:\"3e21254076dfe6d62399e1bcc22f0a305990778e87b1faffe06e76e8edcbd223\" pid:3718 exit_status:1 exited_at:{seconds:1757550032 nanos:152407014}" Sep 11 
00:20:32.343622 systemd-networkd[1527]: cilium_net: Gained IPv6LL Sep 11 00:20:33.175474 systemd-networkd[1527]: cilium_vxlan: Gained IPv6LL Sep 11 00:20:33.620664 systemd-networkd[1527]: lxc_health: Link UP Sep 11 00:20:33.635445 systemd-networkd[1527]: lxc_health: Gained carrier Sep 11 00:20:33.934036 kernel: eth0: renamed from tmp10dad Sep 11 00:20:33.973439 systemd-networkd[1527]: lxc6cea90411983: Link UP Sep 11 00:20:33.981747 kernel: eth0: renamed from tmp83be7 Sep 11 00:20:33.983624 systemd-networkd[1527]: lxc544090cf1cef: Link UP Sep 11 00:20:33.984162 systemd-networkd[1527]: lxc6cea90411983: Gained carrier Sep 11 00:20:33.984427 systemd-networkd[1527]: lxc544090cf1cef: Gained carrier Sep 11 00:20:34.445117 containerd[1608]: time="2025-09-11T00:20:34.445036847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" id:\"22c6d7887f2a1317ec7aa3f189fdd4253656ec9b84e7b7ccfaed61e6b1512e31\" pid:4002 exited_at:{seconds:1757550034 nanos:443284969}" Sep 11 00:20:34.711366 systemd-networkd[1527]: lxc_health: Gained IPv6LL Sep 11 00:20:34.911877 kubelet[2832]: E0911 00:20:34.911766 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:35.114240 kubelet[2832]: E0911 00:20:35.113252 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:35.606128 systemd-networkd[1527]: lxc6cea90411983: Gained IPv6LL Sep 11 00:20:35.670148 systemd-networkd[1527]: lxc544090cf1cef: Gained IPv6LL Sep 11 00:20:36.117926 kubelet[2832]: E0911 00:20:36.117804 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:36.699039 containerd[1608]: time="2025-09-11T00:20:36.698922084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" id:\"8cfe0f5c6d7ec61a4857b93d197e3fee7282254d6a897361fc2a09a17562ba7e\" pid:4038 exited_at:{seconds:1757550036 nanos:698462323}" Sep 11 00:20:38.917167 containerd[1608]: time="2025-09-11T00:20:38.917097777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" id:\"13f00f2e1e57817e9cf4bff0105e7c3fde9ad4009661fcc6a4cd21fb1ad1f537\" pid:4066 exited_at:{seconds:1757550038 nanos:916659599}" Sep 11 00:20:39.907456 sudo[1827]: pam_unix(sudo:session): session closed for user root Sep 11 00:20:39.913652 sshd[1826]: Connection closed by 10.0.0.1 port 50064 Sep 11 00:20:39.914896 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Sep 11 00:20:39.923627 systemd[1]: sshd@8-10.0.0.39:22-10.0.0.1:50064.service: Deactivated successfully. Sep 11 00:20:39.928664 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:20:39.929896 systemd[1]: session-9.scope: Consumed 11.241s CPU time, 233M memory peak. Sep 11 00:20:39.932535 systemd-logind[1579]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:20:39.936962 systemd-logind[1579]: Removed session 9. 
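The pod_startup_latency_tracker entry for cilium-59sj6 above reports both an end-to-end startup duration and a shorter SLO duration; as the arithmetic below confirms, the SLO figure is the end-to-end time minus the image-pull window, which the metric excludes. Recomputed from the logged values, using the monotonic m=+ offsets that accompany the timestamps:

    # podStartSLOduration for cilium-59sj6, recomputed from the tracker entry above.
    e2e        = 25.14627351     # podStartE2EDuration, seconds
    first_pull = 5.067508855     # firstStartedPulling, m=+ offset
    last_pull  = 22.400714437    # lastFinishedPulling, m=+ offset

    slo = e2e - (last_pull - first_pull)
    print(f"{slo:.9f}")          # 7.813067928, matching podStartSLOduration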
Sep 11 00:20:40.236415 containerd[1608]: time="2025-09-11T00:20:40.235308035Z" level=info msg="connecting to shim 10dad510b98cc81dcbadf47a16b97f96c37d20b02d398ffe0f6278aea15c030a" address="unix:///run/containerd/s/cb37944cf5e02f98ed4b759a6b00fc3478fee66f490cd5499a15a8d05a9bab12" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:20:40.241232 containerd[1608]: time="2025-09-11T00:20:40.241138245Z" level=info msg="connecting to shim 83be70b14832aaf874ad046eaaa5ac5582ebaa8ac77575fd9149b7235eeccec3" address="unix:///run/containerd/s/f54036c8965f67c8d07e38829330379ea60a891c1cd387e1ba4444d3a9a2af36" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:20:40.311367 systemd[1]: Started cri-containerd-83be70b14832aaf874ad046eaaa5ac5582ebaa8ac77575fd9149b7235eeccec3.scope - libcontainer container 83be70b14832aaf874ad046eaaa5ac5582ebaa8ac77575fd9149b7235eeccec3. Sep 11 00:20:40.325931 systemd[1]: Started cri-containerd-10dad510b98cc81dcbadf47a16b97f96c37d20b02d398ffe0f6278aea15c030a.scope - libcontainer container 10dad510b98cc81dcbadf47a16b97f96c37d20b02d398ffe0f6278aea15c030a. Sep 11 00:20:40.347141 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:20:40.367402 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:20:40.415133 containerd[1608]: time="2025-09-11T00:20:40.414996969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r4dkj,Uid:3da6d88f-1461-4fa7-aa8c-8e3ce26ac902,Namespace:kube-system,Attempt:0,} returns sandbox id \"83be70b14832aaf874ad046eaaa5ac5582ebaa8ac77575fd9149b7235eeccec3\"" Sep 11 00:20:40.416924 kubelet[2832]: E0911 00:20:40.416875 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:40.424762 containerd[1608]: time="2025-09-11T00:20:40.424685211Z" level=info msg="CreateContainer within sandbox \"83be70b14832aaf874ad046eaaa5ac5582ebaa8ac77575fd9149b7235eeccec3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:20:40.445016 containerd[1608]: time="2025-09-11T00:20:40.444794556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9dkmv,Uid:07fca56c-ffef-4328-b9f5-5d1f5eb9e932,Namespace:kube-system,Attempt:0,} returns sandbox id \"10dad510b98cc81dcbadf47a16b97f96c37d20b02d398ffe0f6278aea15c030a\"" Sep 11 00:20:40.446248 kubelet[2832]: E0911 00:20:40.446199 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:40.450892 containerd[1608]: time="2025-09-11T00:20:40.450169467Z" level=info msg="CreateContainer within sandbox \"10dad510b98cc81dcbadf47a16b97f96c37d20b02d398ffe0f6278aea15c030a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:20:40.495145 containerd[1608]: time="2025-09-11T00:20:40.494535363Z" level=info msg="Container ed2fc867a29f9edc42aeb8dcf89e5a62320fe284e09f9007eba79d526d1afa0d: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:40.505639 containerd[1608]: time="2025-09-11T00:20:40.505468463Z" level=info msg="Container 19c2926fd1b8524e947997ae0f3f6efbd06147f309fb6ee93fd774887a632d91: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:40.510757 containerd[1608]: time="2025-09-11T00:20:40.510665845Z" level=info msg="CreateContainer 
within sandbox \"10dad510b98cc81dcbadf47a16b97f96c37d20b02d398ffe0f6278aea15c030a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed2fc867a29f9edc42aeb8dcf89e5a62320fe284e09f9007eba79d526d1afa0d\"" Sep 11 00:20:40.511949 containerd[1608]: time="2025-09-11T00:20:40.511885155Z" level=info msg="StartContainer for \"ed2fc867a29f9edc42aeb8dcf89e5a62320fe284e09f9007eba79d526d1afa0d\"" Sep 11 00:20:40.516117 containerd[1608]: time="2025-09-11T00:20:40.515407375Z" level=info msg="connecting to shim ed2fc867a29f9edc42aeb8dcf89e5a62320fe284e09f9007eba79d526d1afa0d" address="unix:///run/containerd/s/cb37944cf5e02f98ed4b759a6b00fc3478fee66f490cd5499a15a8d05a9bab12" protocol=ttrpc version=3 Sep 11 00:20:40.538103 containerd[1608]: time="2025-09-11T00:20:40.538045071Z" level=info msg="CreateContainer within sandbox \"83be70b14832aaf874ad046eaaa5ac5582ebaa8ac77575fd9149b7235eeccec3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19c2926fd1b8524e947997ae0f3f6efbd06147f309fb6ee93fd774887a632d91\"" Sep 11 00:20:40.543065 containerd[1608]: time="2025-09-11T00:20:40.540744048Z" level=info msg="StartContainer for \"19c2926fd1b8524e947997ae0f3f6efbd06147f309fb6ee93fd774887a632d91\"" Sep 11 00:20:40.543065 containerd[1608]: time="2025-09-11T00:20:40.542253162Z" level=info msg="connecting to shim 19c2926fd1b8524e947997ae0f3f6efbd06147f309fb6ee93fd774887a632d91" address="unix:///run/containerd/s/f54036c8965f67c8d07e38829330379ea60a891c1cd387e1ba4444d3a9a2af36" protocol=ttrpc version=3 Sep 11 00:20:40.573226 systemd[1]: Started cri-containerd-ed2fc867a29f9edc42aeb8dcf89e5a62320fe284e09f9007eba79d526d1afa0d.scope - libcontainer container ed2fc867a29f9edc42aeb8dcf89e5a62320fe284e09f9007eba79d526d1afa0d. Sep 11 00:20:40.611469 systemd[1]: Started cri-containerd-19c2926fd1b8524e947997ae0f3f6efbd06147f309fb6ee93fd774887a632d91.scope - libcontainer container 19c2926fd1b8524e947997ae0f3f6efbd06147f309fb6ee93fd774887a632d91. Sep 11 00:20:40.662816 containerd[1608]: time="2025-09-11T00:20:40.662528987Z" level=info msg="StartContainer for \"ed2fc867a29f9edc42aeb8dcf89e5a62320fe284e09f9007eba79d526d1afa0d\" returns successfully" Sep 11 00:20:40.697636 containerd[1608]: time="2025-09-11T00:20:40.697554536Z" level=info msg="StartContainer for \"19c2926fd1b8524e947997ae0f3f6efbd06147f309fb6ee93fd774887a632d91\" returns successfully" Sep 11 00:20:41.156815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3259435582.mount: Deactivated successfully. 
Sep 11 00:20:41.158718 kubelet[2832]: E0911 00:20:41.158653 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:41.172026 kubelet[2832]: E0911 00:20:41.170004 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:41.203266 kubelet[2832]: I0911 00:20:41.202047 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9dkmv" podStartSLOduration=37.202014492 podStartE2EDuration="37.202014492s" podCreationTimestamp="2025-09-11 00:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:20:41.199751961 +0000 UTC m=+40.866819256" watchObservedRunningTime="2025-09-11 00:20:41.202014492 +0000 UTC m=+40.869081767" Sep 11 00:20:41.295881 kubelet[2832]: I0911 00:20:41.295710 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-r4dkj" podStartSLOduration=37.295680809 podStartE2EDuration="37.295680809s" podCreationTimestamp="2025-09-11 00:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:20:41.254126563 +0000 UTC m=+40.921193848" watchObservedRunningTime="2025-09-11 00:20:41.295680809 +0000 UTC m=+40.962748064" Sep 11 00:20:42.174164 kubelet[2832]: E0911 00:20:42.173010 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:42.174164 kubelet[2832]: E0911 00:20:42.173658 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:43.184073 kubelet[2832]: E0911 00:20:43.180556 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:20:43.193298 kubelet[2832]: E0911 00:20:43.193015 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:21:10.215237 update_engine[1586]: I20250911 00:21:10.214250 1586 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 11 00:21:10.215237 update_engine[1586]: I20250911 00:21:10.214967 1586 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 11 00:21:10.217410 update_engine[1586]: I20250911 00:21:10.216826 1586 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 11 00:21:10.219772 update_engine[1586]: I20250911 00:21:10.218693 1586 omaha_request_params.cc:62] Current group set to beta Sep 11 00:21:10.219772 update_engine[1586]: I20250911 00:21:10.218864 1586 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 11 00:21:10.219772 update_engine[1586]: I20250911 00:21:10.218885 1586 update_attempter.cc:643] Scheduling an action processor start. 
Sep 11 00:21:10.219772 update_engine[1586]: I20250911 00:21:10.218909 1586 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 11 00:21:10.228664 update_engine[1586]: I20250911 00:21:10.228184 1586 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 11 00:21:10.234019 update_engine[1586]: I20250911 00:21:10.228745 1586 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 11 00:21:10.234019 update_engine[1586]: I20250911 00:21:10.228765 1586 omaha_request_action.cc:272] Request: Sep 11 00:21:10.234019 update_engine[1586]: Sep 11 00:21:10.234019 update_engine[1586]: Sep 11 00:21:10.234019 update_engine[1586]: Sep 11 00:21:10.234019 update_engine[1586]: Sep 11 00:21:10.234019 update_engine[1586]: Sep 11 00:21:10.234019 update_engine[1586]: Sep 11 00:21:10.234019 update_engine[1586]: Sep 11 00:21:10.234019 update_engine[1586]: Sep 11 00:21:10.234019 update_engine[1586]: I20250911 00:21:10.228777 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 11 00:21:10.235079 locksmithd[1617]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 11 00:21:10.238225 update_engine[1586]: I20250911 00:21:10.238147 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 11 00:21:10.238709 update_engine[1586]: I20250911 00:21:10.238659 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 11 00:21:10.252946 update_engine[1586]: E20250911 00:21:10.251672 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 11 00:21:10.252946 update_engine[1586]: I20250911 00:21:10.251832 1586 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 11 00:21:16.440876 kernel: hrtimer: interrupt took 9461890 ns Sep 11 00:21:20.199072 update_engine[1586]: I20250911 00:21:20.198915 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 11 00:21:20.199677 update_engine[1586]: I20250911 00:21:20.199326 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 11 00:21:20.199723 update_engine[1586]: I20250911 00:21:20.199687 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 11 00:21:20.210066 update_engine[1586]: E20250911 00:21:20.209798 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 11 00:21:20.210066 update_engine[1586]: I20250911 00:21:20.210073 1586 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 11 00:21:26.600804 kubelet[2832]: E0911 00:21:26.600157 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:21:28.595343 kubelet[2832]: E0911 00:21:28.595284 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:21:29.597885 kubelet[2832]: E0911 00:21:29.596033 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:21:30.199022 update_engine[1586]: I20250911 00:21:30.198086 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 11 00:21:30.199022 update_engine[1586]: I20250911 00:21:30.198480 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 11 00:21:30.199022 update_engine[1586]: I20250911 00:21:30.198835 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 11 00:21:30.213130 update_engine[1586]: E20250911 00:21:30.212882 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 11 00:21:30.213130 update_engine[1586]: I20250911 00:21:30.213040 1586 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 11 00:21:30.598544 kubelet[2832]: E0911 00:21:30.597370 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:21:31.599388 kubelet[2832]: E0911 00:21:31.598255 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:21:38.605245 kubelet[2832]: E0911 00:21:38.605081 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:21:40.191319 update_engine[1586]: I20250911 00:21:40.191009 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 11 00:21:40.192930 update_engine[1586]: I20250911 00:21:40.192296 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 11 00:21:40.192930 update_engine[1586]: I20250911 00:21:40.192715 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 11 00:21:40.201648 update_engine[1586]: E20250911 00:21:40.201519 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 11 00:21:40.201648 update_engine[1586]: I20250911 00:21:40.201642 1586 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 11 00:21:40.201648 update_engine[1586]: I20250911 00:21:40.201658 1586 omaha_request_action.cc:617] Omaha request response: Sep 11 00:21:40.202021 update_engine[1586]: E20250911 00:21:40.201817 1586 omaha_request_action.cc:636] Omaha request network transfer failed. 
Sep 11 00:21:40.202021 update_engine[1586]: I20250911 00:21:40.201918 1586 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 11 00:21:40.202021 update_engine[1586]: I20250911 00:21:40.201933 1586 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 11 00:21:40.202021 update_engine[1586]: I20250911 00:21:40.201942 1586 update_attempter.cc:306] Processing Done. Sep 11 00:21:40.202021 update_engine[1586]: E20250911 00:21:40.201967 1586 update_attempter.cc:619] Update failed. Sep 11 00:21:40.202021 update_engine[1586]: I20250911 00:21:40.201982 1586 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 11 00:21:40.202021 update_engine[1586]: I20250911 00:21:40.201992 1586 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 11 00:21:40.202021 update_engine[1586]: I20250911 00:21:40.202000 1586 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 11 00:21:40.202263 update_engine[1586]: I20250911 00:21:40.202098 1586 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 11 00:21:40.202263 update_engine[1586]: I20250911 00:21:40.202135 1586 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 11 00:21:40.202263 update_engine[1586]: I20250911 00:21:40.202147 1586 omaha_request_action.cc:272] Request: Sep 11 00:21:40.202263 update_engine[1586]: Sep 11 00:21:40.202263 update_engine[1586]: Sep 11 00:21:40.202263 update_engine[1586]: Sep 11 00:21:40.202263 update_engine[1586]: Sep 11 00:21:40.202263 update_engine[1586]: Sep 11 00:21:40.202263 update_engine[1586]: Sep 11 00:21:40.202263 update_engine[1586]: I20250911 00:21:40.202157 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 11 00:21:40.205776 update_engine[1586]: I20250911 00:21:40.202421 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 11 00:21:40.205776 update_engine[1586]: I20250911 00:21:40.203009 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 11 00:21:40.205899 locksmithd[1617]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 11 00:21:40.215484 update_engine[1586]: E20250911 00:21:40.213830 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 11 00:21:40.215484 update_engine[1586]: I20250911 00:21:40.213968 1586 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 11 00:21:40.215484 update_engine[1586]: I20250911 00:21:40.213976 1586 omaha_request_action.cc:617] Omaha request response: Sep 11 00:21:40.215484 update_engine[1586]: I20250911 00:21:40.213999 1586 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 11 00:21:40.215484 update_engine[1586]: I20250911 00:21:40.214008 1586 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 11 00:21:40.215484 update_engine[1586]: I20250911 00:21:40.214015 1586 update_attempter.cc:306] Processing Done. Sep 11 00:21:40.215484 update_engine[1586]: I20250911 00:21:40.214025 1586 update_attempter.cc:310] Error event sent. 
Sep 11 00:21:40.215484 update_engine[1586]: I20250911 00:21:40.214049 1586 update_check_scheduler.cc:74] Next update check in 40m2s Sep 11 00:21:40.215982 locksmithd[1617]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 11 00:21:47.595351 kubelet[2832]: E0911 00:21:47.595259 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:21:52.112400 systemd[1]: Started sshd@9-10.0.0.39:22-10.0.0.1:36144.service - OpenSSH per-connection server daemon (10.0.0.1:36144). Sep 11 00:21:52.314216 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 36144 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:21:52.329752 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:21:52.352984 systemd-logind[1579]: New session 10 of user core. Sep 11 00:21:52.366280 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 11 00:21:52.684764 sshd[4290]: Connection closed by 10.0.0.1 port 36144 Sep 11 00:21:52.684073 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Sep 11 00:21:52.695556 systemd[1]: sshd@9-10.0.0.39:22-10.0.0.1:36144.service: Deactivated successfully. Sep 11 00:21:52.700458 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 00:21:52.709385 systemd-logind[1579]: Session 10 logged out. Waiting for processes to exit. Sep 11 00:21:52.713537 systemd-logind[1579]: Removed session 10. Sep 11 00:21:57.711254 systemd[1]: Started sshd@10-10.0.0.39:22-10.0.0.1:36158.service - OpenSSH per-connection server daemon (10.0.0.1:36158). Sep 11 00:21:57.838676 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 36158 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:21:57.844365 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:21:57.881647 systemd-logind[1579]: New session 11 of user core. Sep 11 00:21:57.895324 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 11 00:21:58.290839 sshd[4308]: Connection closed by 10.0.0.1 port 36158 Sep 11 00:21:58.293096 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Sep 11 00:21:58.308024 systemd[1]: sshd@10-10.0.0.39:22-10.0.0.1:36158.service: Deactivated successfully. Sep 11 00:21:58.316498 systemd[1]: session-11.scope: Deactivated successfully. Sep 11 00:21:58.328754 systemd-logind[1579]: Session 11 logged out. Waiting for processes to exit. Sep 11 00:21:58.332782 systemd-logind[1579]: Removed session 11. Sep 11 00:22:03.324402 systemd[1]: Started sshd@11-10.0.0.39:22-10.0.0.1:41992.service - OpenSSH per-connection server daemon (10.0.0.1:41992). Sep 11 00:22:03.459157 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 41992 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:03.463105 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:03.489183 systemd-logind[1579]: New session 12 of user core. Sep 11 00:22:03.504619 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 11 00:22:03.742486 sshd[4326]: Connection closed by 10.0.0.1 port 41992 Sep 11 00:22:03.742165 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:03.758609 systemd[1]: sshd@11-10.0.0.39:22-10.0.0.1:41992.service: Deactivated successfully. 
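The update_engine trace above shows each Omaha check being posted to a host literally named "disabled", failing DNS resolution, retrying at roughly ten-second spacing in this trace, and finally scheduling the next check 40m2s out. A small sketch, assuming the same flattened journal text is available as journal.txt, that pulls out those fetch failures and the gaps between them:

```python
import re
from datetime import datetime

# update_engine stamps its own lines, e.g. "E20250911 00:21:10.251672 1586 libcurl_http_fetcher.cc:266] ..."
FETCH_ERROR = re.compile(
    r'[EI](\d{8} \d{2}:\d{2}:\d{2}\.\d{6}) \d+ libcurl_http_fetcher\.cc:266\] '
    r'Unable to get http response code: (.+?)(?= Sep \d{1,2} \d{2}:|$)',
    re.MULTILINE,
)

def retry_gaps(journal_text: str):
    """Yield (timestamp, error text, seconds since the previous failure)."""
    previous = None
    for match in FETCH_ERROR.finditer(journal_text):
        stamp = datetime.strptime(match.group(1), "%Y%m%d %H:%M:%S.%f")
        gap = (stamp - previous).total_seconds() if previous else 0.0
        previous = stamp
        yield stamp, match.group(2).strip(), gap

if __name__ == "__main__":
    with open("journal.txt") as fh:  # hypothetical path to this dump
        for stamp, error, gap in retry_gaps(fh.read()):
            print(f"{stamp:%H:%M:%S}  (+{gap:5.1f}s)  {error}")
```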
Sep 11 00:22:03.763744 systemd[1]: session-12.scope: Deactivated successfully. Sep 11 00:22:03.768050 systemd-logind[1579]: Session 12 logged out. Waiting for processes to exit. Sep 11 00:22:03.774757 systemd-logind[1579]: Removed session 12. Sep 11 00:22:05.596878 kubelet[2832]: E0911 00:22:05.596790 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:22:08.775368 systemd[1]: Started sshd@12-10.0.0.39:22-10.0.0.1:42008.service - OpenSSH per-connection server daemon (10.0.0.1:42008). Sep 11 00:22:08.917381 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 42008 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:08.918283 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:08.940017 systemd-logind[1579]: New session 13 of user core. Sep 11 00:22:08.957890 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 11 00:22:09.230015 sshd[4345]: Connection closed by 10.0.0.1 port 42008 Sep 11 00:22:09.232322 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:09.239702 systemd[1]: sshd@12-10.0.0.39:22-10.0.0.1:42008.service: Deactivated successfully. Sep 11 00:22:09.244196 systemd[1]: session-13.scope: Deactivated successfully. Sep 11 00:22:09.251676 systemd-logind[1579]: Session 13 logged out. Waiting for processes to exit. Sep 11 00:22:09.254608 systemd-logind[1579]: Removed session 13. Sep 11 00:22:14.257934 systemd[1]: Started sshd@13-10.0.0.39:22-10.0.0.1:52642.service - OpenSSH per-connection server daemon (10.0.0.1:52642). Sep 11 00:22:14.385957 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 52642 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:14.393195 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:14.419077 systemd-logind[1579]: New session 14 of user core. Sep 11 00:22:14.442944 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 11 00:22:14.749285 sshd[4361]: Connection closed by 10.0.0.1 port 52642 Sep 11 00:22:14.751506 sshd-session[4359]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:14.782612 systemd[1]: sshd@13-10.0.0.39:22-10.0.0.1:52642.service: Deactivated successfully. Sep 11 00:22:14.791535 systemd[1]: session-14.scope: Deactivated successfully. Sep 11 00:22:14.799081 systemd-logind[1579]: Session 14 logged out. Waiting for processes to exit. Sep 11 00:22:14.817635 systemd-logind[1579]: Removed session 14. Sep 11 00:22:19.777897 systemd[1]: Started sshd@14-10.0.0.39:22-10.0.0.1:52644.service - OpenSSH per-connection server daemon (10.0.0.1:52644). Sep 11 00:22:19.879093 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 52644 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:19.882971 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:19.899342 systemd-logind[1579]: New session 15 of user core. Sep 11 00:22:19.912957 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 11 00:22:20.107598 sshd[4377]: Connection closed by 10.0.0.1 port 52644 Sep 11 00:22:20.108738 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:20.117489 systemd[1]: sshd@14-10.0.0.39:22-10.0.0.1:52644.service: Deactivated successfully. 
Sep 11 00:22:20.123086 systemd[1]: session-15.scope: Deactivated successfully. Sep 11 00:22:20.124959 systemd-logind[1579]: Session 15 logged out. Waiting for processes to exit. Sep 11 00:22:20.131263 systemd-logind[1579]: Removed session 15. Sep 11 00:22:25.133650 systemd[1]: Started sshd@15-10.0.0.39:22-10.0.0.1:57718.service - OpenSSH per-connection server daemon (10.0.0.1:57718). Sep 11 00:22:25.219701 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 57718 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:25.223524 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:25.235062 systemd-logind[1579]: New session 16 of user core. Sep 11 00:22:25.253263 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 11 00:22:25.583081 sshd[4393]: Connection closed by 10.0.0.1 port 57718 Sep 11 00:22:25.582244 sshd-session[4391]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:25.597494 systemd[1]: sshd@15-10.0.0.39:22-10.0.0.1:57718.service: Deactivated successfully. Sep 11 00:22:25.606721 systemd[1]: session-16.scope: Deactivated successfully. Sep 11 00:22:25.612196 systemd-logind[1579]: Session 16 logged out. Waiting for processes to exit. Sep 11 00:22:25.622268 systemd-logind[1579]: Removed session 16. Sep 11 00:22:30.615793 systemd[1]: Started sshd@16-10.0.0.39:22-10.0.0.1:39970.service - OpenSSH per-connection server daemon (10.0.0.1:39970). Sep 11 00:22:30.764878 sshd[4407]: Accepted publickey for core from 10.0.0.1 port 39970 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:30.765622 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:30.781575 systemd-logind[1579]: New session 17 of user core. Sep 11 00:22:30.792776 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 11 00:22:30.994393 sshd[4409]: Connection closed by 10.0.0.1 port 39970 Sep 11 00:22:30.996172 sshd-session[4407]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:31.005072 systemd[1]: sshd@16-10.0.0.39:22-10.0.0.1:39970.service: Deactivated successfully. Sep 11 00:22:31.007829 systemd[1]: session-17.scope: Deactivated successfully. Sep 11 00:22:31.009724 systemd-logind[1579]: Session 17 logged out. Waiting for processes to exit. Sep 11 00:22:31.014450 systemd-logind[1579]: Removed session 17. Sep 11 00:22:36.038204 systemd[1]: Started sshd@17-10.0.0.39:22-10.0.0.1:39984.service - OpenSSH per-connection server daemon (10.0.0.1:39984). Sep 11 00:22:36.132089 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 39984 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:36.133152 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:36.160712 systemd-logind[1579]: New session 18 of user core. Sep 11 00:22:36.174240 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 11 00:22:36.406408 sshd[4427]: Connection closed by 10.0.0.1 port 39984 Sep 11 00:22:36.407780 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:36.431001 systemd[1]: sshd@17-10.0.0.39:22-10.0.0.1:39984.service: Deactivated successfully. Sep 11 00:22:36.433738 systemd[1]: session-18.scope: Deactivated successfully. Sep 11 00:22:36.439395 systemd-logind[1579]: Session 18 logged out. Waiting for processes to exit. 
Sep 11 00:22:36.443967 systemd[1]: Started sshd@18-10.0.0.39:22-10.0.0.1:39990.service - OpenSSH per-connection server daemon (10.0.0.1:39990). Sep 11 00:22:36.446479 systemd-logind[1579]: Removed session 18. Sep 11 00:22:36.545005 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 39990 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:36.547997 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:36.556213 systemd-logind[1579]: New session 19 of user core. Sep 11 00:22:36.572256 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 11 00:22:36.897583 sshd[4443]: Connection closed by 10.0.0.1 port 39990 Sep 11 00:22:36.895682 sshd-session[4441]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:36.913733 systemd[1]: sshd@18-10.0.0.39:22-10.0.0.1:39990.service: Deactivated successfully. Sep 11 00:22:36.920289 systemd[1]: session-19.scope: Deactivated successfully. Sep 11 00:22:36.924030 systemd-logind[1579]: Session 19 logged out. Waiting for processes to exit. Sep 11 00:22:36.930974 systemd[1]: Started sshd@19-10.0.0.39:22-10.0.0.1:40004.service - OpenSSH per-connection server daemon (10.0.0.1:40004). Sep 11 00:22:36.935462 systemd-logind[1579]: Removed session 19. Sep 11 00:22:37.076307 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 40004 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:37.081269 sshd-session[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:37.095822 systemd-logind[1579]: New session 20 of user core. Sep 11 00:22:37.112285 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 11 00:22:37.379078 sshd[4457]: Connection closed by 10.0.0.1 port 40004 Sep 11 00:22:37.379535 sshd-session[4455]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:37.389587 systemd[1]: sshd@19-10.0.0.39:22-10.0.0.1:40004.service: Deactivated successfully. Sep 11 00:22:37.395673 systemd[1]: session-20.scope: Deactivated successfully. Sep 11 00:22:37.399261 systemd-logind[1579]: Session 20 logged out. Waiting for processes to exit. Sep 11 00:22:37.401159 systemd-logind[1579]: Removed session 20. Sep 11 00:22:41.594834 kubelet[2832]: E0911 00:22:41.594752 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:22:42.403990 systemd[1]: Started sshd@20-10.0.0.39:22-10.0.0.1:60978.service - OpenSSH per-connection server daemon (10.0.0.1:60978). Sep 11 00:22:42.492319 sshd[4472]: Accepted publickey for core from 10.0.0.1 port 60978 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:42.495133 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:42.508646 systemd-logind[1579]: New session 21 of user core. Sep 11 00:22:42.519260 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 11 00:22:42.594882 kubelet[2832]: E0911 00:22:42.594790 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:22:42.671429 sshd[4474]: Connection closed by 10.0.0.1 port 60978 Sep 11 00:22:42.671748 sshd-session[4472]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:42.677888 systemd[1]: sshd@20-10.0.0.39:22-10.0.0.1:60978.service: Deactivated successfully. Sep 11 00:22:42.680818 systemd[1]: session-21.scope: Deactivated successfully. Sep 11 00:22:42.682035 systemd-logind[1579]: Session 21 logged out. Waiting for processes to exit. Sep 11 00:22:42.684363 systemd-logind[1579]: Removed session 21. Sep 11 00:22:43.596068 kubelet[2832]: E0911 00:22:43.595943 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:22:47.595109 kubelet[2832]: E0911 00:22:47.595037 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:22:47.686561 systemd[1]: Started sshd@21-10.0.0.39:22-10.0.0.1:60994.service - OpenSSH per-connection server daemon (10.0.0.1:60994). Sep 11 00:22:47.746147 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 60994 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:47.748257 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:47.753263 systemd-logind[1579]: New session 22 of user core. Sep 11 00:22:47.763065 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 11 00:22:47.881128 sshd[4490]: Connection closed by 10.0.0.1 port 60994 Sep 11 00:22:47.881487 sshd-session[4488]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:47.885161 systemd[1]: sshd@21-10.0.0.39:22-10.0.0.1:60994.service: Deactivated successfully. Sep 11 00:22:47.887905 systemd[1]: session-22.scope: Deactivated successfully. Sep 11 00:22:47.888877 systemd-logind[1579]: Session 22 logged out. Waiting for processes to exit. Sep 11 00:22:47.891452 systemd-logind[1579]: Removed session 22. Sep 11 00:22:50.595002 kubelet[2832]: E0911 00:22:50.594929 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:22:52.902188 systemd[1]: Started sshd@22-10.0.0.39:22-10.0.0.1:53686.service - OpenSSH per-connection server daemon (10.0.0.1:53686). Sep 11 00:22:52.959483 sshd[4503]: Accepted publickey for core from 10.0.0.1 port 53686 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:52.961166 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:52.965703 systemd-logind[1579]: New session 23 of user core. Sep 11 00:22:52.975977 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 11 00:22:53.091248 sshd[4505]: Connection closed by 10.0.0.1 port 53686 Sep 11 00:22:53.091594 sshd-session[4503]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:53.107736 systemd[1]: sshd@22-10.0.0.39:22-10.0.0.1:53686.service: Deactivated successfully. Sep 11 00:22:53.109574 systemd[1]: session-23.scope: Deactivated successfully. 
Sep 11 00:22:53.110321 systemd-logind[1579]: Session 23 logged out. Waiting for processes to exit. Sep 11 00:22:53.113247 systemd[1]: Started sshd@23-10.0.0.39:22-10.0.0.1:53690.service - OpenSSH per-connection server daemon (10.0.0.1:53690). Sep 11 00:22:53.114445 systemd-logind[1579]: Removed session 23. Sep 11 00:22:53.175062 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 53690 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:53.176644 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:53.181309 systemd-logind[1579]: New session 24 of user core. Sep 11 00:22:53.195995 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 11 00:22:53.465743 sshd[4520]: Connection closed by 10.0.0.1 port 53690 Sep 11 00:22:53.466096 sshd-session[4518]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:53.474729 systemd[1]: sshd@23-10.0.0.39:22-10.0.0.1:53690.service: Deactivated successfully. Sep 11 00:22:53.476797 systemd[1]: session-24.scope: Deactivated successfully. Sep 11 00:22:53.477566 systemd-logind[1579]: Session 24 logged out. Waiting for processes to exit. Sep 11 00:22:53.480896 systemd[1]: Started sshd@24-10.0.0.39:22-10.0.0.1:53702.service - OpenSSH per-connection server daemon (10.0.0.1:53702). Sep 11 00:22:53.481525 systemd-logind[1579]: Removed session 24. Sep 11 00:22:53.538939 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 53702 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:53.540476 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:53.544909 systemd-logind[1579]: New session 25 of user core. Sep 11 00:22:53.563966 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 11 00:22:54.595316 kubelet[2832]: E0911 00:22:54.595272 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:22:54.760833 sshd[4534]: Connection closed by 10.0.0.1 port 53702 Sep 11 00:22:54.761522 sshd-session[4531]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:54.771097 systemd[1]: sshd@24-10.0.0.39:22-10.0.0.1:53702.service: Deactivated successfully. Sep 11 00:22:54.774256 systemd[1]: session-25.scope: Deactivated successfully. Sep 11 00:22:54.775140 systemd-logind[1579]: Session 25 logged out. Waiting for processes to exit. Sep 11 00:22:54.780036 systemd[1]: Started sshd@25-10.0.0.39:22-10.0.0.1:53708.service - OpenSSH per-connection server daemon (10.0.0.1:53708). Sep 11 00:22:54.782109 systemd-logind[1579]: Removed session 25. Sep 11 00:22:54.833498 sshd[4552]: Accepted publickey for core from 10.0.0.1 port 53708 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:54.835031 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:54.839324 systemd-logind[1579]: New session 26 of user core. Sep 11 00:22:54.853109 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 11 00:22:55.079439 sshd[4554]: Connection closed by 10.0.0.1 port 53708 Sep 11 00:22:55.081578 sshd-session[4552]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:55.091017 systemd[1]: sshd@25-10.0.0.39:22-10.0.0.1:53708.service: Deactivated successfully. Sep 11 00:22:55.093019 systemd[1]: session-26.scope: Deactivated successfully. 
Sep 11 00:22:55.093988 systemd-logind[1579]: Session 26 logged out. Waiting for processes to exit. Sep 11 00:22:55.097313 systemd[1]: Started sshd@26-10.0.0.39:22-10.0.0.1:53712.service - OpenSSH per-connection server daemon (10.0.0.1:53712). Sep 11 00:22:55.098055 systemd-logind[1579]: Removed session 26. Sep 11 00:22:55.161436 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 53712 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:22:55.163683 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:22:55.168728 systemd-logind[1579]: New session 27 of user core. Sep 11 00:22:55.176108 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 11 00:22:55.287757 sshd[4569]: Connection closed by 10.0.0.1 port 53712 Sep 11 00:22:55.288100 sshd-session[4566]: pam_unix(sshd:session): session closed for user core Sep 11 00:22:55.292399 systemd[1]: sshd@26-10.0.0.39:22-10.0.0.1:53712.service: Deactivated successfully. Sep 11 00:22:55.294281 systemd[1]: session-27.scope: Deactivated successfully. Sep 11 00:22:55.295007 systemd-logind[1579]: Session 27 logged out. Waiting for processes to exit. Sep 11 00:22:55.296614 systemd-logind[1579]: Removed session 27. Sep 11 00:23:00.305879 systemd[1]: Started sshd@27-10.0.0.39:22-10.0.0.1:38252.service - OpenSSH per-connection server daemon (10.0.0.1:38252). Sep 11 00:23:00.368671 sshd[4583]: Accepted publickey for core from 10.0.0.1 port 38252 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:23:00.370671 sshd-session[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:23:00.375671 systemd-logind[1579]: New session 28 of user core. Sep 11 00:23:00.383985 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 11 00:23:00.493980 sshd[4585]: Connection closed by 10.0.0.1 port 38252 Sep 11 00:23:00.494308 sshd-session[4583]: pam_unix(sshd:session): session closed for user core Sep 11 00:23:00.498684 systemd[1]: sshd@27-10.0.0.39:22-10.0.0.1:38252.service: Deactivated successfully. Sep 11 00:23:00.501392 systemd[1]: session-28.scope: Deactivated successfully. Sep 11 00:23:00.502356 systemd-logind[1579]: Session 28 logged out. Waiting for processes to exit. Sep 11 00:23:00.504475 systemd-logind[1579]: Removed session 28. Sep 11 00:23:01.595080 kubelet[2832]: E0911 00:23:01.594976 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:05.507343 systemd[1]: Started sshd@28-10.0.0.39:22-10.0.0.1:38256.service - OpenSSH per-connection server daemon (10.0.0.1:38256). Sep 11 00:23:05.567467 sshd[4603]: Accepted publickey for core from 10.0.0.1 port 38256 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:23:05.569405 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:23:05.574613 systemd-logind[1579]: New session 29 of user core. Sep 11 00:23:05.582143 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 11 00:23:05.692482 sshd[4605]: Connection closed by 10.0.0.1 port 38256 Sep 11 00:23:05.692836 sshd-session[4603]: pam_unix(sshd:session): session closed for user core Sep 11 00:23:05.697372 systemd[1]: sshd@28-10.0.0.39:22-10.0.0.1:38256.service: Deactivated successfully. Sep 11 00:23:05.699524 systemd[1]: session-29.scope: Deactivated successfully. 
Sep 11 00:23:05.700363 systemd-logind[1579]: Session 29 logged out. Waiting for processes to exit. Sep 11 00:23:05.701633 systemd-logind[1579]: Removed session 29. Sep 11 00:23:10.706038 systemd[1]: Started sshd@29-10.0.0.39:22-10.0.0.1:52200.service - OpenSSH per-connection server daemon (10.0.0.1:52200). Sep 11 00:23:10.750407 sshd[4621]: Accepted publickey for core from 10.0.0.1 port 52200 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:23:10.752511 sshd-session[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:23:10.757719 systemd-logind[1579]: New session 30 of user core. Sep 11 00:23:10.764997 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 11 00:23:10.904256 sshd[4623]: Connection closed by 10.0.0.1 port 52200 Sep 11 00:23:10.904605 sshd-session[4621]: pam_unix(sshd:session): session closed for user core Sep 11 00:23:10.910004 systemd[1]: sshd@29-10.0.0.39:22-10.0.0.1:52200.service: Deactivated successfully. Sep 11 00:23:10.912109 systemd[1]: session-30.scope: Deactivated successfully. Sep 11 00:23:10.913132 systemd-logind[1579]: Session 30 logged out. Waiting for processes to exit. Sep 11 00:23:10.914424 systemd-logind[1579]: Removed session 30. Sep 11 00:23:15.921972 systemd[1]: Started sshd@30-10.0.0.39:22-10.0.0.1:52206.service - OpenSSH per-connection server daemon (10.0.0.1:52206). Sep 11 00:23:15.989250 sshd[4636]: Accepted publickey for core from 10.0.0.1 port 52206 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:23:15.991056 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:23:15.996688 systemd-logind[1579]: New session 31 of user core. Sep 11 00:23:16.006008 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 11 00:23:16.163574 sshd[4638]: Connection closed by 10.0.0.1 port 52206 Sep 11 00:23:16.163974 sshd-session[4636]: pam_unix(sshd:session): session closed for user core Sep 11 00:23:16.169479 systemd[1]: sshd@30-10.0.0.39:22-10.0.0.1:52206.service: Deactivated successfully. Sep 11 00:23:16.171755 systemd[1]: session-31.scope: Deactivated successfully. Sep 11 00:23:16.172648 systemd-logind[1579]: Session 31 logged out. Waiting for processes to exit. Sep 11 00:23:16.174413 systemd-logind[1579]: Removed session 31. Sep 11 00:23:16.595830 kubelet[2832]: E0911 00:23:16.595657 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:21.181153 systemd[1]: Started sshd@31-10.0.0.39:22-10.0.0.1:50684.service - OpenSSH per-connection server daemon (10.0.0.1:50684). Sep 11 00:23:21.240333 sshd[4651]: Accepted publickey for core from 10.0.0.1 port 50684 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:23:21.912084 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:23:21.917485 systemd-logind[1579]: New session 32 of user core. Sep 11 00:23:21.927982 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 11 00:23:22.115481 sshd[4653]: Connection closed by 10.0.0.1 port 50684 Sep 11 00:23:22.115969 sshd-session[4651]: pam_unix(sshd:session): session closed for user core Sep 11 00:23:22.129950 systemd[1]: sshd@31-10.0.0.39:22-10.0.0.1:50684.service: Deactivated successfully. Sep 11 00:23:22.132646 systemd[1]: session-32.scope: Deactivated successfully. 
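Much of the remaining journal is sshd/systemd-logind session churn: each connection opens a session-N.scope, and the matching "Session N logged out" / "Removed session N" entries follow when the client disconnects. A rough sketch (same assumed journal.txt input; the year is supplied by hand since these short timestamps omit it) that pairs the open and close events and reports per-session lifetimes:

```python
import re
from datetime import datetime

YEAR = 2025  # journald's short timestamps omit the year; taken from the boot banner
NEW = re.compile(r'(\w{3} {1,2}\d{1,2} \d{2}:\d{2}:\d{2})\.\d+ systemd-logind\[\d+\]: New session (\d+) of user (\w+)\.')
GONE = re.compile(r'(\w{3} {1,2}\d{1,2} \d{2}:\d{2}:\d{2})\.\d+ systemd-logind\[\d+\]: Removed session (\d+)\.')

def _ts(stamp: str) -> datetime:
    return datetime.strptime(f"{YEAR} {stamp}", "%Y %b %d %H:%M:%S")

def session_durations(journal_text: str) -> dict:
    """Map session number -> seconds between 'New session' and 'Removed session'."""
    opened = {m.group(2): _ts(m.group(1)) for m in NEW.finditer(journal_text)}
    closed = {m.group(2): _ts(m.group(1)) for m in GONE.finditer(journal_text)}
    return {n: (closed[n] - opened[n]).total_seconds() for n in opened if n in closed}

if __name__ == "__main__":
    with open("journal.txt") as fh:  # hypothetical path to this dump
        for session, seconds in sorted(session_durations(fh.read()).items(), key=lambda kv: int(kv[0])):
            print(f"session {session:>3}: open for {seconds:6.1f}s")
```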
Sep 11 00:23:22.133551 systemd-logind[1579]: Session 32 logged out. Waiting for processes to exit. Sep 11 00:23:22.138304 systemd[1]: Started sshd@32-10.0.0.39:22-10.0.0.1:50688.service - OpenSSH per-connection server daemon (10.0.0.1:50688). Sep 11 00:23:22.139113 systemd-logind[1579]: Removed session 32. Sep 11 00:23:22.191269 sshd[4666]: Accepted publickey for core from 10.0.0.1 port 50688 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:23:22.193519 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:23:22.198945 systemd-logind[1579]: New session 33 of user core. Sep 11 00:23:22.213097 systemd[1]: Started session-33.scope - Session 33 of User core. Sep 11 00:23:24.073827 containerd[1608]: time="2025-09-11T00:23:24.073746614Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:23:24.086507 containerd[1608]: time="2025-09-11T00:23:24.086405783Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" id:\"ce0bc72793d57e029b39b22e9a31e9e86d0b88a263bcaa3eb5a955a55c306d93\" pid:4688 exited_at:{seconds:1757550204 nanos:85805530}" Sep 11 00:23:24.089812 containerd[1608]: time="2025-09-11T00:23:24.089770167Z" level=info msg="StopContainer for \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" with timeout 2 (s)" Sep 11 00:23:24.096671 containerd[1608]: time="2025-09-11T00:23:24.096604067Z" level=info msg="Stop container \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" with signal terminated" Sep 11 00:23:24.103513 containerd[1608]: time="2025-09-11T00:23:24.103433949Z" level=info msg="StopContainer for \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" with timeout 30 (s)" Sep 11 00:23:24.104840 containerd[1608]: time="2025-09-11T00:23:24.104642279Z" level=info msg="Stop container \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" with signal terminated" Sep 11 00:23:24.109717 systemd-networkd[1527]: lxc_health: Link DOWN Sep 11 00:23:24.110821 systemd-networkd[1527]: lxc_health: Lost carrier Sep 11 00:23:24.123744 systemd[1]: cri-containerd-e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465.scope: Deactivated successfully. Sep 11 00:23:24.126331 containerd[1608]: time="2025-09-11T00:23:24.126290150Z" level=info msg="received exit event container_id:\"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" id:\"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" pid:3379 exited_at:{seconds:1757550204 nanos:125938070}" Sep 11 00:23:24.126682 containerd[1608]: time="2025-09-11T00:23:24.126647571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" id:\"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" pid:3379 exited_at:{seconds:1757550204 nanos:125938070}" Sep 11 00:23:24.136043 systemd[1]: cri-containerd-c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923.scope: Deactivated successfully. Sep 11 00:23:24.136438 systemd[1]: cri-containerd-c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923.scope: Consumed 11.846s CPU time, 139.9M memory peak, 736K read from disk, 13.3M written to disk. 
Sep 11 00:23:24.137399 containerd[1608]: time="2025-09-11T00:23:24.137350856Z" level=info msg="received exit event container_id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" pid:3451 exited_at:{seconds:1757550204 nanos:136946055}" Sep 11 00:23:24.137524 containerd[1608]: time="2025-09-11T00:23:24.137371455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" id:\"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" pid:3451 exited_at:{seconds:1757550204 nanos:136946055}" Sep 11 00:23:24.152869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465-rootfs.mount: Deactivated successfully. Sep 11 00:23:24.161731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923-rootfs.mount: Deactivated successfully. Sep 11 00:23:24.643550 containerd[1608]: time="2025-09-11T00:23:24.643496863Z" level=info msg="StopContainer for \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" returns successfully" Sep 11 00:23:24.644218 containerd[1608]: time="2025-09-11T00:23:24.644165255Z" level=info msg="StopPodSandbox for \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\"" Sep 11 00:23:24.644432 containerd[1608]: time="2025-09-11T00:23:24.644241140Z" level=info msg="Container to stop \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:23:24.644432 containerd[1608]: time="2025-09-11T00:23:24.644253743Z" level=info msg="Container to stop \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:23:24.644432 containerd[1608]: time="2025-09-11T00:23:24.644263752Z" level=info msg="Container to stop \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:23:24.644432 containerd[1608]: time="2025-09-11T00:23:24.644272439Z" level=info msg="Container to stop \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:23:24.644432 containerd[1608]: time="2025-09-11T00:23:24.644281867Z" level=info msg="Container to stop \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:23:24.652292 systemd[1]: cri-containerd-d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead.scope: Deactivated successfully. 
Sep 11 00:23:24.653177 containerd[1608]: time="2025-09-11T00:23:24.653132135Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" id:\"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" pid:2958 exit_status:137 exited_at:{seconds:1757550204 nanos:652732134}" Sep 11 00:23:24.677975 containerd[1608]: time="2025-09-11T00:23:24.677917769Z" level=info msg="StopContainer for \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" returns successfully" Sep 11 00:23:24.678591 containerd[1608]: time="2025-09-11T00:23:24.678564020Z" level=info msg="StopPodSandbox for \"cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b\"" Sep 11 00:23:24.678654 containerd[1608]: time="2025-09-11T00:23:24.678643701Z" level=info msg="Container to stop \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:23:24.686319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead-rootfs.mount: Deactivated successfully. Sep 11 00:23:24.687474 systemd[1]: cri-containerd-cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b.scope: Deactivated successfully. Sep 11 00:23:24.710652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b-rootfs.mount: Deactivated successfully. Sep 11 00:23:24.856752 containerd[1608]: time="2025-09-11T00:23:24.855686773Z" level=info msg="TearDown network for sandbox \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" successfully" Sep 11 00:23:24.856752 containerd[1608]: time="2025-09-11T00:23:24.855730416Z" level=info msg="StopPodSandbox for \"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" returns successfully" Sep 11 00:23:24.857338 containerd[1608]: time="2025-09-11T00:23:24.857299262Z" level=info msg="shim disconnected" id=cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b namespace=k8s.io Sep 11 00:23:24.857395 containerd[1608]: time="2025-09-11T00:23:24.857335361Z" level=warning msg="cleaning up after shim disconnected" id=cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b namespace=k8s.io Sep 11 00:23:24.859393 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead-shm.mount: Deactivated successfully. 
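The containerd TaskExit and shim events in this teardown report times as exited_at:{seconds:... nanos:...}, i.e. a Unix epoch value; seconds:1757550204 lines up with the 00:23:24 journal stamps around it. A tiny sketch of that conversion:

```python
from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int = 0) -> datetime:
    """Convert a containerd exited_at:{seconds:... nanos:...} pair to a UTC datetime."""
    return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

# e.g. the pod sandbox exit logged above: exited_at:{seconds:1757550204 nanos:652732134}
print(exited_at_to_utc(1757550204, 652732134).isoformat())
# -> 2025-09-11T00:23:24.652732+00:00, matching the surrounding journal timestamps
```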
Sep 11 00:23:24.876185 containerd[1608]: time="2025-09-11T00:23:24.857349879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 00:23:24.876396 containerd[1608]: time="2025-09-11T00:23:24.857463155Z" level=info msg="shim disconnected" id=d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead namespace=k8s.io Sep 11 00:23:24.876396 containerd[1608]: time="2025-09-11T00:23:24.860568446Z" level=info msg="received exit event sandbox_id:\"d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead\" exit_status:137 exited_at:{seconds:1757550204 nanos:652732134}" Sep 11 00:23:24.876529 containerd[1608]: time="2025-09-11T00:23:24.876360429Z" level=warning msg="cleaning up after shim disconnected" id=d41aff66245f3dbc5eec0a2923897b906ba18a0fde86bef73d527fdc7f7f6ead namespace=k8s.io Sep 11 00:23:24.876589 containerd[1608]: time="2025-09-11T00:23:24.876524151Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 00:23:24.906057 containerd[1608]: time="2025-09-11T00:23:24.905402165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b\" id:\"cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b\" pid:3035 exit_status:137 exited_at:{seconds:1757550204 nanos:687593729}" Sep 11 00:23:24.906057 containerd[1608]: time="2025-09-11T00:23:24.905570656Z" level=info msg="received exit event sandbox_id:\"cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b\" exit_status:137 exited_at:{seconds:1757550204 nanos:687593729}" Sep 11 00:23:24.906057 containerd[1608]: time="2025-09-11T00:23:24.905830550Z" level=info msg="TearDown network for sandbox \"cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b\" successfully" Sep 11 00:23:24.906057 containerd[1608]: time="2025-09-11T00:23:24.905932434Z" level=info msg="StopPodSandbox for \"cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b\" returns successfully" Sep 11 00:23:24.910675 kubelet[2832]: I0911 00:23:24.910630 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-cgroup\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912294 kubelet[2832]: I0911 00:23:24.910936 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:23:24.912503 kubelet[2832]: I0911 00:23:24.912257 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-host-proc-sys-net\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912503 kubelet[2832]: I0911 00:23:24.912299 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:23:24.912503 kubelet[2832]: I0911 00:23:24.912411 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-clustermesh-secrets\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912503 kubelet[2832]: I0911 00:23:24.912437 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cni-path\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912503 kubelet[2832]: I0911 00:23:24.912457 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-hostproc\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912503 kubelet[2832]: I0911 00:23:24.912475 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-bpf-maps\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912715 kubelet[2832]: I0911 00:23:24.912518 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-hubble-tls\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912715 kubelet[2832]: I0911 00:23:24.912539 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-lib-modules\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912715 kubelet[2832]: I0911 00:23:24.912569 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-run\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912715 kubelet[2832]: I0911 00:23:24.912591 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-etc-cni-netd\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912715 kubelet[2832]: I0911 00:23:24.912623 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f6hj\" (UniqueName: \"kubernetes.io/projected/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-kube-api-access-4f6hj\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912715 kubelet[2832]: I0911 00:23:24.912646 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-config-path\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 
00:23:24.912933 kubelet[2832]: I0911 00:23:24.912671 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-xtables-lock\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912933 kubelet[2832]: I0911 00:23:24.912690 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-host-proc-sys-kernel\") pod \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\" (UID: \"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2\") " Sep 11 00:23:24.912933 kubelet[2832]: I0911 00:23:24.912740 2832 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:24.912933 kubelet[2832]: I0911 00:23:24.912755 2832 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:24.912933 kubelet[2832]: I0911 00:23:24.912790 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:23:24.912933 kubelet[2832]: I0911 00:23:24.912811 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:23:24.913132 kubelet[2832]: I0911 00:23:24.912822 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:23:24.913132 kubelet[2832]: I0911 00:23:24.912830 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:23:24.913132 kubelet[2832]: I0911 00:23:24.912872 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cni-path" (OuterVolumeSpecName: "cni-path") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:23:24.913132 kubelet[2832]: I0911 00:23:24.912887 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-hostproc" (OuterVolumeSpecName: "hostproc") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:23:24.913132 kubelet[2832]: I0911 00:23:24.912900 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:23:24.915632 kubelet[2832]: I0911 00:23:24.915561 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:23:24.918314 kubelet[2832]: I0911 00:23:24.918268 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 11 00:23:24.918534 kubelet[2832]: I0911 00:23:24.918504 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 00:23:24.918944 kubelet[2832]: I0911 00:23:24.918900 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-kube-api-access-4f6hj" (OuterVolumeSpecName: "kube-api-access-4f6hj") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "kube-api-access-4f6hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 00:23:24.920270 kubelet[2832]: I0911 00:23:24.920239 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" (UID: "9afb27d6-9ab7-45c3-a0a0-8dd014761ad2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 11 00:23:25.013558 kubelet[2832]: I0911 00:23:25.013496 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxx6d\" (UniqueName: \"kubernetes.io/projected/ee927f32-ee9a-4e76-9740-f0a984d3929f-kube-api-access-sxx6d\") pod \"ee927f32-ee9a-4e76-9740-f0a984d3929f\" (UID: \"ee927f32-ee9a-4e76-9740-f0a984d3929f\") " Sep 11 00:23:25.013558 kubelet[2832]: I0911 00:23:25.013562 2832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee927f32-ee9a-4e76-9740-f0a984d3929f-cilium-config-path\") pod \"ee927f32-ee9a-4e76-9740-f0a984d3929f\" (UID: \"ee927f32-ee9a-4e76-9740-f0a984d3929f\") " Sep 11 00:23:25.013816 kubelet[2832]: I0911 00:23:25.013631 2832 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.013816 kubelet[2832]: I0911 00:23:25.013650 2832 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.013816 kubelet[2832]: I0911 00:23:25.013666 2832 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.013816 kubelet[2832]: I0911 00:23:25.013676 2832 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.013816 kubelet[2832]: I0911 00:23:25.013688 2832 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.013816 kubelet[2832]: I0911 00:23:25.013699 2832 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.013816 kubelet[2832]: I0911 00:23:25.013708 2832 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.013816 kubelet[2832]: I0911 00:23:25.013717 2832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f6hj\" (UniqueName: \"kubernetes.io/projected/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-kube-api-access-4f6hj\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.014265 kubelet[2832]: I0911 00:23:25.013728 2832 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.014265 kubelet[2832]: I0911 00:23:25.013737 2832 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.014265 kubelet[2832]: I0911 00:23:25.013747 2832 reconciler_common.go:293] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.014265 kubelet[2832]: I0911 00:23:25.013756 2832 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.017037 kubelet[2832]: I0911 00:23:25.016997 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee927f32-ee9a-4e76-9740-f0a984d3929f-kube-api-access-sxx6d" (OuterVolumeSpecName: "kube-api-access-sxx6d") pod "ee927f32-ee9a-4e76-9740-f0a984d3929f" (UID: "ee927f32-ee9a-4e76-9740-f0a984d3929f"). InnerVolumeSpecName "kube-api-access-sxx6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 00:23:25.017389 kubelet[2832]: I0911 00:23:25.017360 2832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee927f32-ee9a-4e76-9740-f0a984d3929f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ee927f32-ee9a-4e76-9740-f0a984d3929f" (UID: "ee927f32-ee9a-4e76-9740-f0a984d3929f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 11 00:23:25.075453 kubelet[2832]: I0911 00:23:25.075331 2832 scope.go:117] "RemoveContainer" containerID="c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923" Sep 11 00:23:25.078006 containerd[1608]: time="2025-09-11T00:23:25.077961834Z" level=info msg="RemoveContainer for \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\"" Sep 11 00:23:25.083148 systemd[1]: Removed slice kubepods-burstable-pod9afb27d6_9ab7_45c3_a0a0_8dd014761ad2.slice - libcontainer container kubepods-burstable-pod9afb27d6_9ab7_45c3_a0a0_8dd014761ad2.slice. Sep 11 00:23:25.083301 systemd[1]: kubepods-burstable-pod9afb27d6_9ab7_45c3_a0a0_8dd014761ad2.slice: Consumed 12.044s CPU time, 140.2M memory peak, 756K read from disk, 13.3M written to disk. Sep 11 00:23:25.087370 systemd[1]: Removed slice kubepods-besteffort-podee927f32_ee9a_4e76_9740_f0a984d3929f.slice - libcontainer container kubepods-besteffort-podee927f32_ee9a_4e76_9740_f0a984d3929f.slice. 
Sep 11 00:23:25.098440 containerd[1608]: time="2025-09-11T00:23:25.098388384Z" level=info msg="RemoveContainer for \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" returns successfully" Sep 11 00:23:25.098718 kubelet[2832]: I0911 00:23:25.098682 2832 scope.go:117] "RemoveContainer" containerID="193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781" Sep 11 00:23:25.100143 containerd[1608]: time="2025-09-11T00:23:25.100118788Z" level=info msg="RemoveContainer for \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\"" Sep 11 00:23:25.123398 kubelet[2832]: I0911 00:23:25.123336 2832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxx6d\" (UniqueName: \"kubernetes.io/projected/ee927f32-ee9a-4e76-9740-f0a984d3929f-kube-api-access-sxx6d\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.123398 kubelet[2832]: I0911 00:23:25.123371 2832 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee927f32-ee9a-4e76-9740-f0a984d3929f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 11 00:23:25.125811 containerd[1608]: time="2025-09-11T00:23:25.125767779Z" level=info msg="RemoveContainer for \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\" returns successfully" Sep 11 00:23:25.126051 kubelet[2832]: I0911 00:23:25.126023 2832 scope.go:117] "RemoveContainer" containerID="9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3" Sep 11 00:23:25.128411 containerd[1608]: time="2025-09-11T00:23:25.128378909Z" level=info msg="RemoveContainer for \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\"" Sep 11 00:23:25.152388 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cdac2bb80bf2ee8d194fffb8837682dc816faa402371330263a0170d93e1744b-shm.mount: Deactivated successfully. Sep 11 00:23:25.152546 systemd[1]: var-lib-kubelet-pods-ee927f32\x2dee9a\x2d4e76\x2d9740\x2df0a984d3929f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsxx6d.mount: Deactivated successfully. Sep 11 00:23:25.152650 systemd[1]: var-lib-kubelet-pods-9afb27d6\x2d9ab7\x2d45c3\x2da0a0\x2d8dd014761ad2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4f6hj.mount: Deactivated successfully. Sep 11 00:23:25.152755 systemd[1]: var-lib-kubelet-pods-9afb27d6\x2d9ab7\x2d45c3\x2da0a0\x2d8dd014761ad2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 11 00:23:25.152869 systemd[1]: var-lib-kubelet-pods-9afb27d6\x2d9ab7\x2d45c3\x2da0a0\x2d8dd014761ad2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
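
The `var-lib-kubelet-pods-...\x2d...` mount units deactivated above are systemd-escaped mount paths: "/" in the path becomes "-" in the unit name, and bytes such as "-" or "~" inside a path component are escaped as `\xNN`. A simplified Go decoder that recovers the mount point from a unit name (systemd's real escaping handles more corner cases):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeMountUnit reverses the systemd mount-unit naming seen in the log.
// Simplified sketch; not systemd's full unescaping algorithm.
func unescapeMountUnit(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		c := name[i]
		if c == '-' {
			b.WriteByte('/')
			continue
		}
		if c == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3 // skip the rest of the \xNN escape
				continue
			}
		}
		b.WriteByte(c)
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeMountUnit(
		`var-lib-kubelet-pods-9afb27d6\x2d9ab7\x2d45c3\x2da0a0\x2d8dd014761ad2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount`))
	// /var/lib/kubelet/pods/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2/volumes/kubernetes.io~secret/clustermesh-secrets
}
```
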
Sep 11 00:23:25.166337 containerd[1608]: time="2025-09-11T00:23:25.166208806Z" level=info msg="RemoveContainer for \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\" returns successfully" Sep 11 00:23:25.166520 kubelet[2832]: I0911 00:23:25.166484 2832 scope.go:117] "RemoveContainer" containerID="eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74" Sep 11 00:23:25.168523 containerd[1608]: time="2025-09-11T00:23:25.168490769Z" level=info msg="RemoveContainer for \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\"" Sep 11 00:23:25.244407 containerd[1608]: time="2025-09-11T00:23:25.244363968Z" level=info msg="RemoveContainer for \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\" returns successfully" Sep 11 00:23:25.244641 kubelet[2832]: I0911 00:23:25.244611 2832 scope.go:117] "RemoveContainer" containerID="aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3" Sep 11 00:23:25.246583 containerd[1608]: time="2025-09-11T00:23:25.246066860Z" level=info msg="RemoveContainer for \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\"" Sep 11 00:23:25.309872 containerd[1608]: time="2025-09-11T00:23:25.309818747Z" level=info msg="RemoveContainer for \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\" returns successfully" Sep 11 00:23:25.310145 kubelet[2832]: I0911 00:23:25.310116 2832 scope.go:117] "RemoveContainer" containerID="c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923" Sep 11 00:23:25.318101 containerd[1608]: time="2025-09-11T00:23:25.310397188Z" level=error msg="ContainerStatus for \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\": not found" Sep 11 00:23:25.318740 kubelet[2832]: E0911 00:23:25.318701 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\": not found" containerID="c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923" Sep 11 00:23:25.318854 kubelet[2832]: I0911 00:23:25.318748 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923"} err="failed to get container status \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4f0a9b22521f5f8c46ce7427b95d77a67dc91b6981dbafb0f7cb91a28573923\": not found" Sep 11 00:23:25.318884 kubelet[2832]: I0911 00:23:25.318836 2832 scope.go:117] "RemoveContainer" containerID="193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781" Sep 11 00:23:25.319062 containerd[1608]: time="2025-09-11T00:23:25.319028627Z" level=error msg="ContainerStatus for \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\": not found" Sep 11 00:23:25.319170 kubelet[2832]: E0911 00:23:25.319144 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\": 
not found" containerID="193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781" Sep 11 00:23:25.319252 kubelet[2832]: I0911 00:23:25.319169 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781"} err="failed to get container status \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\": rpc error: code = NotFound desc = an error occurred when try to find container \"193354094c32dcb8491dd26d20631a01ed433865da5f08adef8d0a8d95519781\": not found" Sep 11 00:23:25.319252 kubelet[2832]: I0911 00:23:25.319188 2832 scope.go:117] "RemoveContainer" containerID="9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3" Sep 11 00:23:25.319372 containerd[1608]: time="2025-09-11T00:23:25.319329870Z" level=error msg="ContainerStatus for \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\": not found" Sep 11 00:23:25.319455 kubelet[2832]: E0911 00:23:25.319428 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\": not found" containerID="9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3" Sep 11 00:23:25.319493 kubelet[2832]: I0911 00:23:25.319454 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3"} err="failed to get container status \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"9183d0f1ac33b5fe46a6d6a6c4e60d21fb395ec9e867b9bdb9e9c50d70cc14f3\": not found" Sep 11 00:23:25.319493 kubelet[2832]: I0911 00:23:25.319472 2832 scope.go:117] "RemoveContainer" containerID="eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74" Sep 11 00:23:25.319628 containerd[1608]: time="2025-09-11T00:23:25.319599474Z" level=error msg="ContainerStatus for \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\": not found" Sep 11 00:23:25.319773 kubelet[2832]: E0911 00:23:25.319686 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\": not found" containerID="eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74" Sep 11 00:23:25.319773 kubelet[2832]: I0911 00:23:25.319705 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74"} err="failed to get container status \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\": rpc error: code = NotFound desc = an error occurred when try to find container \"eec7c4b66bb283a8cacd0ae08d108b4e9d83b70302e1770d718529daa336ec74\": not found" Sep 11 00:23:25.319773 kubelet[2832]: I0911 00:23:25.319720 2832 scope.go:117] "RemoveContainer" 
containerID="aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3" Sep 11 00:23:25.319934 containerd[1608]: time="2025-09-11T00:23:25.319892531Z" level=error msg="ContainerStatus for \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\": not found" Sep 11 00:23:25.320056 kubelet[2832]: E0911 00:23:25.320032 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\": not found" containerID="aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3" Sep 11 00:23:25.320096 kubelet[2832]: I0911 00:23:25.320058 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3"} err="failed to get container status \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa0b415dd1c72cc51def9b4be4714850cca114645865e0bc9f1790e3e84729f3\": not found" Sep 11 00:23:25.320096 kubelet[2832]: I0911 00:23:25.320074 2832 scope.go:117] "RemoveContainer" containerID="e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465" Sep 11 00:23:25.321446 containerd[1608]: time="2025-09-11T00:23:25.321420199Z" level=info msg="RemoveContainer for \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\"" Sep 11 00:23:25.380027 containerd[1608]: time="2025-09-11T00:23:25.379965916Z" level=info msg="RemoveContainer for \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" returns successfully" Sep 11 00:23:25.380333 kubelet[2832]: I0911 00:23:25.380284 2832 scope.go:117] "RemoveContainer" containerID="e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465" Sep 11 00:23:25.380657 containerd[1608]: time="2025-09-11T00:23:25.380612778Z" level=error msg="ContainerStatus for \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\": not found" Sep 11 00:23:25.380943 kubelet[2832]: E0911 00:23:25.380778 2832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\": not found" containerID="e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465" Sep 11 00:23:25.380943 kubelet[2832]: I0911 00:23:25.380818 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465"} err="failed to get container status \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7e6222a499df74e7c10f6daac5caa21eb06450d6a9c3e72920f177b0e4fb465\": not found" Sep 11 00:23:25.612103 sshd[4668]: Connection closed by 10.0.0.1 port 50688 Sep 11 00:23:25.612473 sshd-session[4666]: pam_unix(sshd:session): session closed for user core Sep 11 00:23:25.629206 systemd[1]: sshd@32-10.0.0.39:22-10.0.0.1:50688.service: Deactivated successfully. 
Sep 11 00:23:25.631108 systemd[1]: session-33.scope: Deactivated successfully. Sep 11 00:23:25.632045 systemd-logind[1579]: Session 33 logged out. Waiting for processes to exit. Sep 11 00:23:25.635572 systemd[1]: Started sshd@33-10.0.0.39:22-10.0.0.1:50696.service - OpenSSH per-connection server daemon (10.0.0.1:50696). Sep 11 00:23:25.636609 systemd-logind[1579]: Removed session 33. Sep 11 00:23:25.691714 sshd[4823]: Accepted publickey for core from 10.0.0.1 port 50696 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:23:25.693637 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:23:25.698451 systemd-logind[1579]: New session 34 of user core. Sep 11 00:23:25.708040 systemd[1]: Started session-34.scope - Session 34 of User core. Sep 11 00:23:25.793285 kubelet[2832]: E0911 00:23:25.793203 2832 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 11 00:23:26.291660 sshd[4825]: Connection closed by 10.0.0.1 port 50696 Sep 11 00:23:26.292072 sshd-session[4823]: pam_unix(sshd:session): session closed for user core Sep 11 00:23:26.304168 systemd[1]: sshd@33-10.0.0.39:22-10.0.0.1:50696.service: Deactivated successfully. Sep 11 00:23:26.306911 systemd[1]: session-34.scope: Deactivated successfully. Sep 11 00:23:26.308702 systemd-logind[1579]: Session 34 logged out. Waiting for processes to exit. Sep 11 00:23:26.319200 systemd[1]: Started sshd@34-10.0.0.39:22-10.0.0.1:50702.service - OpenSSH per-connection server daemon (10.0.0.1:50702). Sep 11 00:23:26.321556 kubelet[2832]: E0911 00:23:26.321508 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" containerName="cilium-agent" Sep 11 00:23:26.321556 kubelet[2832]: E0911 00:23:26.321539 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" containerName="mount-cgroup" Sep 11 00:23:26.321556 kubelet[2832]: E0911 00:23:26.321546 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" containerName="mount-bpf-fs" Sep 11 00:23:26.321556 kubelet[2832]: E0911 00:23:26.321552 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee927f32-ee9a-4e76-9740-f0a984d3929f" containerName="cilium-operator" Sep 11 00:23:26.321556 kubelet[2832]: E0911 00:23:26.321559 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" containerName="apply-sysctl-overwrites" Sep 11 00:23:26.321556 kubelet[2832]: E0911 00:23:26.321566 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" containerName="clean-cilium-state" Sep 11 00:23:26.322341 kubelet[2832]: I0911 00:23:26.321593 2832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" containerName="cilium-agent" Sep 11 00:23:26.322341 kubelet[2832]: I0911 00:23:26.321600 2832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee927f32-ee9a-4e76-9740-f0a984d3929f" containerName="cilium-operator" Sep 11 00:23:26.322997 systemd-logind[1579]: Removed session 34. Sep 11 00:23:26.337396 systemd[1]: Created slice kubepods-burstable-pod8f7e022c_c7fe_4cc2_936d_4079af01779c.slice - libcontainer container kubepods-burstable-pod8f7e022c_c7fe_4cc2_936d_4079af01779c.slice. 
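
The `RemoveStaleState` entries above show the kubelet's CPU and memory managers dropping any per-container state recorded for the two deleted pods before the replacement Cilium pod is admitted. A toy sketch of that reconciliation idea, assuming a simple map keyed by pod UID and container name (not the kubelet's real data structures):

```go
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops recorded per-container state for pods the kubelet no
// longer knows about, mirroring the "RemoveStaleState" log lines. The map
// layout is an assumption for illustration only.
func removeStaleState(state map[key]string, activePods map[string]bool) {
	for k := range state {
		if !activePods[k.podUID] {
			fmt.Printf("removing stale state for pod %s container %s\n",
				k.podUID, k.container)
			delete(state, k)
		}
	}
}

func main() {
	state := map[key]string{
		{"9afb27d6-9ab7-45c3-a0a0-8dd014761ad2", "cilium-agent"}:    "cpuset 2-3",
		{"ee927f32-ee9a-4e76-9740-f0a984d3929f", "cilium-operator"}: "cpuset 1",
	}
	active := map[string]bool{} // both pods have just been deleted
	removeStaleState(state, active)
}
```
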
Sep 11 00:23:26.376907 sshd[4837]: Accepted publickey for core from 10.0.0.1 port 50702 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:23:26.378783 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:23:26.383912 systemd-logind[1579]: New session 35 of user core. Sep 11 00:23:26.395172 systemd[1]: Started session-35.scope - Session 35 of User core. Sep 11 00:23:26.432225 kubelet[2832]: I0911 00:23:26.432144 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f7e022c-c7fe-4cc2-936d-4079af01779c-bpf-maps\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432225 kubelet[2832]: I0911 00:23:26.432192 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f7e022c-c7fe-4cc2-936d-4079af01779c-cni-path\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432225 kubelet[2832]: I0911 00:23:26.432222 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f7e022c-c7fe-4cc2-936d-4079af01779c-hubble-tls\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432225 kubelet[2832]: I0911 00:23:26.432238 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f7e022c-c7fe-4cc2-936d-4079af01779c-clustermesh-secrets\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432495 kubelet[2832]: I0911 00:23:26.432256 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f7e022c-c7fe-4cc2-936d-4079af01779c-etc-cni-netd\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432495 kubelet[2832]: I0911 00:23:26.432308 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f7e022c-c7fe-4cc2-936d-4079af01779c-cilium-config-path\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432495 kubelet[2832]: I0911 00:23:26.432332 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8f7e022c-c7fe-4cc2-936d-4079af01779c-cilium-ipsec-secrets\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432495 kubelet[2832]: I0911 00:23:26.432371 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f7e022c-c7fe-4cc2-936d-4079af01779c-cilium-cgroup\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432495 kubelet[2832]: I0911 00:23:26.432401 2832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f7e022c-c7fe-4cc2-936d-4079af01779c-cilium-run\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432495 kubelet[2832]: I0911 00:23:26.432414 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f7e022c-c7fe-4cc2-936d-4079af01779c-hostproc\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432645 kubelet[2832]: I0911 00:23:26.432429 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f7e022c-c7fe-4cc2-936d-4079af01779c-xtables-lock\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432645 kubelet[2832]: I0911 00:23:26.432446 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f7e022c-c7fe-4cc2-936d-4079af01779c-host-proc-sys-net\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432645 kubelet[2832]: I0911 00:23:26.432461 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f7e022c-c7fe-4cc2-936d-4079af01779c-host-proc-sys-kernel\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432645 kubelet[2832]: I0911 00:23:26.432485 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbnm\" (UniqueName: \"kubernetes.io/projected/8f7e022c-c7fe-4cc2-936d-4079af01779c-kube-api-access-qqbnm\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.432645 kubelet[2832]: I0911 00:23:26.432504 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f7e022c-c7fe-4cc2-936d-4079af01779c-lib-modules\") pod \"cilium-mghhn\" (UID: \"8f7e022c-c7fe-4cc2-936d-4079af01779c\") " pod="kube-system/cilium-mghhn" Sep 11 00:23:26.447745 sshd[4839]: Connection closed by 10.0.0.1 port 50702 Sep 11 00:23:26.448262 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Sep 11 00:23:26.458045 systemd[1]: sshd@34-10.0.0.39:22-10.0.0.1:50702.service: Deactivated successfully. Sep 11 00:23:26.460163 systemd[1]: session-35.scope: Deactivated successfully. Sep 11 00:23:26.461100 systemd-logind[1579]: Session 35 logged out. Waiting for processes to exit. Sep 11 00:23:26.464834 systemd[1]: Started sshd@35-10.0.0.39:22-10.0.0.1:50714.service - OpenSSH per-connection server daemon (10.0.0.1:50714). Sep 11 00:23:26.465713 systemd-logind[1579]: Removed session 35. Sep 11 00:23:26.527933 sshd[4846]: Accepted publickey for core from 10.0.0.1 port 50714 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:23:26.529777 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:23:26.538440 systemd-logind[1579]: New session 36 of user core. 
Sep 11 00:23:26.559032 systemd[1]: Started session-36.scope - Session 36 of User core. Sep 11 00:23:26.597457 kubelet[2832]: I0911 00:23:26.597401 2832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9afb27d6-9ab7-45c3-a0a0-8dd014761ad2" path="/var/lib/kubelet/pods/9afb27d6-9ab7-45c3-a0a0-8dd014761ad2/volumes" Sep 11 00:23:26.598624 kubelet[2832]: I0911 00:23:26.598589 2832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee927f32-ee9a-4e76-9740-f0a984d3929f" path="/var/lib/kubelet/pods/ee927f32-ee9a-4e76-9740-f0a984d3929f/volumes" Sep 11 00:23:26.643496 kubelet[2832]: E0911 00:23:26.643442 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:26.644153 containerd[1608]: time="2025-09-11T00:23:26.644113154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mghhn,Uid:8f7e022c-c7fe-4cc2-936d-4079af01779c,Namespace:kube-system,Attempt:0,}" Sep 11 00:23:26.676277 containerd[1608]: time="2025-09-11T00:23:26.675763981Z" level=info msg="connecting to shim 2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1" address="unix:///run/containerd/s/97942859548a6958b0de1da5c46caaeab7eaaa2dc607eebfe2a74971bdb277fa" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:23:26.706065 systemd[1]: Started cri-containerd-2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1.scope - libcontainer container 2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1. Sep 11 00:23:26.739832 containerd[1608]: time="2025-09-11T00:23:26.739773754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mghhn,Uid:8f7e022c-c7fe-4cc2-936d-4079af01779c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\"" Sep 11 00:23:26.740818 kubelet[2832]: E0911 00:23:26.740788 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:26.743621 containerd[1608]: time="2025-09-11T00:23:26.743556803Z" level=info msg="CreateContainer within sandbox \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 11 00:23:26.754741 containerd[1608]: time="2025-09-11T00:23:26.754688830Z" level=info msg="Container 58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:23:26.764618 containerd[1608]: time="2025-09-11T00:23:26.764547273Z" level=info msg="CreateContainer within sandbox \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338\"" Sep 11 00:23:26.765476 containerd[1608]: time="2025-09-11T00:23:26.765441785Z" level=info msg="StartContainer for \"58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338\"" Sep 11 00:23:26.766466 containerd[1608]: time="2025-09-11T00:23:26.766433202Z" level=info msg="connecting to shim 58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338" address="unix:///run/containerd/s/97942859548a6958b0de1da5c46caaeab7eaaa2dc607eebfe2a74971bdb277fa" protocol=ttrpc version=3 Sep 11 00:23:26.801006 systemd[1]: Started 
cri-containerd-58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338.scope - libcontainer container 58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338. Sep 11 00:23:26.835931 containerd[1608]: time="2025-09-11T00:23:26.835788698Z" level=info msg="StartContainer for \"58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338\" returns successfully" Sep 11 00:23:26.847812 systemd[1]: cri-containerd-58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338.scope: Deactivated successfully. Sep 11 00:23:26.849453 containerd[1608]: time="2025-09-11T00:23:26.849383331Z" level=info msg="received exit event container_id:\"58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338\" id:\"58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338\" pid:4917 exited_at:{seconds:1757550206 nanos:848836611}" Sep 11 00:23:26.849797 containerd[1608]: time="2025-09-11T00:23:26.849511014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338\" id:\"58965614b52c8e72c0640e4a783e8623923d0b63604df22fa2cbf15f1db55338\" pid:4917 exited_at:{seconds:1757550206 nanos:848836611}" Sep 11 00:23:27.086707 kubelet[2832]: E0911 00:23:27.086547 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:27.089322 containerd[1608]: time="2025-09-11T00:23:27.089276762Z" level=info msg="CreateContainer within sandbox \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 11 00:23:27.099114 containerd[1608]: time="2025-09-11T00:23:27.099034591Z" level=info msg="Container 9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:23:27.109374 containerd[1608]: time="2025-09-11T00:23:27.109300055Z" level=info msg="CreateContainer within sandbox \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9\"" Sep 11 00:23:27.110029 containerd[1608]: time="2025-09-11T00:23:27.109993174Z" level=info msg="StartContainer for \"9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9\"" Sep 11 00:23:27.111294 containerd[1608]: time="2025-09-11T00:23:27.111255907Z" level=info msg="connecting to shim 9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9" address="unix:///run/containerd/s/97942859548a6958b0de1da5c46caaeab7eaaa2dc607eebfe2a74971bdb277fa" protocol=ttrpc version=3 Sep 11 00:23:27.143237 systemd[1]: Started cri-containerd-9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9.scope - libcontainer container 9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9. Sep 11 00:23:27.186187 containerd[1608]: time="2025-09-11T00:23:27.186130158Z" level=info msg="StartContainer for \"9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9\" returns successfully" Sep 11 00:23:27.192081 systemd[1]: cri-containerd-9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9.scope: Deactivated successfully. 
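
The recurring `Nameserver limits exceeded` warnings reflect the classic resolv.conf limit of three nameservers: when the node lists more, the kubelet keeps only the first three ("the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") and logs the warning. A small sketch of that truncation rule with a plain resolv.conf parser (not the kubelet's implementation; the fourth address below is invented just to trigger the warning):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolver MAXNS limit

// applyNameserverLimit keeps at most maxNameservers "nameserver" entries,
// mirroring the truncation the kubelet warns about in the log above.
func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```
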
Sep 11 00:23:27.192727 containerd[1608]: time="2025-09-11T00:23:27.192686175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9\" id:\"9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9\" pid:4963 exited_at:{seconds:1757550207 nanos:192344524}" Sep 11 00:23:27.192908 containerd[1608]: time="2025-09-11T00:23:27.192704169Z" level=info msg="received exit event container_id:\"9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9\" id:\"9490abe6ec6c6298df34ca87ff132beb355f134869a37db91c07fbe8d0c41ef9\" pid:4963 exited_at:{seconds:1757550207 nanos:192344524}" Sep 11 00:23:28.091155 kubelet[2832]: E0911 00:23:28.091111 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:28.093049 containerd[1608]: time="2025-09-11T00:23:28.093011465Z" level=info msg="CreateContainer within sandbox \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 11 00:23:28.104705 containerd[1608]: time="2025-09-11T00:23:28.104642575Z" level=info msg="Container f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:23:28.118532 containerd[1608]: time="2025-09-11T00:23:28.118471949Z" level=info msg="CreateContainer within sandbox \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1\"" Sep 11 00:23:28.119266 containerd[1608]: time="2025-09-11T00:23:28.119215213Z" level=info msg="StartContainer for \"f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1\"" Sep 11 00:23:28.121057 containerd[1608]: time="2025-09-11T00:23:28.121023995Z" level=info msg="connecting to shim f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1" address="unix:///run/containerd/s/97942859548a6958b0de1da5c46caaeab7eaaa2dc607eebfe2a74971bdb277fa" protocol=ttrpc version=3 Sep 11 00:23:28.148837 systemd[1]: Started cri-containerd-f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1.scope - libcontainer container f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1. Sep 11 00:23:28.197109 containerd[1608]: time="2025-09-11T00:23:28.197053496Z" level=info msg="StartContainer for \"f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1\" returns successfully" Sep 11 00:23:28.202235 systemd[1]: cri-containerd-f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1.scope: Deactivated successfully. 
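
The `exited_at:{seconds:... nanos:...}` fields in the TaskExit events are a protobuf timestamp: Unix seconds plus nanoseconds. Converting them back to wall-clock time reproduces the journal timestamp of the same event; a tiny sketch using the apply-sysctl-overwrites exit above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the 9490abe... (apply-sysctl-overwrites) exit event.
	exitedAt := time.Unix(1757550207, 192344524).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-09-11T00:23:27.192344524Z
}
```
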
Sep 11 00:23:28.203915 containerd[1608]: time="2025-09-11T00:23:28.203697929Z" level=info msg="received exit event container_id:\"f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1\" id:\"f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1\" pid:5008 exited_at:{seconds:1757550208 nanos:203067399}" Sep 11 00:23:28.204047 containerd[1608]: time="2025-09-11T00:23:28.203984634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1\" id:\"f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1\" pid:5008 exited_at:{seconds:1757550208 nanos:203067399}" Sep 11 00:23:28.229822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8dc56dbc14051cab697ada9fcca585e2ab7720607165e261d9c895dec7f56c1-rootfs.mount: Deactivated successfully. Sep 11 00:23:29.096989 kubelet[2832]: E0911 00:23:29.096883 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:29.099091 containerd[1608]: time="2025-09-11T00:23:29.099020999Z" level=info msg="CreateContainer within sandbox \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 11 00:23:29.120931 containerd[1608]: time="2025-09-11T00:23:29.120869414Z" level=info msg="Container c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:23:29.130353 containerd[1608]: time="2025-09-11T00:23:29.130289205Z" level=info msg="CreateContainer within sandbox \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b\"" Sep 11 00:23:29.130959 containerd[1608]: time="2025-09-11T00:23:29.130878396Z" level=info msg="StartContainer for \"c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b\"" Sep 11 00:23:29.131647 containerd[1608]: time="2025-09-11T00:23:29.131620198Z" level=info msg="connecting to shim c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b" address="unix:///run/containerd/s/97942859548a6958b0de1da5c46caaeab7eaaa2dc607eebfe2a74971bdb277fa" protocol=ttrpc version=3 Sep 11 00:23:29.155024 systemd[1]: Started cri-containerd-c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b.scope - libcontainer container c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b. Sep 11 00:23:29.196823 systemd[1]: cri-containerd-c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b.scope: Deactivated successfully. 
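
Each container runs in a transient systemd scope named `cri-containerd-<container-id>.scope`, parented under the pod slice shown earlier, which is how systemd can report per-pod CPU and memory consumption when the slice is removed. A sketch assembling a full cgroup path from the names visible in the log; the `/sys/fs/cgroup/kubepods.slice/...` layout is an assumption about the systemd cgroup driver's hierarchy, not something the log states:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// containerCgroupPath sketches how the slice and scope names in the log map
// to a cgroup v2 path under the systemd driver. The exact on-disk layout is
// an assumption for illustration.
func containerCgroupPath(qosClass, podUID, containerID string) string {
	podSlice := fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
	scope := fmt.Sprintf("cri-containerd-%s.scope", containerID)
	return filepath.Join("/sys/fs/cgroup", "kubepods.slice",
		fmt.Sprintf("kubepods-%s.slice", qosClass), podSlice, scope)
}

func main() {
	fmt.Println(containerCgroupPath(
		"burstable",
		"8f7e022c-c7fe-4cc2-936d-4079af01779c", // UID of cilium-mghhn
		"c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b"))
}
```
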
Sep 11 00:23:29.260256 containerd[1608]: time="2025-09-11T00:23:29.197425441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b\" id:\"c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b\" pid:5046 exited_at:{seconds:1757550209 nanos:197082228}" Sep 11 00:23:29.359684 containerd[1608]: time="2025-09-11T00:23:29.359454248Z" level=info msg="received exit event container_id:\"c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b\" id:\"c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b\" pid:5046 exited_at:{seconds:1757550209 nanos:197082228}" Sep 11 00:23:29.370612 containerd[1608]: time="2025-09-11T00:23:29.370557362Z" level=info msg="StartContainer for \"c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b\" returns successfully" Sep 11 00:23:29.386706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3a5e99a625d30a313b857422c041eaa2c788c86fae4cfca71e2bdec94a5021b-rootfs.mount: Deactivated successfully. Sep 11 00:23:30.102248 kubelet[2832]: E0911 00:23:30.102181 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:30.105592 containerd[1608]: time="2025-09-11T00:23:30.105396877Z" level=info msg="CreateContainer within sandbox \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 11 00:23:30.794291 kubelet[2832]: E0911 00:23:30.794218 2832 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 11 00:23:31.045992 containerd[1608]: time="2025-09-11T00:23:31.044926249Z" level=info msg="Container ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:23:31.065401 containerd[1608]: time="2025-09-11T00:23:31.065304279Z" level=info msg="CreateContainer within sandbox \"2397ed506dcae93b6feb394f48508eefa05a6775958344abbb239f48ead6f9e1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee\"" Sep 11 00:23:31.066266 containerd[1608]: time="2025-09-11T00:23:31.066180816Z" level=info msg="StartContainer for \"ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee\"" Sep 11 00:23:31.067581 containerd[1608]: time="2025-09-11T00:23:31.067542336Z" level=info msg="connecting to shim ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee" address="unix:///run/containerd/s/97942859548a6958b0de1da5c46caaeab7eaaa2dc607eebfe2a74971bdb277fa" protocol=ttrpc version=3 Sep 11 00:23:31.099226 systemd[1]: Started cri-containerd-ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee.scope - libcontainer container ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee. 
Sep 11 00:23:31.175952 containerd[1608]: time="2025-09-11T00:23:31.175900075Z" level=info msg="StartContainer for \"ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee\" returns successfully" Sep 11 00:23:31.258991 containerd[1608]: time="2025-09-11T00:23:31.258937836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee\" id:\"511d692da0a8acfc2aaa031f19c3ee44f4d9ebbe1a14970a3d4ca7e3d733312c\" pid:5117 exited_at:{seconds:1757550211 nanos:258543536}" Sep 11 00:23:31.669921 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 11 00:23:32.118366 kubelet[2832]: E0911 00:23:32.118201 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:32.139412 kubelet[2832]: I0911 00:23:32.139033 2832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mghhn" podStartSLOduration=6.139004798 podStartE2EDuration="6.139004798s" podCreationTimestamp="2025-09-11 00:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:23:32.136412649 +0000 UTC m=+211.803479914" watchObservedRunningTime="2025-09-11 00:23:32.139004798 +0000 UTC m=+211.806072053" Sep 11 00:23:33.120170 kubelet[2832]: E0911 00:23:33.120097 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:33.509716 containerd[1608]: time="2025-09-11T00:23:33.509531878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee\" id:\"7d952444aa97bc3d37afe61c090cc0d5da76061146a6666c69bf827747b88575\" pid:5220 exit_status:1 exited_at:{seconds:1757550213 nanos:509091640}" Sep 11 00:23:35.237131 systemd-networkd[1527]: lxc_health: Link UP Sep 11 00:23:35.250409 systemd-networkd[1527]: lxc_health: Gained carrier Sep 11 00:23:35.288584 kubelet[2832]: I0911 00:23:35.288514 2832 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-11T00:23:35Z","lastTransitionTime":"2025-09-11T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 11 00:23:35.683335 containerd[1608]: time="2025-09-11T00:23:35.683271750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee\" id:\"2ec29b21a8921fe97c0570b6c13809425d82cf4ec48280362b8104a9de4e57f2\" pid:5644 exited_at:{seconds:1757550215 nanos:682672962}" Sep 11 00:23:36.645711 kubelet[2832]: E0911 00:23:36.645668 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:36.790314 systemd-networkd[1527]: lxc_health: Gained IPv6LL Sep 11 00:23:37.128631 kubelet[2832]: E0911 00:23:37.128601 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:37.789124 containerd[1608]: 
time="2025-09-11T00:23:37.789064615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee\" id:\"eb2c56753fea1aba905fba6739189bec6b2a050d44e1d7e5d3a1052c05269027\" pid:5684 exited_at:{seconds:1757550217 nanos:788665435}" Sep 11 00:23:38.130494 kubelet[2832]: E0911 00:23:38.130465 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:23:39.908650 containerd[1608]: time="2025-09-11T00:23:39.908584805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee\" id:\"619de329d5b5f552149d07fa05f962c3d41727ce9bf00a0277861d75d8a8e2c5\" pid:5714 exited_at:{seconds:1757550219 nanos:908121846}" Sep 11 00:23:42.014582 containerd[1608]: time="2025-09-11T00:23:42.014531642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8341c9ead2d7e595d6e75d709d14b0314ada985b6755ba1000249a0fbd57ee\" id:\"93962b08f5a073c5015bca5607161125a2521f660d61ad2402f5bb3557dc4025\" pid:5738 exited_at:{seconds:1757550222 nanos:13743875}" Sep 11 00:23:42.021505 sshd[4852]: Connection closed by 10.0.0.1 port 50714 Sep 11 00:23:42.021940 sshd-session[4846]: pam_unix(sshd:session): session closed for user core Sep 11 00:23:42.028286 systemd[1]: sshd@35-10.0.0.39:22-10.0.0.1:50714.service: Deactivated successfully. Sep 11 00:23:42.030764 systemd[1]: session-36.scope: Deactivated successfully. Sep 11 00:23:42.032131 systemd-logind[1579]: Session 36 logged out. Waiting for processes to exit. Sep 11 00:23:42.033599 systemd-logind[1579]: Removed session 36.