Sep 5 00:37:32.886480 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:16:03 -00 2025 Sep 5 00:37:32.886505 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8098f8b005e7ec91dd20cd6ed926d3f56a1236d6886e322045b268199230ff25 Sep 5 00:37:32.886516 kernel: BIOS-provided physical RAM map: Sep 5 00:37:32.886523 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 5 00:37:32.886529 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 5 00:37:32.886536 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 5 00:37:32.886544 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 5 00:37:32.886550 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 5 00:37:32.886561 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 5 00:37:32.886568 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 5 00:37:32.886575 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 5 00:37:32.886581 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 5 00:37:32.886588 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 5 00:37:32.886595 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 5 00:37:32.886605 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 5 00:37:32.886613 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 5 00:37:32.886622 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 5 00:37:32.886629 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 5 00:37:32.886637 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 5 00:37:32.886644 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 5 00:37:32.886651 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 5 00:37:32.886658 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 5 00:37:32.886665 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 5 00:37:32.886672 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 5 00:37:32.886693 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 5 00:37:32.886703 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 5 00:37:32.886711 kernel: NX (Execute Disable) protection: active Sep 5 00:37:32.886718 kernel: APIC: Static calls initialized Sep 5 00:37:32.886725 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Sep 5 00:37:32.886734 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Sep 5 00:37:32.886742 kernel: extended physical RAM map: Sep 5 00:37:32.886750 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 5 00:37:32.886759 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 5 00:37:32.886766 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 5 00:37:32.886773 kernel: reserve setup_data: [mem 
0x0000000000808000-0x000000000080afff] usable Sep 5 00:37:32.886780 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 5 00:37:32.886790 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 5 00:37:32.886797 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 5 00:37:32.886804 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Sep 5 00:37:32.886811 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Sep 5 00:37:32.886822 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Sep 5 00:37:32.886829 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Sep 5 00:37:32.886839 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Sep 5 00:37:32.886847 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 5 00:37:32.886854 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 5 00:37:32.886862 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 5 00:37:32.886869 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 5 00:37:32.886877 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 5 00:37:32.886884 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 5 00:37:32.886891 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 5 00:37:32.886899 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 5 00:37:32.886908 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 5 00:37:32.886916 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 5 00:37:32.886934 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 5 00:37:32.886942 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 5 00:37:32.886949 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 5 00:37:32.886966 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 5 00:37:32.886983 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 5 00:37:32.886993 kernel: efi: EFI v2.7 by EDK II Sep 5 00:37:32.887001 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Sep 5 00:37:32.887009 kernel: random: crng init done Sep 5 00:37:32.887018 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 5 00:37:32.887026 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 5 00:37:32.887038 kernel: secureboot: Secure boot disabled Sep 5 00:37:32.887046 kernel: SMBIOS 2.8 present. 
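The BIOS-e820 table and the rebuilt "extended physical RAM map" above are the firmware's description of which physical ranges are usable RAM. As a minimal sketch (assuming a saved copy of this console output at boot.log; the path and regex are illustrative, not part of the boot flow), the usable ranges can be tallied straight from the BIOS-e820 lines:

```python
import re

# Matches the "BIOS-e820: [mem 0x...-0x...] usable" entries shown above
# (the duplicated "reserve setup_data" map is deliberately ignored so
# ranges are not counted twice).
USABLE_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")

def usable_ram_bytes(log_text: str) -> int:
    """Sum the sizes of all ranges the firmware reported as usable."""
    total = 0
    for start, end in USABLE_RE.findall(log_text):
        total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

if __name__ == "__main__":
    with open("boot.log") as f:  # assumed: a saved copy of this console log
        print(f"{usable_ram_bytes(f.read()) / 2**20:.1f} MiB usable")
```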
Sep 5 00:37:32.887059 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 5 00:37:32.887067 kernel: DMI: Memory slots populated: 1/1 Sep 5 00:37:32.887074 kernel: Hypervisor detected: KVM Sep 5 00:37:32.887082 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 5 00:37:32.887089 kernel: kvm-clock: using sched offset of 4142568303 cycles Sep 5 00:37:32.887097 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 5 00:37:32.887105 kernel: tsc: Detected 2794.750 MHz processor Sep 5 00:37:32.887113 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 5 00:37:32.887120 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 5 00:37:32.887131 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 5 00:37:32.887139 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 5 00:37:32.887147 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 5 00:37:32.887154 kernel: Using GB pages for direct mapping Sep 5 00:37:32.887162 kernel: ACPI: Early table checksum verification disabled Sep 5 00:37:32.887169 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 5 00:37:32.887177 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 5 00:37:32.887185 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:37:32.887193 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:37:32.887202 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 5 00:37:32.887210 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:37:32.887217 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:37:32.887225 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:37:32.887233 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:37:32.887240 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 5 00:37:32.887248 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 5 00:37:32.887255 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 5 00:37:32.887265 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 5 00:37:32.887273 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 5 00:37:32.887280 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 5 00:37:32.887288 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 5 00:37:32.887295 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 5 00:37:32.887303 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 5 00:37:32.887310 kernel: No NUMA configuration found Sep 5 00:37:32.887318 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 5 00:37:32.887325 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Sep 5 00:37:32.887333 kernel: Zone ranges: Sep 5 00:37:32.887343 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 5 00:37:32.887350 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 5 00:37:32.887358 kernel: Normal empty Sep 5 00:37:32.887365 kernel: Device empty Sep 5 00:37:32.887373 kernel: Movable zone start for each node Sep 5 00:37:32.887389 kernel: Early memory node ranges Sep 5 
00:37:32.887396 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 5 00:37:32.887404 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 5 00:37:32.887414 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 5 00:37:32.887424 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 5 00:37:32.887432 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 5 00:37:32.887439 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 5 00:37:32.887446 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Sep 5 00:37:32.887454 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Sep 5 00:37:32.887461 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 5 00:37:32.887471 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:37:32.887479 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 5 00:37:32.887496 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 5 00:37:32.887503 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:37:32.887511 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 5 00:37:32.887519 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 5 00:37:32.887529 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 5 00:37:32.887537 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 5 00:37:32.887545 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 5 00:37:32.887553 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 5 00:37:32.887561 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 5 00:37:32.887571 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 5 00:37:32.887579 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 5 00:37:32.887586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 5 00:37:32.887594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 5 00:37:32.887602 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 5 00:37:32.887610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 5 00:37:32.887618 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 5 00:37:32.887626 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 5 00:37:32.887634 kernel: TSC deadline timer available Sep 5 00:37:32.887644 kernel: CPU topo: Max. logical packages: 1 Sep 5 00:37:32.887652 kernel: CPU topo: Max. logical dies: 1 Sep 5 00:37:32.887660 kernel: CPU topo: Max. dies per package: 1 Sep 5 00:37:32.887668 kernel: CPU topo: Max. threads per core: 1 Sep 5 00:37:32.887675 kernel: CPU topo: Num. cores per package: 4 Sep 5 00:37:32.887703 kernel: CPU topo: Num. 
threads per package: 4 Sep 5 00:37:32.887711 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 5 00:37:32.887719 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 5 00:37:32.887726 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 5 00:37:32.887738 kernel: kvm-guest: setup PV sched yield Sep 5 00:37:32.887745 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 5 00:37:32.887753 kernel: Booting paravirtualized kernel on KVM Sep 5 00:37:32.887761 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 5 00:37:32.887769 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 5 00:37:32.887777 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 5 00:37:32.887785 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 5 00:37:32.887793 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 5 00:37:32.887801 kernel: kvm-guest: PV spinlocks enabled Sep 5 00:37:32.887811 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 5 00:37:32.887820 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8098f8b005e7ec91dd20cd6ed926d3f56a1236d6886e322045b268199230ff25 Sep 5 00:37:32.887831 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 00:37:32.887839 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 5 00:37:32.887847 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 00:37:32.887855 kernel: Fallback order for Node 0: 0 Sep 5 00:37:32.887863 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Sep 5 00:37:32.887879 kernel: Policy zone: DMA32 Sep 5 00:37:32.887892 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 00:37:32.887910 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 5 00:37:32.887918 kernel: ftrace: allocating 40099 entries in 157 pages Sep 5 00:37:32.887926 kernel: ftrace: allocated 157 pages with 5 groups Sep 5 00:37:32.887934 kernel: Dynamic Preempt: voluntary Sep 5 00:37:32.887942 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 00:37:32.887950 kernel: rcu: RCU event tracing is enabled. Sep 5 00:37:32.887958 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 5 00:37:32.887972 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 00:37:32.887980 kernel: Rude variant of Tasks RCU enabled. Sep 5 00:37:32.887992 kernel: Tracing variant of Tasks RCU enabled. Sep 5 00:37:32.887999 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 00:37:32.888010 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 5 00:37:32.888018 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:37:32.888026 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:37:32.888034 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
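The "Kernel command line" entry above carries the parameters the rest of this log acts on (root=LABEL=ROOT, verity.usrhash=..., flatcar.first_boot=detected), and the unrecognized BOOT_IMAGE= parameter is passed through to user space, as noted. A small sketch of splitting such a string into key/value pairs, ignoring quoting corner cases (on a running system the same string is readable from /proc/cmdline):

```python
def parse_cmdline(cmdline: str) -> dict[str, str]:
    """Split a kernel command line into {parameter: value}.

    Bare flags keep an empty value; repeated parameters (such as the
    doubled rootflags=rw above) keep the last occurrence in this sketch.
    """
    params: dict[str, str] = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value
    return params

# Shortened form of the command line logged above, for illustration.
opts = parse_cmdline("rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
                     "console=ttyS0,115200 flatcar.first_boot=detected")
print(opts["root"], opts["console"])  # LABEL=ROOT ttyS0,115200
```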
Sep 5 00:37:32.888042 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 5 00:37:32.888050 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 00:37:32.888058 kernel: Console: colour dummy device 80x25 Sep 5 00:37:32.888068 kernel: printk: legacy console [ttyS0] enabled Sep 5 00:37:32.888076 kernel: ACPI: Core revision 20240827 Sep 5 00:37:32.888084 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 5 00:37:32.888092 kernel: APIC: Switch to symmetric I/O mode setup Sep 5 00:37:32.888100 kernel: x2apic enabled Sep 5 00:37:32.888108 kernel: APIC: Switched APIC routing to: physical x2apic Sep 5 00:37:32.888116 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 5 00:37:32.888124 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 5 00:37:32.888132 kernel: kvm-guest: setup PV IPIs Sep 5 00:37:32.888142 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 5 00:37:32.888150 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Sep 5 00:37:32.888158 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Sep 5 00:37:32.888166 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 5 00:37:32.888174 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 5 00:37:32.888182 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 5 00:37:32.888190 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 5 00:37:32.888198 kernel: Spectre V2 : Mitigation: Retpolines Sep 5 00:37:32.888208 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 5 00:37:32.888216 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 5 00:37:32.888223 kernel: active return thunk: retbleed_return_thunk Sep 5 00:37:32.888231 kernel: RETBleed: Mitigation: untrained return thunk Sep 5 00:37:32.888241 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 5 00:37:32.888249 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 5 00:37:32.888257 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 5 00:37:32.888266 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 5 00:37:32.888274 kernel: active return thunk: srso_return_thunk Sep 5 00:37:32.888284 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 5 00:37:32.888292 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 5 00:37:32.888300 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 5 00:37:32.888308 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 5 00:37:32.888316 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 5 00:37:32.888324 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 5 00:37:32.888332 kernel: Freeing SMP alternatives memory: 32K Sep 5 00:37:32.888340 kernel: pid_max: default: 32768 minimum: 301 Sep 5 00:37:32.888347 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 5 00:37:32.888357 kernel: landlock: Up and running. Sep 5 00:37:32.888365 kernel: SELinux: Initializing. 
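The Spectre V1/V2, RETBleed and Speculative Return Stack Overflow lines above record the mitigation choices made for this KVM guest at boot. On a running system the same status strings are exported under /sys/devices/system/cpu/vulnerabilities/; a minimal sketch for dumping them (assumes a kernel recent enough to provide that directory):

```python
from pathlib import Path

# Each file in this directory holds a one-line status such as
# "Mitigation: Retpolines" or "Vulnerable: Safe RET, no microcode",
# mirroring the boot-time messages above.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    print(f"{entry.name:30} {entry.read_text().strip()}")
```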
Sep 5 00:37:32.888373 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 00:37:32.888389 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 00:37:32.888397 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 5 00:37:32.888405 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 5 00:37:32.888413 kernel: ... version: 0 Sep 5 00:37:32.888420 kernel: ... bit width: 48 Sep 5 00:37:32.888428 kernel: ... generic registers: 6 Sep 5 00:37:32.888439 kernel: ... value mask: 0000ffffffffffff Sep 5 00:37:32.888447 kernel: ... max period: 00007fffffffffff Sep 5 00:37:32.888455 kernel: ... fixed-purpose events: 0 Sep 5 00:37:32.888463 kernel: ... event mask: 000000000000003f Sep 5 00:37:32.888470 kernel: signal: max sigframe size: 1776 Sep 5 00:37:32.888478 kernel: rcu: Hierarchical SRCU implementation. Sep 5 00:37:32.888488 kernel: rcu: Max phase no-delay instances is 400. Sep 5 00:37:32.888496 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 5 00:37:32.888504 kernel: smp: Bringing up secondary CPUs ... Sep 5 00:37:32.888515 kernel: smpboot: x86: Booting SMP configuration: Sep 5 00:37:32.888523 kernel: .... node #0, CPUs: #1 #2 #3 Sep 5 00:37:32.888530 kernel: smp: Brought up 1 node, 4 CPUs Sep 5 00:37:32.888538 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Sep 5 00:37:32.888547 kernel: Memory: 2424724K/2565800K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 135148K reserved, 0K cma-reserved) Sep 5 00:37:32.888555 kernel: devtmpfs: initialized Sep 5 00:37:32.888562 kernel: x86/mm: Memory block size: 128MB Sep 5 00:37:32.888570 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 5 00:37:32.888578 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 5 00:37:32.888589 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 5 00:37:32.888597 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 5 00:37:32.888605 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Sep 5 00:37:32.888613 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 5 00:37:32.888621 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 00:37:32.888629 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 5 00:37:32.888637 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 00:37:32.888645 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 00:37:32.888655 kernel: audit: initializing netlink subsys (disabled) Sep 5 00:37:32.888663 kernel: audit: type=2000 audit(1757032649.498:1): state=initialized audit_enabled=0 res=1 Sep 5 00:37:32.888670 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 00:37:32.888678 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 5 00:37:32.888700 kernel: cpuidle: using governor menu Sep 5 00:37:32.888708 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 00:37:32.888716 kernel: dca service started, version 1.12.1 Sep 5 00:37:32.888724 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Sep 5 00:37:32.888732 kernel: PCI: Using configuration type 1 for base access Sep 
5 00:37:32.888743 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 5 00:37:32.888750 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 00:37:32.888758 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 00:37:32.888766 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 00:37:32.888774 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 00:37:32.888782 kernel: ACPI: Added _OSI(Module Device) Sep 5 00:37:32.888789 kernel: ACPI: Added _OSI(Processor Device) Sep 5 00:37:32.888797 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 00:37:32.888805 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 00:37:32.888815 kernel: ACPI: Interpreter enabled Sep 5 00:37:32.888823 kernel: ACPI: PM: (supports S0 S3 S5) Sep 5 00:37:32.888830 kernel: ACPI: Using IOAPIC for interrupt routing Sep 5 00:37:32.888838 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 5 00:37:32.888846 kernel: PCI: Using E820 reservations for host bridge windows Sep 5 00:37:32.888854 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 5 00:37:32.888862 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 5 00:37:32.889072 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 00:37:32.889204 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 5 00:37:32.889324 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 5 00:37:32.889335 kernel: PCI host bridge to bus 0000:00 Sep 5 00:37:32.889485 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 5 00:37:32.889600 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 5 00:37:32.889736 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 5 00:37:32.889849 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 5 00:37:32.889962 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 5 00:37:32.890071 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 5 00:37:32.890180 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 5 00:37:32.890333 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 5 00:37:32.890474 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 5 00:37:32.890596 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 5 00:37:32.890767 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 5 00:37:32.890892 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 5 00:37:32.891012 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 5 00:37:32.891198 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 5 00:37:32.891324 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 5 00:37:32.891465 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 5 00:37:32.891586 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 5 00:37:32.891742 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 5 00:37:32.891874 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 5 00:37:32.891995 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] 
Sep 5 00:37:32.892124 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 5 00:37:32.892264 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 5 00:37:32.892394 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 5 00:37:32.892516 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 5 00:37:32.892643 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 5 00:37:32.892786 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 5 00:37:32.892925 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 5 00:37:32.893047 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 5 00:37:32.893186 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 5 00:37:32.893313 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 5 00:37:32.893442 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 5 00:37:32.893599 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 5 00:37:32.893746 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 5 00:37:32.893758 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 5 00:37:32.893766 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 5 00:37:32.893774 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 5 00:37:32.893782 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 5 00:37:32.893790 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 5 00:37:32.893797 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 5 00:37:32.893809 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 5 00:37:32.893817 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 5 00:37:32.893825 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 5 00:37:32.893833 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 5 00:37:32.893841 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 5 00:37:32.893849 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 5 00:37:32.893857 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 5 00:37:32.893864 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 5 00:37:32.893872 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 5 00:37:32.893882 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 5 00:37:32.893890 kernel: iommu: Default domain type: Translated Sep 5 00:37:32.893898 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 5 00:37:32.893906 kernel: efivars: Registered efivars operations Sep 5 00:37:32.893914 kernel: PCI: Using ACPI for IRQ routing Sep 5 00:37:32.893922 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 5 00:37:32.893930 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 5 00:37:32.893937 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 5 00:37:32.893945 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Sep 5 00:37:32.893955 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Sep 5 00:37:32.893963 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 5 00:37:32.893970 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 5 00:37:32.893978 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Sep 5 00:37:32.893986 kernel: e820: reserve RAM buffer [mem 
0x9cedc000-0x9fffffff] Sep 5 00:37:32.894107 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 5 00:37:32.894228 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 5 00:37:32.894347 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 5 00:37:32.894361 kernel: vgaarb: loaded Sep 5 00:37:32.894369 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 5 00:37:32.894385 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 5 00:37:32.894393 kernel: clocksource: Switched to clocksource kvm-clock Sep 5 00:37:32.894401 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 00:37:32.894410 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 00:37:32.894418 kernel: pnp: PnP ACPI init Sep 5 00:37:32.894576 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 5 00:37:32.894593 kernel: pnp: PnP ACPI: found 6 devices Sep 5 00:37:32.894602 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 5 00:37:32.894610 kernel: NET: Registered PF_INET protocol family Sep 5 00:37:32.894618 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 00:37:32.894626 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 00:37:32.894634 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 00:37:32.894643 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 00:37:32.894651 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 00:37:32.894659 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 00:37:32.894669 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 00:37:32.894677 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 00:37:32.894700 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 00:37:32.894708 kernel: NET: Registered PF_XDP protocol family Sep 5 00:37:32.894833 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 5 00:37:32.894954 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 5 00:37:32.895065 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 5 00:37:32.895174 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 5 00:37:32.895289 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 5 00:37:32.895409 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 5 00:37:32.895521 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 5 00:37:32.895641 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 5 00:37:32.895653 kernel: PCI: CLS 0 bytes, default 64 Sep 5 00:37:32.895661 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Sep 5 00:37:32.895670 kernel: Initialise system trusted keyrings Sep 5 00:37:32.895695 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 00:37:32.895704 kernel: Key type asymmetric registered Sep 5 00:37:32.895712 kernel: Asymmetric key parser 'x509' registered Sep 5 00:37:32.895720 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 5 00:37:32.895729 kernel: io scheduler mq-deadline registered Sep 5 00:37:32.895737 kernel: io scheduler kyber registered Sep 5 00:37:32.895745 
kernel: io scheduler bfq registered Sep 5 00:37:32.895756 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 5 00:37:32.895765 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 5 00:37:32.895773 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 5 00:37:32.895782 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 5 00:37:32.895790 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 00:37:32.895798 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 5 00:37:32.895807 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 5 00:37:32.895815 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 5 00:37:32.895823 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 5 00:37:32.895834 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 5 00:37:32.895996 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 5 00:37:32.896121 kernel: rtc_cmos 00:04: registered as rtc0 Sep 5 00:37:32.896236 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:37:32 UTC (1757032652) Sep 5 00:37:32.896349 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 5 00:37:32.896359 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 5 00:37:32.896367 kernel: efifb: probing for efifb Sep 5 00:37:32.896385 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 5 00:37:32.896397 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 5 00:37:32.896405 kernel: efifb: scrolling: redraw Sep 5 00:37:32.896414 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 5 00:37:32.896422 kernel: Console: switching to colour frame buffer device 160x50 Sep 5 00:37:32.896431 kernel: fb0: EFI VGA frame buffer device Sep 5 00:37:32.896439 kernel: pstore: Using crash dump compression: deflate Sep 5 00:37:32.896447 kernel: pstore: Registered efi_pstore as persistent store backend Sep 5 00:37:32.896455 kernel: NET: Registered PF_INET6 protocol family Sep 5 00:37:32.896463 kernel: Segment Routing with IPv6 Sep 5 00:37:32.896474 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 00:37:32.896482 kernel: NET: Registered PF_PACKET protocol family Sep 5 00:37:32.896490 kernel: Key type dns_resolver registered Sep 5 00:37:32.896498 kernel: IPI shorthand broadcast: enabled Sep 5 00:37:32.896507 kernel: sched_clock: Marking stable (3395002564, 165474705)->(3728074048, -167596779) Sep 5 00:37:32.896515 kernel: registered taskstats version 1 Sep 5 00:37:32.896523 kernel: Loading compiled-in X.509 certificates Sep 5 00:37:32.896532 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 46ac630679a94cf97f27908ed9d949b10b130587' Sep 5 00:37:32.896540 kernel: Demotion targets for Node 0: null Sep 5 00:37:32.896550 kernel: Key type .fscrypt registered Sep 5 00:37:32.896558 kernel: Key type fscrypt-provisioning registered Sep 5 00:37:32.896567 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 00:37:32.896575 kernel: ima: Allocated hash algorithm: sha1 Sep 5 00:37:32.896583 kernel: ima: No architecture policies found Sep 5 00:37:32.896591 kernel: clk: Disabling unused clocks Sep 5 00:37:32.896599 kernel: Warning: unable to open an initial console. 
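The efifb figures above are internally consistent and can be checked with a few lines of arithmetic: a 1280x800 mode at 32 bits per pixel gives exactly the reported line length and the 4000 KiB framebuffer size.

```python
# Figures from the efifb lines above: 1280x800 at 32 bits per pixel.
width, height, bpp = 1280, 800, 32

line_length = width * bpp // 8            # bytes per scanline
size_kib = line_length * height // 1024   # total framebuffer size in KiB

print(line_length)  # 5120 -> matches "linelength=5120"
print(size_kib)     # 4000 -> matches "using 4000k, total 4000k"
```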
Sep 5 00:37:32.896608 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 5 00:37:32.896618 kernel: Write protecting the kernel read-only data: 24576k Sep 5 00:37:32.896626 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 5 00:37:32.896635 kernel: Run /init as init process Sep 5 00:37:32.896643 kernel: with arguments: Sep 5 00:37:32.896651 kernel: /init Sep 5 00:37:32.896659 kernel: with environment: Sep 5 00:37:32.896667 kernel: HOME=/ Sep 5 00:37:32.896675 kernel: TERM=linux Sep 5 00:37:32.896698 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 00:37:32.896710 systemd[1]: Successfully made /usr/ read-only. Sep 5 00:37:32.896724 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 5 00:37:32.896734 systemd[1]: Detected virtualization kvm. Sep 5 00:37:32.896743 systemd[1]: Detected architecture x86-64. Sep 5 00:37:32.896751 systemd[1]: Running in initrd. Sep 5 00:37:32.896760 systemd[1]: No hostname configured, using default hostname. Sep 5 00:37:32.896769 systemd[1]: Hostname set to . Sep 5 00:37:32.896777 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:37:32.896789 systemd[1]: Queued start job for default target initrd.target. Sep 5 00:37:32.896798 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:37:32.896807 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:37:32.896816 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 00:37:32.896825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:37:32.896834 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 00:37:32.896844 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 00:37:32.896857 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 00:37:32.896866 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 00:37:32.896874 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:37:32.896883 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:37:32.896892 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:37:32.896900 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:37:32.896909 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:37:32.896918 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:37:32.896929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:37:32.896938 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:37:32.896947 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 00:37:32.896956 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 5 00:37:32.896965 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 5 00:37:32.896974 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:37:32.896985 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:37:32.896994 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:37:32.897004 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 00:37:32.897013 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:37:32.897022 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 00:37:32.897031 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 5 00:37:32.897040 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 00:37:32.897051 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:37:32.897060 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:37:32.897071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:37:32.897080 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 00:37:32.897092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:37:32.897100 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 00:37:32.897109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:37:32.897213 systemd-journald[220]: Collecting audit messages is disabled. Sep 5 00:37:32.897237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:37:32.897246 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:37:32.897255 systemd-journald[220]: Journal started Sep 5 00:37:32.897279 systemd-journald[220]: Runtime Journal (/run/log/journal/d3112b23f278435f9444488e445e547e) is 6M, max 48.5M, 42.4M free. Sep 5 00:37:32.885802 systemd-modules-load[223]: Inserted module 'overlay' Sep 5 00:37:32.898981 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:37:32.907802 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:37:32.910269 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:37:32.911457 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:37:32.918784 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 00:37:32.920569 systemd-modules-load[223]: Inserted module 'br_netfilter' Sep 5 00:37:32.921030 kernel: Bridge firewalling registered Sep 5 00:37:32.931900 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:37:32.932697 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:37:32.936142 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 00:37:32.938365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:37:32.942414 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:37:32.946237 systemd-tmpfiles[240]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Sep 5 00:37:32.951201 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:37:32.955592 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:37:32.956255 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8098f8b005e7ec91dd20cd6ed926d3f56a1236d6886e322045b268199230ff25 Sep 5 00:37:32.959373 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:37:33.004067 systemd-resolved[273]: Positive Trust Anchors: Sep 5 00:37:33.004091 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:37:33.004122 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:37:33.006891 systemd-resolved[273]: Defaulting to hostname 'linux'. Sep 5 00:37:33.008213 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:37:33.012764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:37:33.064716 kernel: SCSI subsystem initialized Sep 5 00:37:33.073710 kernel: Loading iSCSI transport class v2.0-870. Sep 5 00:37:33.084724 kernel: iscsi: registered transport (tcp) Sep 5 00:37:33.105850 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:37:33.105884 kernel: QLogic iSCSI HBA Driver Sep 5 00:37:33.132675 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:37:33.152502 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:37:33.153948 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:37:33.218867 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:37:33.220811 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:37:33.281710 kernel: raid6: avx2x4 gen() 30520 MB/s Sep 5 00:37:33.298706 kernel: raid6: avx2x2 gen() 30999 MB/s Sep 5 00:37:33.315736 kernel: raid6: avx2x1 gen() 25940 MB/s Sep 5 00:37:33.315755 kernel: raid6: using algorithm avx2x2 gen() 30999 MB/s Sep 5 00:37:33.333742 kernel: raid6: .... xor() 19881 MB/s, rmw enabled Sep 5 00:37:33.333764 kernel: raid6: using avx2x2 recovery algorithm Sep 5 00:37:33.354711 kernel: xor: automatically using best checksumming function avx Sep 5 00:37:33.526717 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:37:33.536204 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:37:33.538490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:37:33.573049 systemd-udevd[470]: Using default interface naming scheme 'v255'. 
Sep 5 00:37:33.579879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:37:33.581047 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:37:33.609484 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Sep 5 00:37:33.640945 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:37:33.644832 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:37:33.823713 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:37:33.827465 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 00:37:33.868731 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 5 00:37:33.871752 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 5 00:37:33.880732 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 00:37:33.880757 kernel: GPT:9289727 != 19775487 Sep 5 00:37:33.880767 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 00:37:33.880778 kernel: GPT:9289727 != 19775487 Sep 5 00:37:33.882508 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 00:37:33.882560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:37:33.884728 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 5 00:37:33.895725 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 00:37:33.909024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:37:33.909104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:37:33.916245 kernel: AES CTR mode by8 optimization enabled Sep 5 00:37:33.912440 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:37:33.914806 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:37:33.920854 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 5 00:37:33.927717 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:37:33.927875 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:37:33.935717 kernel: libata version 3.00 loaded. Sep 5 00:37:33.938804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:37:33.965753 kernel: ahci 0000:00:1f.2: version 3.0 Sep 5 00:37:33.967766 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 5 00:37:33.970495 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 5 00:37:33.970700 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 5 00:37:33.970847 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 5 00:37:33.977711 kernel: scsi host0: ahci Sep 5 00:37:33.978441 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
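The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 19775487) are what the kernel prints when the backup GPT header is not at the final sector. The usual cause, a disk image written to a larger virtual disk, is an inference here, but the arithmetic below uses only the two figures in the log:

```python
SECTOR = 512  # virtio_blk reported 19775488 512-byte logical blocks above

alt_header_lba = 9_289_727    # where the backup GPT header currently sits
last_lba = 19_775_487         # actual last sector of the disk

print(f"backup header at ~{(alt_header_lba + 1) * SECTOR / 2**30:.2f} GiB")  # ~4.43 GiB
print(f"disk ends at     ~{(last_lba + 1) * SECTOR / 2**30:.2f} GiB")        # ~9.43 GiB
# The backup structures were laid out for a ~4.4 GiB image, so the kernel
# warns until they are relocated to the true end of the disk (what, for
# example, `sgdisk -e` does).
```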
Sep 5 00:37:33.981956 kernel: scsi host1: ahci Sep 5 00:37:33.982134 kernel: scsi host2: ahci Sep 5 00:37:33.982282 kernel: scsi host3: ahci Sep 5 00:37:33.983766 kernel: scsi host4: ahci Sep 5 00:37:33.983938 kernel: scsi host5: ahci Sep 5 00:37:33.985230 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 5 00:37:33.985252 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 5 00:37:33.986096 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 5 00:37:33.987856 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 5 00:37:33.987885 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 5 00:37:33.988823 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 5 00:37:33.991354 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 5 00:37:34.015844 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:37:34.024588 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 5 00:37:34.025080 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 5 00:37:34.026677 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:37:34.059937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:37:34.070666 disk-uuid[632]: Primary Header is updated. Sep 5 00:37:34.070666 disk-uuid[632]: Secondary Entries is updated. Sep 5 00:37:34.070666 disk-uuid[632]: Secondary Header is updated. Sep 5 00:37:34.074713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:37:34.079730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:37:34.346737 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 5 00:37:34.346832 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 5 00:37:34.346843 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 00:37:34.346854 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 5 00:37:34.347743 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 00:37:34.348715 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 00:37:34.349719 kernel: ata3.00: LPM support broken, forcing max_power Sep 5 00:37:34.349750 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 5 00:37:34.350112 kernel: ata3.00: applying bridge limits Sep 5 00:37:34.351252 kernel: ata3.00: LPM support broken, forcing max_power Sep 5 00:37:34.351265 kernel: ata3.00: configured for UDMA/100 Sep 5 00:37:34.352718 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 00:37:34.405301 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 5 00:37:34.405753 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 00:37:34.425804 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 5 00:37:34.794248 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:37:34.797365 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:37:34.800109 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:37:34.802701 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Sep 5 00:37:34.806267 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:37:34.831391 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:37:35.152722 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:37:35.153309 disk-uuid[633]: The operation has completed successfully. Sep 5 00:37:35.185750 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 00:37:35.185914 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:37:35.229346 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:37:35.333607 sh[662]: Success Sep 5 00:37:35.352739 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 00:37:35.352829 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:37:35.352841 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 5 00:37:35.362739 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 5 00:37:35.396604 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:37:35.433892 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 00:37:35.461255 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 00:37:35.468719 kernel: BTRFS: device fsid 576be3ac-7582-49ed-82f8-99c78beeeda2 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (674) Sep 5 00:37:35.468750 kernel: BTRFS info (device dm-0): first mount of filesystem 576be3ac-7582-49ed-82f8-99c78beeeda2 Sep 5 00:37:35.470145 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:37:35.475717 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:37:35.475736 kernel: BTRFS info (device dm-0): enabling free space tree Sep 5 00:37:35.477010 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:37:35.477810 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 5 00:37:35.479198 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 00:37:35.480186 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 00:37:35.482848 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 00:37:35.522633 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707) Sep 5 00:37:35.522676 kernel: BTRFS info (device vda6): first mount of filesystem 2861b466-0188-457c-9fd5-d64bb65bd98a Sep 5 00:37:35.522710 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:37:35.526724 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:37:35.526758 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:37:35.531722 kernel: BTRFS info (device vda6): last unmount of filesystem 2861b466-0188-457c-9fd5-d64bb65bd98a Sep 5 00:37:35.533186 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 00:37:35.534711 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 00:37:35.643283 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:37:35.646031 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 5 00:37:35.785734 ignition[764]: Ignition 2.21.0 Sep 5 00:37:35.786346 ignition[764]: Stage: fetch-offline Sep 5 00:37:35.786397 ignition[764]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:37:35.786407 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:37:35.786518 ignition[764]: parsed url from cmdline: "" Sep 5 00:37:35.786522 ignition[764]: no config URL provided Sep 5 00:37:35.786527 ignition[764]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 00:37:35.786536 ignition[764]: no config at "/usr/lib/ignition/user.ign" Sep 5 00:37:35.786566 ignition[764]: op(1): [started] loading QEMU firmware config module Sep 5 00:37:35.786571 ignition[764]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 5 00:37:35.792821 systemd-networkd[844]: lo: Link UP Sep 5 00:37:35.792825 systemd-networkd[844]: lo: Gained carrier Sep 5 00:37:35.794235 ignition[764]: op(1): [finished] loading QEMU firmware config module Sep 5 00:37:35.794468 systemd-networkd[844]: Enumeration completed Sep 5 00:37:35.795105 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:37:35.795836 systemd-networkd[844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:37:35.795841 systemd-networkd[844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:37:35.797503 systemd[1]: Reached target network.target - Network. Sep 5 00:37:35.798650 systemd-networkd[844]: eth0: Link UP Sep 5 00:37:35.798846 systemd-networkd[844]: eth0: Gained carrier Sep 5 00:37:35.798855 systemd-networkd[844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:37:35.823792 systemd-networkd[844]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:37:35.846734 ignition[764]: parsing config with SHA512: 60e47b6d7248753d86161029ab0bf1bd7fe6d2060a24945594128639ea9c6ec3790fae4cfe8311e65d4be158c26cbfedea099e13b4862a3cea6525ac4508e9c5 Sep 5 00:37:35.852606 systemd-resolved[273]: Detected conflict on linux IN A 10.0.0.129 Sep 5 00:37:35.852624 systemd-resolved[273]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Sep 5 00:37:35.854879 unknown[764]: fetched base config from "system" Sep 5 00:37:35.855433 ignition[764]: fetch-offline: fetch-offline passed Sep 5 00:37:35.854889 unknown[764]: fetched user config from "qemu" Sep 5 00:37:35.855513 ignition[764]: Ignition finished successfully Sep 5 00:37:35.859164 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:37:35.861769 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 5 00:37:35.862639 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 00:37:35.908157 ignition[857]: Ignition 2.21.0 Sep 5 00:37:35.908174 ignition[857]: Stage: kargs Sep 5 00:37:35.908383 ignition[857]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:37:35.908395 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:37:35.913964 ignition[857]: kargs: kargs passed Sep 5 00:37:35.914063 ignition[857]: Ignition finished successfully Sep 5 00:37:35.920334 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 00:37:35.923394 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
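[Editor's note: the fetch-offline stage above finds no config URL on the command line and falls back to the qemu_fw_cfg module. On this platform a user config is normally injected through QEMU's firmware config interface; a minimal, illustrative example follows — the fw_cfg key is the standard Ignition one, while the file name, SSH key, and spec version 3.4.0 are assumptions.]

    # fragment added to the QEMU command line that launches the VM
    -fw_cfg name=opt/com.coreos/config,file=./config.ign

    # config.ign: smallest useful config, adding an SSH key for user "core"
    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": [ "ssh-ed25519 AAAA... user@host" ] }
        ]
      }
    }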
Sep 5 00:37:35.964505 ignition[865]: Ignition 2.21.0 Sep 5 00:37:35.964520 ignition[865]: Stage: disks Sep 5 00:37:35.964732 ignition[865]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:37:35.964744 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:37:35.967745 ignition[865]: disks: disks passed Sep 5 00:37:35.967841 ignition[865]: Ignition finished successfully Sep 5 00:37:35.972484 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 00:37:35.974637 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:37:35.975771 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 00:37:35.977900 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:37:35.980133 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:37:35.981960 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:37:35.984835 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:37:36.011540 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 5 00:37:36.028094 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:37:36.031331 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 00:37:36.145710 kernel: EXT4-fs (vda9): mounted filesystem b20472b4-8182-496c-8475-ee073ab90b5c r/w with ordered data mode. Quota mode: none. Sep 5 00:37:36.146135 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 00:37:36.147157 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 00:37:36.149924 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:37:36.151212 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 00:37:36.152128 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 5 00:37:36.152168 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 00:37:36.152192 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:37:36.166379 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 00:37:36.169364 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 00:37:36.172403 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884) Sep 5 00:37:36.174705 kernel: BTRFS info (device vda6): first mount of filesystem 2861b466-0188-457c-9fd5-d64bb65bd98a Sep 5 00:37:36.174750 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:37:36.177714 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:37:36.177741 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:37:36.179716 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:37:36.215368 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 00:37:36.219398 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory Sep 5 00:37:36.224217 initrd-setup-root[922]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 00:37:36.228627 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 00:37:36.432828 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
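[Editor's note: systemd-fsck above checked the ext4 ROOT filesystem before it was mounted at /sysroot; "clean, 15/553520 files, 52789/553472 blocks" is e2fsck's clean-bill summary. An equivalent non-destructive check by hand is sketched below; -n answers "no" to any repair prompt.]

    # read-only consistency check of the root filesystem, addressed by label
    e2fsck -n /dev/disk/by-label/ROOT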
Sep 5 00:37:36.435091 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 00:37:36.436958 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 00:37:36.457719 kernel: BTRFS info (device vda6): last unmount of filesystem 2861b466-0188-457c-9fd5-d64bb65bd98a Sep 5 00:37:36.467819 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 00:37:36.471463 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 5 00:37:36.517672 ignition[998]: INFO : Ignition 2.21.0 Sep 5 00:37:36.517672 ignition[998]: INFO : Stage: mount Sep 5 00:37:36.519558 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:37:36.519558 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:37:36.523151 ignition[998]: INFO : mount: mount passed Sep 5 00:37:36.523881 ignition[998]: INFO : Ignition finished successfully Sep 5 00:37:36.527097 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 00:37:36.529192 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 00:37:36.569082 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:37:36.610648 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010) Sep 5 00:37:36.610721 kernel: BTRFS info (device vda6): first mount of filesystem 2861b466-0188-457c-9fd5-d64bb65bd98a Sep 5 00:37:36.610740 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:37:36.615044 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:37:36.615073 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:37:36.617508 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:37:36.664260 ignition[1027]: INFO : Ignition 2.21.0 Sep 5 00:37:36.664260 ignition[1027]: INFO : Stage: files Sep 5 00:37:36.666468 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:37:36.666468 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:37:36.668862 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping Sep 5 00:37:36.668862 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 00:37:36.668862 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 00:37:36.672917 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 00:37:36.672917 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 00:37:36.672917 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 00:37:36.671163 unknown[1027]: wrote ssh authorized keys file for user: core Sep 5 00:37:36.678004 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 5 00:37:36.678004 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 5 00:37:36.715451 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 00:37:36.934391 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 5 00:37:36.934391 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" 
Sep 5 00:37:36.938719 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 5 00:37:37.125302 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 5 00:37:37.396381 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 00:37:37.396381 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 5 00:37:37.400449 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 00:37:37.400449 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:37:37.400449 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:37:37.400449 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:37:37.400449 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:37:37.400449 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:37:37.400449 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:37:37.413170 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:37:37.413170 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:37:37.413170 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 5 00:37:37.413170 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 5 00:37:37.413170 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 5 00:37:37.413170 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 5 00:37:37.802943 systemd-networkd[844]: eth0: Gained IPv6LL Sep 5 00:37:37.891761 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 5 00:37:38.789794 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 5 00:37:38.789794 ignition[1027]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 5 00:37:38.794288 ignition[1027]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:37:38.802001 ignition[1027]: INFO : files: op(c): op(d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:37:38.802001 ignition[1027]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 5 00:37:38.802001 ignition[1027]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 5 00:37:38.807520 ignition[1027]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:37:38.807520 ignition[1027]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:37:38.807520 ignition[1027]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 5 00:37:38.807520 ignition[1027]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 5 00:37:38.825206 ignition[1027]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:37:38.832641 ignition[1027]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:37:38.834415 ignition[1027]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 5 00:37:38.834415 ignition[1027]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 5 00:37:38.834415 ignition[1027]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 00:37:38.834415 ignition[1027]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:37:38.834415 ignition[1027]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:37:38.834415 ignition[1027]: INFO : files: files passed Sep 5 00:37:38.834415 ignition[1027]: INFO : Ignition finished successfully Sep 5 00:37:38.844092 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 00:37:38.847180 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 00:37:38.849455 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:37:38.865488 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 00:37:38.865624 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 00:37:38.870354 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory Sep 5 00:37:38.874200 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:37:38.874200 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:37:38.878318 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:37:38.881480 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:37:38.882525 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 00:37:38.885306 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:37:38.959435 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:37:38.959585 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 5 00:37:38.960353 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:37:38.964912 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:37:38.965565 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:37:38.966832 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:37:38.985161 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:37:38.989190 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:37:39.012335 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:37:39.013145 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:37:39.013553 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:37:39.014098 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:37:39.014298 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:37:39.021054 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:37:39.021565 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:37:39.022114 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:37:39.026255 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:37:39.026614 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:37:39.027174 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 5 00:37:39.027559 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:37:39.028112 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:37:39.037424 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:37:39.037786 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:37:39.038327 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:37:39.038673 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:37:39.038833 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:37:39.045677 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:37:39.046199 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:37:39.046493 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:37:39.046612 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:37:39.051991 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:37:39.052121 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:37:39.055607 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:37:39.055752 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:37:39.056312 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:37:39.059044 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:37:39.059195 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:37:39.060665 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:37:39.061148 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 5 00:37:39.061471 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:37:39.061568 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:37:39.066064 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:37:39.066181 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:37:39.068343 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:37:39.068463 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:37:39.069987 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:37:39.070093 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:37:39.073111 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 00:37:39.077473 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:37:39.078181 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:37:39.078328 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:37:39.080236 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:37:39.080360 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:37:39.088671 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:37:39.089178 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:37:39.111833 ignition[1082]: INFO : Ignition 2.21.0 Sep 5 00:37:39.111833 ignition[1082]: INFO : Stage: umount Sep 5 00:37:39.113763 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:37:39.113763 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:37:39.115938 ignition[1082]: INFO : umount: umount passed Sep 5 00:37:39.115938 ignition[1082]: INFO : Ignition finished successfully Sep 5 00:37:39.114937 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:37:39.118495 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:37:39.118677 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:37:39.119514 systemd[1]: Stopped target network.target - Network. Sep 5 00:37:39.121098 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:37:39.121166 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 00:37:39.121447 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:37:39.121493 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:37:39.121912 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:37:39.121967 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:37:39.122225 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:37:39.122271 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:37:39.122863 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:37:39.123361 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:37:39.140168 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:37:39.140338 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:37:39.144645 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 5 00:37:39.144985 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 5 00:37:39.145039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:37:39.150085 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 5 00:37:39.150404 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:37:39.150535 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:37:39.154623 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 5 00:37:39.155174 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 5 00:37:39.156372 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 00:37:39.156422 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:37:39.158043 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:37:39.160624 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 00:37:39.160694 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:37:39.163329 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:37:39.163379 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:37:39.170122 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:37:39.170201 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:37:39.170617 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:37:39.172124 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 5 00:37:39.192556 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:37:39.193619 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:37:39.196598 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:37:39.196759 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 00:37:39.199428 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:37:39.199516 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:37:39.199903 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:37:39.199941 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:37:39.200206 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:37:39.200272 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:37:39.200985 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:37:39.201037 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:37:39.201657 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:37:39.201723 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:37:39.212413 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:37:39.212716 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 5 00:37:39.212777 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:37:39.217808 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:37:39.217861 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 5 00:37:39.221827 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 00:37:39.221921 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:37:39.225836 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:37:39.226017 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:37:39.227518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:37:39.227605 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:37:39.245955 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:37:39.246285 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:37:39.326494 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:37:39.326656 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:37:39.327734 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:37:39.331282 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:37:39.331352 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:37:39.334453 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:37:39.364957 systemd[1]: Switching root. Sep 5 00:37:39.408263 systemd-journald[220]: Journal stopped Sep 5 00:37:40.906320 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Sep 5 00:37:40.906395 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:37:40.906409 kernel: SELinux: policy capability open_perms=1 Sep 5 00:37:40.906421 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:37:40.906432 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:37:40.906443 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:37:40.906455 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:37:40.906466 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:37:40.906482 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:37:40.906493 kernel: SELinux: policy capability userspace_initial_context=0 Sep 5 00:37:40.906512 kernel: audit: type=1403 audit(1757032659.988:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:37:40.906525 systemd[1]: Successfully loaded SELinux policy in 52.179ms. Sep 5 00:37:40.906545 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.220ms. Sep 5 00:37:40.906558 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 5 00:37:40.906571 systemd[1]: Detected virtualization kvm. Sep 5 00:37:40.906583 systemd[1]: Detected architecture x86-64. Sep 5 00:37:40.906600 systemd[1]: Detected first boot. Sep 5 00:37:40.906612 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:37:40.906624 kernel: Guest personality initialized and is inactive Sep 5 00:37:40.906639 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 5 00:37:40.906654 kernel: Initialized host personality Sep 5 00:37:40.906668 kernel: NET: Registered PF_VSOCK protocol family Sep 5 00:37:40.906758 zram_generator::config[1130]: No configuration found. 
Sep 5 00:37:40.906780 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:37:40.906797 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 5 00:37:40.906812 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:37:40.906827 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 00:37:40.906847 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:37:40.906863 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:37:40.906880 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:37:40.906895 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:37:40.906910 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:37:40.906927 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:37:40.906943 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:37:40.906955 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:37:40.906973 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:37:40.906985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:37:40.907000 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:37:40.907019 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:37:40.907039 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:37:40.907054 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:37:40.907072 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:37:40.907088 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 5 00:37:40.907106 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:37:40.907119 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:37:40.907131 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:37:40.907143 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:37:40.907156 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:37:40.907177 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:37:40.907190 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:37:40.907207 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:37:40.907223 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:37:40.907242 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:37:40.907258 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:37:40.907274 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:37:40.907291 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 5 00:37:40.907314 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:37:40.907326 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 5 00:37:40.907338 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:37:40.907350 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:37:40.907362 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:37:40.907377 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:37:40.907389 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 00:37:40.907402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:37:40.907414 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:37:40.907426 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:37:40.907438 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:37:40.907451 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:37:40.907463 systemd[1]: Reached target machines.target - Containers. Sep 5 00:37:40.907481 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:37:40.907500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:37:40.907517 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:37:40.907533 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:37:40.907547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:37:40.907559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:37:40.907571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:37:40.907583 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:37:40.907597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:37:40.907612 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 00:37:40.907624 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:37:40.907636 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:37:40.907648 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:37:40.907660 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:37:40.907673 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:37:40.907700 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:37:40.907713 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:37:40.907725 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:37:40.907740 kernel: loop: module loaded Sep 5 00:37:40.907751 kernel: fuse: init (API version 7.41) Sep 5 00:37:40.907763 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:37:40.907776 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Sep 5 00:37:40.907790 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:37:40.907803 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:37:40.907815 systemd[1]: Stopped verity-setup.service. Sep 5 00:37:40.907827 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:37:40.907840 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:37:40.907855 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:37:40.907869 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 00:37:40.907881 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:37:40.907893 kernel: ACPI: bus type drm_connector registered Sep 5 00:37:40.907907 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:37:40.907920 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:37:40.907932 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:37:40.907944 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:37:40.907957 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:37:40.907969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:37:40.907983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:37:40.907996 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:37:40.908009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:37:40.908050 systemd-journald[1197]: Collecting audit messages is disabled. Sep 5 00:37:40.908075 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:37:40.908093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:37:40.908109 systemd-journald[1197]: Journal started Sep 5 00:37:40.908136 systemd-journald[1197]: Runtime Journal (/run/log/journal/d3112b23f278435f9444488e445e547e) is 6M, max 48.5M, 42.4M free. Sep 5 00:37:40.584045 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:37:40.604864 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:37:40.605397 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:37:40.909288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:37:40.912298 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:37:40.913315 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:37:40.913547 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 00:37:40.914931 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:37:40.915227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:37:40.916644 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:37:40.918191 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:37:40.919765 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:37:40.921520 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 5 00:37:40.936448 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Sep 5 00:37:40.939629 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:37:40.942455 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:37:40.943611 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:37:40.943743 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:37:40.946936 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 5 00:37:40.962626 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:37:40.963998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:37:40.966564 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:37:40.970528 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:37:40.971871 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:37:40.974824 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:37:40.976006 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:37:40.977766 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:37:40.982826 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:37:41.078267 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:37:41.082922 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:37:41.086017 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:37:41.087374 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:37:41.093329 systemd-journald[1197]: Time spent on flushing to /var/log/journal/d3112b23f278435f9444488e445e547e is 17.343ms for 1079 entries. Sep 5 00:37:41.093329 systemd-journald[1197]: System Journal (/var/log/journal/d3112b23f278435f9444488e445e547e) is 8M, max 195.6M, 187.6M free. Sep 5 00:37:41.133473 systemd-journald[1197]: Received client request to flush runtime journal. Sep 5 00:37:41.133530 kernel: loop0: detected capacity change from 0 to 221472 Sep 5 00:37:41.101027 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:37:41.103316 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:37:41.109287 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 5 00:37:41.112963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:37:41.136900 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:37:41.140153 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:37:41.144615 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Sep 5 00:37:41.145040 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Sep 5 00:37:41.153634 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
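[Editor's note: the journald size report above (runtime journal 6M used of a 48.5M cap, system journal 8M of 195.6M) typically reflects journald's default percentage-of-filesystem limits rather than an explicit setting. If fixed caps were wanted, a drop-in like the following would do it; the values are examples only, not what this system uses.]

    # /etc/systemd/journald.conf.d/size.conf (illustrative)
    [Journal]
    # cap the persistent journal under /var/log/journal
    SystemMaxUse=200M
    # cap the volatile journal under /run/log/journal
    RuntimeMaxUse=48M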
Sep 5 00:37:41.157676 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:37:41.172034 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 5 00:37:41.173709 kernel: loop1: detected capacity change from 0 to 113872 Sep 5 00:37:41.310730 kernel: loop2: detected capacity change from 0 to 146240 Sep 5 00:37:41.340247 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:37:41.343620 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:37:41.346706 kernel: loop3: detected capacity change from 0 to 221472 Sep 5 00:37:41.358838 kernel: loop4: detected capacity change from 0 to 113872 Sep 5 00:37:41.449481 kernel: loop5: detected capacity change from 0 to 146240 Sep 5 00:37:41.469715 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 5 00:37:41.469738 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 5 00:37:41.477467 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:37:41.477831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:37:41.478617 (sd-merge)[1272]: Merged extensions into '/usr'. Sep 5 00:37:41.483806 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:37:41.483825 systemd[1]: Reloading... Sep 5 00:37:41.572717 zram_generator::config[1300]: No configuration found. Sep 5 00:37:41.728131 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:37:41.741516 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:37:41.829376 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:37:41.829494 systemd[1]: Reloading finished in 345 ms. Sep 5 00:37:41.851546 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:37:41.853269 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:37:41.876591 systemd[1]: Starting ensure-sysext.service... Sep 5 00:37:41.878811 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:37:41.889469 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:37:41.889489 systemd[1]: Reloading... Sep 5 00:37:41.911503 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 5 00:37:41.911652 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 5 00:37:41.912203 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:37:41.912474 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:37:41.913397 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:37:41.913735 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Sep 5 00:37:41.913807 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Sep 5 00:37:41.918946 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 5 00:37:41.919056 systemd-tmpfiles[1339]: Skipping /boot Sep 5 00:37:41.970542 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:37:41.971919 systemd-tmpfiles[1339]: Skipping /boot Sep 5 00:37:41.993779 zram_generator::config[1372]: No configuration found. Sep 5 00:37:42.092450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:37:42.175871 systemd[1]: Reloading finished in 285 ms. Sep 5 00:37:42.200626 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:37:42.217312 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:37:42.226270 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:37:42.228858 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:37:42.231298 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:37:42.239716 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:37:42.242305 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:37:42.246117 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:37:42.249950 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:37:42.250275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:37:42.257761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:37:42.272590 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:37:42.279019 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:37:42.280272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:37:42.280384 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:37:42.284106 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:37:42.285286 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:37:42.287174 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:37:42.289056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:37:42.289562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:37:42.293669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:37:42.293937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:37:42.296537 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:37:42.296888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:37:42.311033 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
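[Editor's note: the sd-merge lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes image is the one linked into /etc/extensions by the Ignition files stage earlier. That state can be inspected or re-evaluated with the stock sysext verbs, sketched below.]

    # show which extension images are currently merged and their overlay state
    systemd-sysext status
    # re-scan /etc/extensions and /var/lib/extensions after adding or removing an image
    systemd-sysext refresh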
Sep 5 00:37:42.311509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:37:42.313462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:37:42.316172 augenrules[1438]: No rules Sep 5 00:37:42.316967 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:37:42.326764 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:37:42.327881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:37:42.327992 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:37:42.329411 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:37:42.330463 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:37:42.332358 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:37:42.332713 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:37:42.334427 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:37:42.336450 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:37:42.338888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:37:42.339216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:37:42.341213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:37:42.341484 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:37:42.343221 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:37:42.343526 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:37:42.347837 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Sep 5 00:37:42.348945 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:37:42.354882 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:37:42.361314 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:37:42.365038 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:37:42.366256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:37:42.369915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:37:42.373655 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:37:42.375985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:37:42.380021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:37:42.381932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
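[Editor's note: "augenrules: No rules" above simply means no rule files exist under /etc/audit/rules.d/, so audit-rules.service loads an empty ruleset. Were audit rules wanted, a drop-in such as the following (purely illustrative) would be compiled in on the next augenrules run.]

    # /etc/audit/rules.d/10-identity.rules (illustrative)
    # record writes and attribute changes to the local user database
    -w /etc/passwd -p wa -k identity
    -w /etc/shadow -p wa -k identity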
Sep 5 00:37:42.382053 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:37:42.382205 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:37:42.382294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:37:42.385444 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:37:42.396906 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:37:42.403085 augenrules[1458]: /sbin/augenrules: No change Sep 5 00:37:42.409263 systemd[1]: Finished ensure-sysext.service. Sep 5 00:37:42.417715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:37:42.418110 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:37:42.419975 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:37:42.420614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:37:42.429704 augenrules[1508]: No rules Sep 5 00:37:42.430820 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:37:42.432080 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:37:42.433627 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:37:42.434001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:37:42.439429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:37:42.439814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:37:42.451878 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:37:42.451947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:37:42.457155 systemd-resolved[1408]: Positive Trust Anchors: Sep 5 00:37:42.457180 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:37:42.457215 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:37:42.457830 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:37:42.462652 systemd-resolved[1408]: Defaulting to hostname 'linux'. Sep 5 00:37:42.464656 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:37:42.466175 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:37:42.484604 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Sep 5 00:37:42.551911 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:37:42.554598 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:37:42.569710 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:37:42.580428 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:37:42.580702 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 5 00:37:42.598754 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:37:42.601290 systemd-networkd[1486]: lo: Link UP Sep 5 00:37:42.601304 systemd-networkd[1486]: lo: Gained carrier Sep 5 00:37:42.603155 systemd-networkd[1486]: Enumeration completed Sep 5 00:37:42.603252 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:37:42.604555 systemd[1]: Reached target network.target - Network. Sep 5 00:37:42.607160 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 5 00:37:42.607946 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:37:42.607952 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:37:42.609708 systemd-networkd[1486]: eth0: Link UP Sep 5 00:37:42.609902 systemd-networkd[1486]: eth0: Gained carrier Sep 5 00:37:42.609926 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:37:42.610935 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:37:42.626772 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 5 00:37:42.627069 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 00:37:42.629747 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 00:37:42.632748 systemd-networkd[1486]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:37:42.652234 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 5 00:37:42.659943 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:37:43.139316 systemd-resolved[1408]: Clock change detected. Flushing caches. Sep 5 00:37:43.139436 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:37:43.139796 systemd-timesyncd[1519]: Initial clock synchronization to Fri 2025-09-05 00:37:43.139266 UTC. Sep 5 00:37:43.140289 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:37:43.141492 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:37:43.142794 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:37:43.144071 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 5 00:37:43.145267 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:37:43.146486 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:37:43.146516 systemd[1]: Reached target paths.target - Path Units. 
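The "Clock change detected" message above follows systemd-timesyncd's first successful sync against 10.0.0.1. A rough, illustrative Python sketch (not from the log) estimates the size of the step from the two surrounding log timestamps; the result is only approximate, since ordinary logging latency is included in the difference:

    from datetime import datetime

    # Last timestamp logged before the sync and the first one logged after
    # "Clock change detected. Flushing caches."
    before = datetime.strptime("00:37:42.659943", "%H:%M:%S.%f")
    after = datetime.strptime("00:37:43.139316", "%H:%M:%S.%f")
    print(f"apparent forward step: {(after - before).total_seconds():.3f} s")  # ~0.479 s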
Sep 5 00:37:43.147441 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:37:43.148694 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:37:43.149850 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:37:43.151070 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:37:43.153286 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:37:43.159294 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:37:43.162832 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 5 00:37:43.164306 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 5 00:37:43.165532 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 5 00:37:43.176293 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:37:43.178805 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 5 00:37:43.181016 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:37:43.193969 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:37:43.197325 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:37:43.198424 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:37:43.198524 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:37:43.201319 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:37:43.291984 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:37:43.300652 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:37:43.305406 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:37:43.310011 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:37:43.311040 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:37:43.316324 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 5 00:37:43.320850 jq[1558]: false Sep 5 00:37:43.321409 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:37:43.325325 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:37:43.330759 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:37:43.332650 extend-filesystems[1559]: Found /dev/vda6 Sep 5 00:37:43.337608 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:37:43.344259 extend-filesystems[1559]: Found /dev/vda9 Sep 5 00:37:43.345608 oslogin_cache_refresh[1560]: Refreshing passwd entry cache Sep 5 00:37:43.346153 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing passwd entry cache Sep 5 00:37:43.345199 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:37:43.347353 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:37:43.347962 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 5 00:37:43.350541 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:37:43.356238 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:37:43.364378 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:37:43.366868 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:37:43.367245 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:37:43.371440 jq[1574]: true Sep 5 00:37:43.368778 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:37:43.370238 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:37:43.378558 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:37:43.379201 extend-filesystems[1559]: Checking size of /dev/vda9 Sep 5 00:37:43.378833 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:37:43.391087 update_engine[1571]: I20250905 00:37:43.383363 1571 main.cc:92] Flatcar Update Engine starting Sep 5 00:37:43.381345 oslogin_cache_refresh[1560]: Failure getting users, quitting Sep 5 00:37:43.391771 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting users, quitting Sep 5 00:37:43.391771 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 00:37:43.391771 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing group entry cache Sep 5 00:37:43.391771 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting groups, quitting Sep 5 00:37:43.391771 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 00:37:43.381370 oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 00:37:43.381437 oslogin_cache_refresh[1560]: Refreshing group entry cache Sep 5 00:37:43.388536 oslogin_cache_refresh[1560]: Failure getting groups, quitting Sep 5 00:37:43.388551 oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 00:37:43.401968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:37:43.404508 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 5 00:37:43.406672 extend-filesystems[1559]: Resized partition /dev/vda9 Sep 5 00:37:43.404778 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 5 00:37:43.413572 extend-filesystems[1596]: resize2fs 1.47.2 (1-Jan-2025) Sep 5 00:37:43.414906 jq[1580]: true Sep 5 00:37:43.415484 tar[1579]: linux-amd64/helm Sep 5 00:37:43.416441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:37:43.416730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:37:43.419064 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:37:43.424537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:37:43.434143 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:37:43.441839 dbus-daemon[1550]: [system] SELinux support is enabled Sep 5 00:37:43.442064 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 5 00:37:43.445994 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:37:43.447476 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:37:43.449186 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:37:43.449428 update_engine[1571]: I20250905 00:37:43.449366 1571 update_check_scheduler.cc:74] Next update check in 7m33s Sep 5 00:37:43.449409 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:37:43.453530 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:37:43.461327 kernel: kvm_amd: TSC scaling supported Sep 5 00:37:43.461386 kernel: kvm_amd: Nested Virtualization enabled Sep 5 00:37:43.461422 kernel: kvm_amd: Nested Paging enabled Sep 5 00:37:43.461435 kernel: kvm_amd: LBR virtualization supported Sep 5 00:37:43.462552 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 00:37:43.462574 kernel: kvm_amd: Virtual GIF supported Sep 5 00:37:43.463718 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:37:43.570192 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:37:43.599074 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:37:43.599074 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:37:43.599074 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:37:43.600002 extend-filesystems[1559]: Resized filesystem in /dev/vda9 Sep 5 00:37:43.600971 bash[1621]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:37:43.602868 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:37:43.604500 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:37:43.604848 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:37:43.706352 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (Power Button) Sep 5 00:37:43.706379 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 00:37:43.706646 systemd-logind[1570]: New seat seat0. Sep 5 00:37:43.717578 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:37:43.723468 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:37:43.725325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:37:43.733035 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 00:37:43.743182 kernel: EDAC MC: Ver: 3.0.0 Sep 5 00:37:43.768145 sshd_keygen[1609]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:37:43.801133 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:37:43.804445 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:37:43.830810 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:37:43.831133 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:37:43.834838 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
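The resize2fs figures above translate directly into byte sizes. A small illustrative calculation (not part of the log), assuming the 4 KiB block size reported in the extend-filesystems output:

    block = 4096                               # "(4k) blocks" per the resize output
    old_blocks, new_blocks = 553472, 1864699   # before and after the online resize of /dev/vda9
    print(f"before: {old_blocks * block / 2**30:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {new_blocks * block / 2**30:.2f} GiB")  # ~7.11 GiB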
Sep 5 00:37:43.903596 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:37:43.908388 containerd[1597]: time="2025-09-05T00:37:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 5 00:37:43.909211 containerd[1597]: time="2025-09-05T00:37:43.909181210Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 5 00:37:43.909408 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:37:43.912559 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 00:37:43.914402 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:37:43.924517 containerd[1597]: time="2025-09-05T00:37:43.924484323Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.562µs" Sep 5 00:37:43.924517 containerd[1597]: time="2025-09-05T00:37:43.924513869Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 5 00:37:43.924590 containerd[1597]: time="2025-09-05T00:37:43.924533305Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 5 00:37:43.924747 containerd[1597]: time="2025-09-05T00:37:43.924727860Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 5 00:37:43.924773 containerd[1597]: time="2025-09-05T00:37:43.924749771Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 5 00:37:43.924791 containerd[1597]: time="2025-09-05T00:37:43.924781641Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 00:37:43.924873 containerd[1597]: time="2025-09-05T00:37:43.924854678Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 00:37:43.924873 containerd[1597]: time="2025-09-05T00:37:43.924870177Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 00:37:43.925342 containerd[1597]: time="2025-09-05T00:37:43.925199755Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 00:37:43.925342 containerd[1597]: time="2025-09-05T00:37:43.925219181Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 00:37:43.925342 containerd[1597]: time="2025-09-05T00:37:43.925229240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 00:37:43.925342 containerd[1597]: time="2025-09-05T00:37:43.925236684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 5 00:37:43.925438 containerd[1597]: time="2025-09-05T00:37:43.925365225Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 5 00:37:43.925641 containerd[1597]: time="2025-09-05T00:37:43.925619031Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 00:37:43.925672 containerd[1597]: time="2025-09-05T00:37:43.925651993Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 00:37:43.925672 containerd[1597]: time="2025-09-05T00:37:43.925660819Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 5 00:37:43.925723 containerd[1597]: time="2025-09-05T00:37:43.925699562Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 5 00:37:43.925914 containerd[1597]: time="2025-09-05T00:37:43.925885952Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 5 00:37:43.925975 containerd[1597]: time="2025-09-05T00:37:43.925958838Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:37:43.933910 containerd[1597]: time="2025-09-05T00:37:43.933876043Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 5 00:37:43.933953 containerd[1597]: time="2025-09-05T00:37:43.933919404Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 5 00:37:43.933953 containerd[1597]: time="2025-09-05T00:37:43.933932469Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 5 00:37:43.933953 containerd[1597]: time="2025-09-05T00:37:43.933943069Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 5 00:37:43.934019 containerd[1597]: time="2025-09-05T00:37:43.933953027Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 5 00:37:43.934019 containerd[1597]: time="2025-09-05T00:37:43.933962004Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 5 00:37:43.934019 containerd[1597]: time="2025-09-05T00:37:43.933984617Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 5 00:37:43.934019 containerd[1597]: time="2025-09-05T00:37:43.933998032Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 5 00:37:43.934019 containerd[1597]: time="2025-09-05T00:37:43.934007620Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 5 00:37:43.934019 containerd[1597]: time="2025-09-05T00:37:43.934016817Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 5 00:37:43.934127 containerd[1597]: time="2025-09-05T00:37:43.934024832Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 5 00:37:43.934127 containerd[1597]: time="2025-09-05T00:37:43.934035292Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 5 00:37:43.934247 containerd[1597]: time="2025-09-05T00:37:43.934180193Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 5 00:37:43.934247 containerd[1597]: time="2025-09-05T00:37:43.934210169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers 
type=io.containerd.grpc.v1 Sep 5 00:37:43.934247 containerd[1597]: time="2025-09-05T00:37:43.934223024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 5 00:37:43.934247 containerd[1597]: time="2025-09-05T00:37:43.934234665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 5 00:37:43.934247 containerd[1597]: time="2025-09-05T00:37:43.934244173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 5 00:37:43.934347 containerd[1597]: time="2025-09-05T00:37:43.934253801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 5 00:37:43.934347 containerd[1597]: time="2025-09-05T00:37:43.934263099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 5 00:37:43.934347 containerd[1597]: time="2025-09-05T00:37:43.934274911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 5 00:37:43.934347 containerd[1597]: time="2025-09-05T00:37:43.934285090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 5 00:37:43.934347 containerd[1597]: time="2025-09-05T00:37:43.934294518Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 5 00:37:43.934347 containerd[1597]: time="2025-09-05T00:37:43.934316930Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 5 00:37:43.934538 containerd[1597]: time="2025-09-05T00:37:43.934379557Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 5 00:37:43.934538 containerd[1597]: time="2025-09-05T00:37:43.934395828Z" level=info msg="Start snapshots syncer" Sep 5 00:37:43.934538 containerd[1597]: time="2025-09-05T00:37:43.934427036Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 5 00:37:43.934696 containerd[1597]: time="2025-09-05T00:37:43.934644594Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 5 00:37:43.934854 containerd[1597]: time="2025-09-05T00:37:43.934705979Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 5 00:37:43.934854 containerd[1597]: time="2025-09-05T00:37:43.934787091Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 5 00:37:43.934943 containerd[1597]: time="2025-09-05T00:37:43.934893621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 5 00:37:43.934943 containerd[1597]: time="2025-09-05T00:37:43.934914941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 5 00:37:43.934943 containerd[1597]: time="2025-09-05T00:37:43.934925681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 5 00:37:43.934943 containerd[1597]: time="2025-09-05T00:37:43.934935159Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 5 00:37:43.935103 containerd[1597]: time="2025-09-05T00:37:43.934946069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 5 00:37:43.935103 containerd[1597]: time="2025-09-05T00:37:43.934955397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 5 00:37:43.935103 containerd[1597]: time="2025-09-05T00:37:43.934965345Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 5 00:37:43.935103 containerd[1597]: time="2025-09-05T00:37:43.934989050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 5 00:37:43.935103 containerd[1597]: 
time="2025-09-05T00:37:43.934999159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 5 00:37:43.935103 containerd[1597]: time="2025-09-05T00:37:43.935029035Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 5 00:37:43.935103 containerd[1597]: time="2025-09-05T00:37:43.935059752Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 00:37:43.935103 containerd[1597]: time="2025-09-05T00:37:43.935070743Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 00:37:43.935103 containerd[1597]: time="2025-09-05T00:37:43.935078387Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 00:37:43.935103 containerd[1597]: time="2025-09-05T00:37:43.935088256Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 00:37:43.935103 containerd[1597]: time="2025-09-05T00:37:43.935095329Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 5 00:37:43.935500 containerd[1597]: time="2025-09-05T00:37:43.935115166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 5 00:37:43.935500 containerd[1597]: time="2025-09-05T00:37:43.935143249Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 5 00:37:43.935500 containerd[1597]: time="2025-09-05T00:37:43.935184506Z" level=info msg="runtime interface created" Sep 5 00:37:43.935500 containerd[1597]: time="2025-09-05T00:37:43.935190838Z" level=info msg="created NRI interface" Sep 5 00:37:43.935500 containerd[1597]: time="2025-09-05T00:37:43.935208712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 5 00:37:43.935500 containerd[1597]: time="2025-09-05T00:37:43.935220404Z" level=info msg="Connect containerd service" Sep 5 00:37:43.935500 containerd[1597]: time="2025-09-05T00:37:43.935240822Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:37:43.936046 containerd[1597]: time="2025-09-05T00:37:43.936011747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.134540709Z" level=info msg="Start subscribing containerd event" Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.134670202Z" level=info msg="Start recovering state" Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.134834650Z" level=info msg="Start event monitor" Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.134862813Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.134872101Z" level=info msg="Start streaming server" Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.134897047Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.134901736Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.135030678Z" level=info msg="runtime interface starting up..." Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.135042750Z" level=info msg="starting plugins..." Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.134962841Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.135078397Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 5 00:37:44.136184 containerd[1597]: time="2025-09-05T00:37:44.135321643Z" level=info msg="containerd successfully booted in 0.227658s" Sep 5 00:37:44.135501 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:37:44.189639 tar[1579]: linux-amd64/LICENSE Sep 5 00:37:44.189704 tar[1579]: linux-amd64/README.md Sep 5 00:37:44.216684 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:37:44.873486 systemd-networkd[1486]: eth0: Gained IPv6LL Sep 5 00:37:44.876966 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:37:44.879220 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:37:44.882547 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:37:44.885407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:37:44.900752 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:37:44.927668 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:37:44.929432 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:37:44.929770 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:37:44.932341 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:37:46.098696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:37:46.100469 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:37:46.101919 systemd[1]: Startup finished in 3.458s (kernel) + 7.353s (initrd) + 5.686s (userspace) = 16.498s. Sep 5 00:37:46.104724 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:37:46.369752 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:37:46.371063 systemd[1]: Started sshd@0-10.0.0.129:22-10.0.0.1:46292.service - OpenSSH per-connection server daemon (10.0.0.1:46292). Sep 5 00:37:46.459526 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 46292 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:37:46.461133 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:46.468459 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:37:46.469647 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:37:46.476177 systemd-logind[1570]: New session 1 of user core. Sep 5 00:37:46.502268 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
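The "Startup finished" line above breaks the boot into kernel, initrd, and userspace phases. A quick illustrative check of the arithmetic (not part of the log):

    kernel, initrd, userspace = 3.458, 7.353, 5.686   # seconds, as logged above
    print(kernel + initrd + userspace)                # 16.497 s; the logged 16.498 s total is
                                                      # computed from unrounded values, so the
                                                      # rounded parts can be off by a millisecond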
Sep 5 00:37:46.506222 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:37:46.527897 (systemd)[1716]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:37:46.530937 systemd-logind[1570]: New session c1 of user core. Sep 5 00:37:46.745090 systemd[1716]: Queued start job for default target default.target. Sep 5 00:37:46.782561 systemd[1716]: Created slice app.slice - User Application Slice. Sep 5 00:37:46.782590 systemd[1716]: Reached target paths.target - Paths. Sep 5 00:37:46.782636 systemd[1716]: Reached target timers.target - Timers. Sep 5 00:37:46.784271 systemd[1716]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:37:46.797995 kubelet[1700]: E0905 00:37:46.797921 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:37:46.799954 systemd[1716]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:37:46.800105 systemd[1716]: Reached target sockets.target - Sockets. Sep 5 00:37:46.800179 systemd[1716]: Reached target basic.target - Basic System. Sep 5 00:37:46.800225 systemd[1716]: Reached target default.target - Main User Target. Sep 5 00:37:46.800266 systemd[1716]: Startup finished in 258ms. Sep 5 00:37:46.800614 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:37:46.801754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:37:46.801966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:37:46.802321 systemd[1]: kubelet.service: Consumed 1.749s CPU time, 266.3M memory peak. Sep 5 00:37:46.810546 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:37:46.877622 systemd[1]: Started sshd@1-10.0.0.129:22-10.0.0.1:46298.service - OpenSSH per-connection server daemon (10.0.0.1:46298). Sep 5 00:37:46.923869 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 46298 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:37:46.925630 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:46.930536 systemd-logind[1570]: New session 2 of user core. Sep 5 00:37:46.945370 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:37:47.001581 sshd[1730]: Connection closed by 10.0.0.1 port 46298 Sep 5 00:37:47.002086 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:47.018000 systemd[1]: sshd@1-10.0.0.129:22-10.0.0.1:46298.service: Deactivated successfully. Sep 5 00:37:47.020693 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:37:47.021712 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:37:47.025950 systemd[1]: Started sshd@2-10.0.0.129:22-10.0.0.1:46312.service - OpenSSH per-connection server daemon (10.0.0.1:46312). Sep 5 00:37:47.026882 systemd-logind[1570]: Removed session 2. Sep 5 00:37:47.083031 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 46312 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:37:47.084840 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:47.090141 systemd-logind[1570]: New session 3 of user core. 
Sep 5 00:37:47.100365 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:37:47.150668 sshd[1738]: Connection closed by 10.0.0.1 port 46312 Sep 5 00:37:47.151196 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:47.170388 systemd[1]: sshd@2-10.0.0.129:22-10.0.0.1:46312.service: Deactivated successfully. Sep 5 00:37:47.172325 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:37:47.173066 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:37:47.176208 systemd[1]: Started sshd@3-10.0.0.129:22-10.0.0.1:46320.service - OpenSSH per-connection server daemon (10.0.0.1:46320). Sep 5 00:37:47.176849 systemd-logind[1570]: Removed session 3. Sep 5 00:37:47.232294 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 46320 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:37:47.233893 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:47.238851 systemd-logind[1570]: New session 4 of user core. Sep 5 00:37:47.245287 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:37:47.299457 sshd[1746]: Connection closed by 10.0.0.1 port 46320 Sep 5 00:37:47.299772 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:47.313827 systemd[1]: sshd@3-10.0.0.129:22-10.0.0.1:46320.service: Deactivated successfully. Sep 5 00:37:47.315668 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:37:47.316482 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:37:47.319503 systemd[1]: Started sshd@4-10.0.0.129:22-10.0.0.1:46324.service - OpenSSH per-connection server daemon (10.0.0.1:46324). Sep 5 00:37:47.320014 systemd-logind[1570]: Removed session 4. Sep 5 00:37:47.387739 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 46324 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:37:47.389221 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:47.394058 systemd-logind[1570]: New session 5 of user core. Sep 5 00:37:47.407291 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:37:47.468228 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:37:47.468555 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:37:47.489268 sudo[1755]: pam_unix(sudo:session): session closed for user root Sep 5 00:37:47.490752 sshd[1754]: Connection closed by 10.0.0.1 port 46324 Sep 5 00:37:47.491298 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:47.503989 systemd[1]: sshd@4-10.0.0.129:22-10.0.0.1:46324.service: Deactivated successfully. Sep 5 00:37:47.505604 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:37:47.506336 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:37:47.509030 systemd[1]: Started sshd@5-10.0.0.129:22-10.0.0.1:46338.service - OpenSSH per-connection server daemon (10.0.0.1:46338). Sep 5 00:37:47.509600 systemd-logind[1570]: Removed session 5. Sep 5 00:37:47.565770 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 46338 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:37:47.567989 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:47.575136 systemd-logind[1570]: New session 6 of user core. 
Sep 5 00:37:47.589508 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:37:47.645956 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:37:47.646299 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:37:47.652888 sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 5 00:37:47.659896 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 5 00:37:47.660240 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:37:47.670671 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:37:47.718759 augenrules[1787]: No rules Sep 5 00:37:47.720342 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:37:47.720648 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:37:47.721929 sudo[1764]: pam_unix(sudo:session): session closed for user root Sep 5 00:37:47.723695 sshd[1763]: Connection closed by 10.0.0.1 port 46338 Sep 5 00:37:47.724145 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:47.737249 systemd[1]: sshd@5-10.0.0.129:22-10.0.0.1:46338.service: Deactivated successfully. Sep 5 00:37:47.739265 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:37:47.740055 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:37:47.743058 systemd[1]: Started sshd@6-10.0.0.129:22-10.0.0.1:46348.service - OpenSSH per-connection server daemon (10.0.0.1:46348). Sep 5 00:37:47.743873 systemd-logind[1570]: Removed session 6. Sep 5 00:37:47.804590 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 46348 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:37:47.805855 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:47.811688 systemd-logind[1570]: New session 7 of user core. Sep 5 00:37:47.823381 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:37:47.877640 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:37:47.877947 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:37:48.477611 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:37:48.490585 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:37:49.174302 dockerd[1820]: time="2025-09-05T00:37:49.174195410Z" level=info msg="Starting up" Sep 5 00:37:49.175577 dockerd[1820]: time="2025-09-05T00:37:49.175533139Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 5 00:37:49.779853 dockerd[1820]: time="2025-09-05T00:37:49.779795220Z" level=info msg="Loading containers: start." Sep 5 00:37:49.790177 kernel: Initializing XFRM netlink socket Sep 5 00:37:50.137461 systemd-networkd[1486]: docker0: Link UP Sep 5 00:37:50.181187 dockerd[1820]: time="2025-09-05T00:37:50.181127327Z" level=info msg="Loading containers: done." Sep 5 00:37:50.199373 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2647040692-merged.mount: Deactivated successfully. 
Sep 5 00:37:50.199904 dockerd[1820]: time="2025-09-05T00:37:50.199848780Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:37:50.199970 dockerd[1820]: time="2025-09-05T00:37:50.199957915Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 5 00:37:50.200123 dockerd[1820]: time="2025-09-05T00:37:50.200104480Z" level=info msg="Initializing buildkit" Sep 5 00:37:50.231598 dockerd[1820]: time="2025-09-05T00:37:50.231555711Z" level=info msg="Completed buildkit initialization" Sep 5 00:37:50.236190 dockerd[1820]: time="2025-09-05T00:37:50.236140577Z" level=info msg="Daemon has completed initialization" Sep 5 00:37:50.236827 dockerd[1820]: time="2025-09-05T00:37:50.236226208Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:37:50.236447 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:37:51.215070 containerd[1597]: time="2025-09-05T00:37:51.214998095Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 00:37:52.184674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317839023.mount: Deactivated successfully. Sep 5 00:37:53.510669 containerd[1597]: time="2025-09-05T00:37:53.510582465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:53.511244 containerd[1597]: time="2025-09-05T00:37:53.511202598Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 5 00:37:53.512503 containerd[1597]: time="2025-09-05T00:37:53.512455757Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:53.515208 containerd[1597]: time="2025-09-05T00:37:53.515124481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:53.618730 containerd[1597]: time="2025-09-05T00:37:53.618626678Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.403548452s" Sep 5 00:37:53.618730 containerd[1597]: time="2025-09-05T00:37:53.618713551Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 5 00:37:53.620110 containerd[1597]: time="2025-09-05T00:37:53.620072990Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 00:37:55.208946 containerd[1597]: time="2025-09-05T00:37:55.208874903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:55.209838 containerd[1597]: time="2025-09-05T00:37:55.209792293Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: 
active requests=0, bytes read=24714681" Sep 5 00:37:55.211115 containerd[1597]: time="2025-09-05T00:37:55.211075218Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:55.216284 containerd[1597]: time="2025-09-05T00:37:55.216218793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:55.217027 containerd[1597]: time="2025-09-05T00:37:55.216987304Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.596871223s" Sep 5 00:37:55.217078 containerd[1597]: time="2025-09-05T00:37:55.217032719Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 5 00:37:55.217688 containerd[1597]: time="2025-09-05T00:37:55.217655867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 00:37:56.482739 containerd[1597]: time="2025-09-05T00:37:56.482643228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:56.483825 containerd[1597]: time="2025-09-05T00:37:56.483773628Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 5 00:37:56.484942 containerd[1597]: time="2025-09-05T00:37:56.484916260Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:56.487538 containerd[1597]: time="2025-09-05T00:37:56.487504072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:56.488369 containerd[1597]: time="2025-09-05T00:37:56.488330291Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.270637936s" Sep 5 00:37:56.488369 containerd[1597]: time="2025-09-05T00:37:56.488366890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 5 00:37:56.488952 containerd[1597]: time="2025-09-05T00:37:56.488899098Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 00:37:56.831506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:37:56.833402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 5 00:37:57.087212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:37:57.092836 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:37:57.170201 kubelet[2100]: E0905 00:37:57.170086 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:37:57.177248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:37:57.177486 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:37:57.177928 systemd[1]: kubelet.service: Consumed 260ms CPU time, 111.3M memory peak. Sep 5 00:37:57.838793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834590483.mount: Deactivated successfully. Sep 5 00:37:58.784481 containerd[1597]: time="2025-09-05T00:37:58.784421111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:58.785254 containerd[1597]: time="2025-09-05T00:37:58.785217915Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 5 00:37:58.786490 containerd[1597]: time="2025-09-05T00:37:58.786405913Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:58.788110 containerd[1597]: time="2025-09-05T00:37:58.788074030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:58.788624 containerd[1597]: time="2025-09-05T00:37:58.788592042Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.299663308s" Sep 5 00:37:58.788672 containerd[1597]: time="2025-09-05T00:37:58.788623861Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 5 00:37:58.789223 containerd[1597]: time="2025-09-05T00:37:58.789191335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 00:37:59.833275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863648202.mount: Deactivated successfully. 
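Each "Pulled image … in" entry reports a byte count and a duration, so dividing the two gives a rough effective pull rate. An illustrative Python sketch using the figures logged so far (the reported sizes are image sizes rather than necessarily bytes transferred, so treat the MB/s values as approximate):

    pulls = {  # image: (reported size in bytes, pull duration in seconds)
        "kube-apiserver:v1.31.12": (28076431, 2.403548452),
        "kube-controller-manager:v1.31.12": (26317875, 1.596871223),
        "kube-scheduler:v1.31.12": (20385639, 1.270637936),
        "kube-proxy:v1.31.12": (30383274, 2.299663308),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image}: {size / secs / 1e6:.1f} MB/s")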
Sep 5 00:38:00.536107 containerd[1597]: time="2025-09-05T00:38:00.536024196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:00.536663 containerd[1597]: time="2025-09-05T00:38:00.536617108Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 5 00:38:00.537784 containerd[1597]: time="2025-09-05T00:38:00.537728231Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:00.540251 containerd[1597]: time="2025-09-05T00:38:00.540220014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:00.540976 containerd[1597]: time="2025-09-05T00:38:00.540942578Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.751715747s" Sep 5 00:38:00.540976 containerd[1597]: time="2025-09-05T00:38:00.540975600Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 5 00:38:00.541733 containerd[1597]: time="2025-09-05T00:38:00.541700970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:38:01.012121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1603854937.mount: Deactivated successfully. 
Sep 5 00:38:01.017850 containerd[1597]: time="2025-09-05T00:38:01.017798663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:38:01.018637 containerd[1597]: time="2025-09-05T00:38:01.018610365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:38:01.019763 containerd[1597]: time="2025-09-05T00:38:01.019736577Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:38:01.021731 containerd[1597]: time="2025-09-05T00:38:01.021683127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:38:01.022426 containerd[1597]: time="2025-09-05T00:38:01.022388469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 480.649197ms" Sep 5 00:38:01.022501 containerd[1597]: time="2025-09-05T00:38:01.022431380Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:38:01.022932 containerd[1597]: time="2025-09-05T00:38:01.022905278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 00:38:01.523266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988366213.mount: Deactivated successfully. 
Sep 5 00:38:04.542266 containerd[1597]: time="2025-09-05T00:38:04.542155882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:04.542916 containerd[1597]: time="2025-09-05T00:38:04.542880461Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 5 00:38:04.544123 containerd[1597]: time="2025-09-05T00:38:04.544091932Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:04.546565 containerd[1597]: time="2025-09-05T00:38:04.546534492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:04.547452 containerd[1597]: time="2025-09-05T00:38:04.547413130Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.524478126s" Sep 5 00:38:04.547452 containerd[1597]: time="2025-09-05T00:38:04.547451161Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 5 00:38:06.706563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:38:06.706737 systemd[1]: kubelet.service: Consumed 260ms CPU time, 111.3M memory peak. Sep 5 00:38:06.708960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:38:06.734106 systemd[1]: Reload requested from client PID 2257 ('systemctl') (unit session-7.scope)... Sep 5 00:38:06.734122 systemd[1]: Reloading... Sep 5 00:38:06.818208 zram_generator::config[2302]: No configuration found. Sep 5 00:38:07.058692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:38:07.177314 systemd[1]: Reloading finished in 442 ms. Sep 5 00:38:07.251886 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:38:07.252013 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:38:07.252380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:38:07.252432 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.2M memory peak. Sep 5 00:38:07.254216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:38:07.440061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:38:07.454461 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:38:07.491235 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:38:07.491235 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 5 00:38:07.491235 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:38:07.491659 kubelet[2347]: I0905 00:38:07.491321 2347 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:38:07.905499 kubelet[2347]: I0905 00:38:07.905443 2347 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 00:38:07.905499 kubelet[2347]: I0905 00:38:07.905486 2347 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:38:07.905759 kubelet[2347]: I0905 00:38:07.905743 2347 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 00:38:07.923388 kubelet[2347]: E0905 00:38:07.923323 2347 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:38:07.925414 kubelet[2347]: I0905 00:38:07.925369 2347 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:38:07.931498 kubelet[2347]: I0905 00:38:07.931465 2347 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 00:38:07.938934 kubelet[2347]: I0905 00:38:07.937991 2347 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:38:07.938934 kubelet[2347]: I0905 00:38:07.938125 2347 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 00:38:07.938934 kubelet[2347]: I0905 00:38:07.938277 2347 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:38:07.938934 kubelet[2347]: I0905 00:38:07.938315 2347 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:38:07.939183 kubelet[2347]: I0905 00:38:07.938594 2347 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:38:07.939183 kubelet[2347]: I0905 00:38:07.938603 2347 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 00:38:07.939183 kubelet[2347]: I0905 00:38:07.938742 2347 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:38:07.942526 kubelet[2347]: I0905 00:38:07.942501 2347 kubelet.go:408] "Attempting to sync node with API server" Sep 5 00:38:07.942526 kubelet[2347]: I0905 00:38:07.942536 2347 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:38:07.942614 kubelet[2347]: I0905 00:38:07.942573 2347 kubelet.go:314] "Adding apiserver pod source" Sep 5 00:38:07.942614 kubelet[2347]: I0905 00:38:07.942592 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:38:07.943898 kubelet[2347]: W0905 00:38:07.943815 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 5 00:38:07.943963 kubelet[2347]: E0905 00:38:07.943899 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:38:07.944419 kubelet[2347]: W0905 00:38:07.944360 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 5 00:38:07.944419 kubelet[2347]: E0905 00:38:07.944414 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:38:07.945366 kubelet[2347]: I0905 00:38:07.945326 2347 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 5 00:38:07.945749 kubelet[2347]: I0905 00:38:07.945732 2347 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:38:07.945821 kubelet[2347]: W0905 00:38:07.945800 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:38:07.947918 kubelet[2347]: I0905 00:38:07.947871 2347 server.go:1274] "Started kubelet" Sep 5 00:38:07.948007 kubelet[2347]: I0905 00:38:07.947942 2347 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:38:07.948192 kubelet[2347]: I0905 00:38:07.948094 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:38:07.948727 kubelet[2347]: I0905 00:38:07.948711 2347 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:38:07.949416 kubelet[2347]: I0905 00:38:07.948837 2347 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:38:07.950220 kubelet[2347]: I0905 00:38:07.950193 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:38:07.950748 kubelet[2347]: I0905 00:38:07.950722 2347 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:38:07.952211 kubelet[2347]: E0905 00:38:07.952177 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:07.952262 kubelet[2347]: I0905 00:38:07.952222 2347 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:38:07.952446 kubelet[2347]: I0905 00:38:07.952421 2347 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:38:07.952515 kubelet[2347]: I0905 00:38:07.952496 2347 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:38:07.952823 kubelet[2347]: W0905 00:38:07.952783 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 5 00:38:07.952875 kubelet[2347]: E0905 00:38:07.952825 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:38:07.953513 kubelet[2347]: E0905 00:38:07.953476 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="200ms" Sep 5 00:38:07.953554 kubelet[2347]: I0905 00:38:07.953533 2347 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:38:07.953636 kubelet[2347]: I0905 00:38:07.953610 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:38:07.954553 kubelet[2347]: E0905 00:38:07.954327 2347 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:38:07.954553 kubelet[2347]: E0905 00:38:07.953436 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623beba7e19329 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:38:07.947846441 +0000 UTC m=+0.489644254,LastTimestamp:2025-09-05 00:38:07.947846441 +0000 UTC m=+0.489644254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:38:07.955150 kubelet[2347]: I0905 00:38:07.955115 2347 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:38:07.969520 kubelet[2347]: I0905 00:38:07.969449 2347 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:38:07.969622 kubelet[2347]: I0905 00:38:07.969610 2347 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:38:07.969698 kubelet[2347]: I0905 00:38:07.969687 2347 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:38:07.971434 kubelet[2347]: I0905 00:38:07.971390 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:38:07.973367 kubelet[2347]: I0905 00:38:07.973202 2347 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:38:07.973367 kubelet[2347]: I0905 00:38:07.973254 2347 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:38:07.973367 kubelet[2347]: I0905 00:38:07.973291 2347 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:38:07.973367 kubelet[2347]: E0905 00:38:07.973335 2347 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:38:07.973794 kubelet[2347]: W0905 00:38:07.973748 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 5 00:38:07.973850 kubelet[2347]: E0905 00:38:07.973794 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:38:08.053138 kubelet[2347]: E0905 00:38:08.053091 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:08.074486 kubelet[2347]: E0905 00:38:08.074425 2347 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:38:08.153798 kubelet[2347]: E0905 00:38:08.153743 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:08.154232 kubelet[2347]: E0905 00:38:08.154200 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="400ms" Sep 5 00:38:08.254682 kubelet[2347]: E0905 00:38:08.254560 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:08.274824 kubelet[2347]: E0905 00:38:08.274758 2347 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:38:08.355298 kubelet[2347]: E0905 00:38:08.355230 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:08.433590 kubelet[2347]: I0905 00:38:08.433543 2347 policy_none.go:49] "None policy: Start" Sep 5 00:38:08.434263 kubelet[2347]: I0905 00:38:08.434230 2347 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:38:08.434263 kubelet[2347]: I0905 00:38:08.434259 2347 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:38:08.443121 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:38:08.456188 kubelet[2347]: E0905 00:38:08.456153 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:08.458659 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:38:08.462696 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
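The repeated "Failed to ensure lease exists, will retry" errors back off with a doubling interval: 200ms, then 400ms, then 800ms, and later 1.6s. A minimal sketch of that doubling pattern; the 7s ceiling below is an assumption for illustration, not a value taken from the log:

```go
// Doubling retry interval in the spirit of the lease errors above.
// Starts at the 200ms seen in the log; the cap is an assumed value.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed upper bound for the sketch
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("retry %d after %s\n", attempt, interval) // 200ms, 400ms, 800ms, 1.6s, ...
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```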
Sep 5 00:38:08.477203 kubelet[2347]: I0905 00:38:08.477184 2347 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:38:08.477431 kubelet[2347]: I0905 00:38:08.477403 2347 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:38:08.477477 kubelet[2347]: I0905 00:38:08.477423 2347 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:38:08.477768 kubelet[2347]: I0905 00:38:08.477753 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:38:08.478899 kubelet[2347]: E0905 00:38:08.478880 2347 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:38:08.555636 kubelet[2347]: E0905 00:38:08.555551 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="800ms" Sep 5 00:38:08.578517 kubelet[2347]: I0905 00:38:08.578487 2347 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:38:08.578958 kubelet[2347]: E0905 00:38:08.578884 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Sep 5 00:38:08.685578 systemd[1]: Created slice kubepods-burstable-pod7a064baec0e5dadb0c1671c475ed37ce.slice - libcontainer container kubepods-burstable-pod7a064baec0e5dadb0c1671c475ed37ce.slice. Sep 5 00:38:08.706716 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 5 00:38:08.710654 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. 
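Every reflector list, lease request and node registration above fails with "dial tcp 10.0.0.129:6443: connect: connection refused" because the kubelet comes up before its own static kube-apiserver pod is serving. A tiny probe in the same spirit, with the address copied from the log and an arbitrary timeout:

```go
// Connectivity probe for the API server endpoint the kubelet keeps dialing.
// The address comes from the log; the timeout is an arbitrary choice.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.129:6443", 2*time.Second)
	if err != nil {
		// e.g. "connect: connection refused" while the apiserver pod is still starting
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```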
Sep 5 00:38:08.757600 kubelet[2347]: I0905 00:38:08.757578 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:38:08.757680 kubelet[2347]: I0905 00:38:08.757606 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a064baec0e5dadb0c1671c475ed37ce-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a064baec0e5dadb0c1671c475ed37ce\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:38:08.757680 kubelet[2347]: I0905 00:38:08.757625 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a064baec0e5dadb0c1671c475ed37ce-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a064baec0e5dadb0c1671c475ed37ce\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:38:08.757680 kubelet[2347]: I0905 00:38:08.757642 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:38:08.757680 kubelet[2347]: I0905 00:38:08.757659 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:38:08.757771 kubelet[2347]: I0905 00:38:08.757686 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a064baec0e5dadb0c1671c475ed37ce-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a064baec0e5dadb0c1671c475ed37ce\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:38:08.757771 kubelet[2347]: I0905 00:38:08.757702 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:38:08.757771 kubelet[2347]: I0905 00:38:08.757726 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:38:08.757771 kubelet[2347]: I0905 00:38:08.757752 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 5 00:38:08.781114 kubelet[2347]: I0905 00:38:08.781082 2347 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:38:08.781556 kubelet[2347]: E0905 00:38:08.781517 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Sep 5 00:38:08.806151 kubelet[2347]: W0905 00:38:08.806052 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 5 00:38:08.806151 kubelet[2347]: E0905 00:38:08.806105 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:38:09.004448 kubelet[2347]: E0905 00:38:09.004420 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:09.005020 containerd[1597]: time="2025-09-05T00:38:09.004974997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a064baec0e5dadb0c1671c475ed37ce,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:09.010137 kubelet[2347]: E0905 00:38:09.010116 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:09.010506 containerd[1597]: time="2025-09-05T00:38:09.010467315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:09.012695 kubelet[2347]: E0905 00:38:09.012670 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:09.012939 containerd[1597]: time="2025-09-05T00:38:09.012906909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:09.149819 kubelet[2347]: W0905 00:38:09.149755 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 5 00:38:09.149880 kubelet[2347]: E0905 00:38:09.149835 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:38:09.183670 kubelet[2347]: I0905 00:38:09.183651 2347 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:38:09.183939 kubelet[2347]: E0905 00:38:09.183904 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Sep 5 00:38:09.278191 containerd[1597]: time="2025-09-05T00:38:09.277625667Z" level=info msg="connecting to shim 306d793f4a6c4662521d3fba2a8f568e372ae9b7ddfcb3d1adde82655643a2c9" address="unix:///run/containerd/s/bf153d3e5a03ee03c9de4e36da10a245512c8b398ccdea6493276215e9d15eb4" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:09.278191 containerd[1597]: time="2025-09-05T00:38:09.277758205Z" level=info msg="connecting to shim 8385c4cdbceb43dd1e1cbe19f80751a44813c918ca91ba9f9d4611b14faa4488" address="unix:///run/containerd/s/2dfb0f1c3c70780aa16f9bdd312ad45842e9302525883d983c6cdea6e829ed62" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:09.278391 containerd[1597]: time="2025-09-05T00:38:09.278345777Z" level=info msg="connecting to shim c77ca86a4793486e9139e4aab0e1aa9b69c475fd652924d29c939d777942c674" address="unix:///run/containerd/s/36f67eed968cdbc00f1a7f2595b1d42d5486268735a7fffd326874f22869580b" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:09.312308 systemd[1]: Started cri-containerd-306d793f4a6c4662521d3fba2a8f568e372ae9b7ddfcb3d1adde82655643a2c9.scope - libcontainer container 306d793f4a6c4662521d3fba2a8f568e372ae9b7ddfcb3d1adde82655643a2c9. Sep 5 00:38:09.317227 systemd[1]: Started cri-containerd-8385c4cdbceb43dd1e1cbe19f80751a44813c918ca91ba9f9d4611b14faa4488.scope - libcontainer container 8385c4cdbceb43dd1e1cbe19f80751a44813c918ca91ba9f9d4611b14faa4488. Sep 5 00:38:09.318837 systemd[1]: Started cri-containerd-c77ca86a4793486e9139e4aab0e1aa9b69c475fd652924d29c939d777942c674.scope - libcontainer container c77ca86a4793486e9139e4aab0e1aa9b69c475fd652924d29c939d777942c674. Sep 5 00:38:09.356760 kubelet[2347]: E0905 00:38:09.356698 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="1.6s" Sep 5 00:38:09.372186 containerd[1597]: time="2025-09-05T00:38:09.372101529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"306d793f4a6c4662521d3fba2a8f568e372ae9b7ddfcb3d1adde82655643a2c9\"" Sep 5 00:38:09.375205 kubelet[2347]: E0905 00:38:09.374640 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:09.376048 containerd[1597]: time="2025-09-05T00:38:09.375994939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a064baec0e5dadb0c1671c475ed37ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"8385c4cdbceb43dd1e1cbe19f80751a44813c918ca91ba9f9d4611b14faa4488\"" Sep 5 00:38:09.376676 kubelet[2347]: E0905 00:38:09.376644 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:09.377279 containerd[1597]: time="2025-09-05T00:38:09.377247147Z" level=info msg="CreateContainer within sandbox \"306d793f4a6c4662521d3fba2a8f568e372ae9b7ddfcb3d1adde82655643a2c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:38:09.378086 containerd[1597]: time="2025-09-05T00:38:09.378059781Z" level=info msg="CreateContainer within sandbox 
\"8385c4cdbceb43dd1e1cbe19f80751a44813c918ca91ba9f9d4611b14faa4488\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:38:09.379537 containerd[1597]: time="2025-09-05T00:38:09.379509369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c77ca86a4793486e9139e4aab0e1aa9b69c475fd652924d29c939d777942c674\"" Sep 5 00:38:09.380502 kubelet[2347]: E0905 00:38:09.380469 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:09.381752 containerd[1597]: time="2025-09-05T00:38:09.381726606Z" level=info msg="CreateContainer within sandbox \"c77ca86a4793486e9139e4aab0e1aa9b69c475fd652924d29c939d777942c674\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:38:09.389782 containerd[1597]: time="2025-09-05T00:38:09.388948778Z" level=info msg="Container 23c825e797daed6ea5dd4f6be89c3405bb45cb3275c5d1f4a08c8d6cc4827990: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:09.391572 containerd[1597]: time="2025-09-05T00:38:09.391546458Z" level=info msg="Container deb0187570b36646343ef209c293288fa22160e7ac3f08c57159b8394f6a8b32: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:09.395657 containerd[1597]: time="2025-09-05T00:38:09.395613334Z" level=info msg="CreateContainer within sandbox \"306d793f4a6c4662521d3fba2a8f568e372ae9b7ddfcb3d1adde82655643a2c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"23c825e797daed6ea5dd4f6be89c3405bb45cb3275c5d1f4a08c8d6cc4827990\"" Sep 5 00:38:09.396193 containerd[1597]: time="2025-09-05T00:38:09.396145642Z" level=info msg="StartContainer for \"23c825e797daed6ea5dd4f6be89c3405bb45cb3275c5d1f4a08c8d6cc4827990\"" Sep 5 00:38:09.397211 containerd[1597]: time="2025-09-05T00:38:09.397181764Z" level=info msg="Container bfc4607228fad21b92a98770b91b401f9bf416348ced881854c4d60a6bdbdd8e: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:09.397492 containerd[1597]: time="2025-09-05T00:38:09.397459505Z" level=info msg="connecting to shim 23c825e797daed6ea5dd4f6be89c3405bb45cb3275c5d1f4a08c8d6cc4827990" address="unix:///run/containerd/s/bf153d3e5a03ee03c9de4e36da10a245512c8b398ccdea6493276215e9d15eb4" protocol=ttrpc version=3 Sep 5 00:38:09.403609 containerd[1597]: time="2025-09-05T00:38:09.403519478Z" level=info msg="CreateContainer within sandbox \"8385c4cdbceb43dd1e1cbe19f80751a44813c918ca91ba9f9d4611b14faa4488\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"deb0187570b36646343ef209c293288fa22160e7ac3f08c57159b8394f6a8b32\"" Sep 5 00:38:09.404031 containerd[1597]: time="2025-09-05T00:38:09.404001752Z" level=info msg="StartContainer for \"deb0187570b36646343ef209c293288fa22160e7ac3f08c57159b8394f6a8b32\"" Sep 5 00:38:09.405299 containerd[1597]: time="2025-09-05T00:38:09.405271793Z" level=info msg="connecting to shim deb0187570b36646343ef209c293288fa22160e7ac3f08c57159b8394f6a8b32" address="unix:///run/containerd/s/2dfb0f1c3c70780aa16f9bdd312ad45842e9302525883d983c6cdea6e829ed62" protocol=ttrpc version=3 Sep 5 00:38:09.406550 containerd[1597]: time="2025-09-05T00:38:09.406486912Z" level=info msg="CreateContainer within sandbox \"c77ca86a4793486e9139e4aab0e1aa9b69c475fd652924d29c939d777942c674\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"bfc4607228fad21b92a98770b91b401f9bf416348ced881854c4d60a6bdbdd8e\"" Sep 5 00:38:09.407592 containerd[1597]: time="2025-09-05T00:38:09.407564993Z" level=info msg="StartContainer for \"bfc4607228fad21b92a98770b91b401f9bf416348ced881854c4d60a6bdbdd8e\"" Sep 5 00:38:09.409590 containerd[1597]: time="2025-09-05T00:38:09.409396267Z" level=info msg="connecting to shim bfc4607228fad21b92a98770b91b401f9bf416348ced881854c4d60a6bdbdd8e" address="unix:///run/containerd/s/36f67eed968cdbc00f1a7f2595b1d42d5486268735a7fffd326874f22869580b" protocol=ttrpc version=3 Sep 5 00:38:09.418345 systemd[1]: Started cri-containerd-23c825e797daed6ea5dd4f6be89c3405bb45cb3275c5d1f4a08c8d6cc4827990.scope - libcontainer container 23c825e797daed6ea5dd4f6be89c3405bb45cb3275c5d1f4a08c8d6cc4827990. Sep 5 00:38:09.431555 kubelet[2347]: W0905 00:38:09.431493 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 5 00:38:09.431659 kubelet[2347]: E0905 00:38:09.431561 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:38:09.434315 systemd[1]: Started cri-containerd-deb0187570b36646343ef209c293288fa22160e7ac3f08c57159b8394f6a8b32.scope - libcontainer container deb0187570b36646343ef209c293288fa22160e7ac3f08c57159b8394f6a8b32. Sep 5 00:38:09.438258 systemd[1]: Started cri-containerd-bfc4607228fad21b92a98770b91b401f9bf416348ced881854c4d60a6bdbdd8e.scope - libcontainer container bfc4607228fad21b92a98770b91b401f9bf416348ced881854c4d60a6bdbdd8e. 
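The three static pods follow the same CRI sequence in the records above: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer runs it. The sketch below models only that ordering, using a hypothetical trimmed-down interface and a fake runtime; it is not the real CRI API or a containerd client:

```go
// Illustration of the RunPodSandbox -> CreateContainer -> StartContainer ordering
// visible in the log. criRuntime and fakeRuntime are local stand-ins, not real APIs.
package main

import "fmt"

type criRuntime interface {
	RunPodSandbox(pod string) (string, error)
	CreateContainer(sandboxID, name string) (string, error)
	StartContainer(containerID string) error
}

// fakeRuntime just echoes ids so the flow can be exercised without a runtime.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}
func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) { return sb + "/" + name, nil }
func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

func startStaticPod(rt criRuntime, pod, container string) error {
	sb, err := rt.RunPodSandbox(pod) // "RunPodSandbox ... returns sandbox id"
	if err != nil {
		return err
	}
	id, err := rt.CreateContainer(sb, container) // "CreateContainer within sandbox ..."
	if err != nil {
		return err
	}
	return rt.StartContainer(id) // "StartContainer ... returns successfully"
}

func main() {
	rt := &fakeRuntime{}
	for _, p := range []string{"kube-apiserver-localhost", "kube-scheduler-localhost", "kube-controller-manager-localhost"} {
		if err := startStaticPod(rt, p, p); err != nil {
			fmt.Println("error:", err)
		}
	}
}
```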
Sep 5 00:38:09.489197 containerd[1597]: time="2025-09-05T00:38:09.486366346Z" level=info msg="StartContainer for \"23c825e797daed6ea5dd4f6be89c3405bb45cb3275c5d1f4a08c8d6cc4827990\" returns successfully" Sep 5 00:38:09.494218 containerd[1597]: time="2025-09-05T00:38:09.494138047Z" level=info msg="StartContainer for \"deb0187570b36646343ef209c293288fa22160e7ac3f08c57159b8394f6a8b32\" returns successfully" Sep 5 00:38:09.504209 containerd[1597]: time="2025-09-05T00:38:09.504147195Z" level=info msg="StartContainer for \"bfc4607228fad21b92a98770b91b401f9bf416348ced881854c4d60a6bdbdd8e\" returns successfully" Sep 5 00:38:09.512016 kubelet[2347]: W0905 00:38:09.511971 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 5 00:38:09.512101 kubelet[2347]: E0905 00:38:09.512019 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:38:09.983446 kubelet[2347]: E0905 00:38:09.983412 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:09.986590 kubelet[2347]: E0905 00:38:09.983832 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:09.986751 kubelet[2347]: I0905 00:38:09.986738 2347 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:38:09.988101 kubelet[2347]: E0905 00:38:09.988073 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:10.478181 kubelet[2347]: I0905 00:38:10.478123 2347 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 00:38:10.478548 kubelet[2347]: E0905 00:38:10.478430 2347 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 00:38:10.492294 kubelet[2347]: E0905 00:38:10.492251 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:10.592906 kubelet[2347]: E0905 00:38:10.592831 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:10.693503 kubelet[2347]: E0905 00:38:10.693450 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:10.794286 kubelet[2347]: E0905 00:38:10.794108 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:10.894894 kubelet[2347]: E0905 00:38:10.894803 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:10.993928 kubelet[2347]: E0905 00:38:10.993890 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:10.995903 kubelet[2347]: E0905 00:38:10.995873 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:11.097244 kubelet[2347]: E0905 00:38:11.097179 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:11.197979 kubelet[2347]: E0905 00:38:11.197854 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:11.298533 kubelet[2347]: E0905 00:38:11.298458 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:11.399521 kubelet[2347]: E0905 00:38:11.399086 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:11.500386 kubelet[2347]: E0905 00:38:11.500326 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:11.601022 kubelet[2347]: E0905 00:38:11.600962 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:11.701747 kubelet[2347]: E0905 00:38:11.701568 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:38:11.945414 kubelet[2347]: I0905 00:38:11.945331 2347 apiserver.go:52] "Watching apiserver" Sep 5 00:38:11.953430 kubelet[2347]: I0905 00:38:11.953345 2347 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 00:38:12.384656 systemd[1]: Reload requested from client PID 2621 ('systemctl') (unit session-7.scope)... Sep 5 00:38:12.384678 systemd[1]: Reloading... Sep 5 00:38:12.472206 zram_generator::config[2667]: No configuration found. Sep 5 00:38:12.583767 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:38:12.610211 kubelet[2347]: E0905 00:38:12.610138 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:12.749941 systemd[1]: Reloading finished in 364 ms. Sep 5 00:38:12.787568 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:38:12.810656 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:38:12.811029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:38:12.811112 systemd[1]: kubelet.service: Consumed 923ms CPU time, 130.5M memory peak. Sep 5 00:38:12.815135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:38:13.139246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:38:13.157487 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:38:13.235965 kubelet[2709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
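The recurring "Nameserver limits exceeded" events mean the node's resolv.conf lists more nameservers than the kubelet will pass through, so only the first few are applied (the log shows "1.1.1.1 1.0.0.1 8.8.8.8"). A minimal sketch of that trimming, assuming a limit of three entries:

```go
// Reads /etc/resolv.conf and keeps only the first maxNameservers entries,
// mirroring the "some nameservers have been omitted" behaviour in the log.
// The limit of 3 is an assumption for this sketch.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("limit exceeded, omitting %d nameserver(s)\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line is:", strings.Join(servers, " "))
}
```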
Sep 5 00:38:13.235965 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 00:38:13.235965 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:38:13.236948 kubelet[2709]: I0905 00:38:13.236048 2709 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:38:13.247287 kubelet[2709]: I0905 00:38:13.247244 2709 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 00:38:13.247287 kubelet[2709]: I0905 00:38:13.247276 2709 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:38:13.247552 kubelet[2709]: I0905 00:38:13.247519 2709 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 00:38:13.249099 kubelet[2709]: I0905 00:38:13.249066 2709 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 00:38:13.252594 kubelet[2709]: I0905 00:38:13.252540 2709 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:38:13.257011 kubelet[2709]: I0905 00:38:13.256978 2709 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 00:38:13.264805 kubelet[2709]: I0905 00:38:13.264773 2709 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:38:13.264932 kubelet[2709]: I0905 00:38:13.264914 2709 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 00:38:13.265105 kubelet[2709]: I0905 00:38:13.265066 2709 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:38:13.265853 kubelet[2709]: I0905 00:38:13.265090 2709 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:38:13.265993 kubelet[2709]: I0905 00:38:13.265865 2709 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:38:13.265993 kubelet[2709]: I0905 00:38:13.265877 2709 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 00:38:13.265993 kubelet[2709]: I0905 00:38:13.265920 2709 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:38:13.266069 kubelet[2709]: I0905 00:38:13.266043 2709 kubelet.go:408] "Attempting to sync node with API server" Sep 5 00:38:13.266069 kubelet[2709]: I0905 00:38:13.266057 2709 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:38:13.266116 kubelet[2709]: I0905 00:38:13.266094 2709 kubelet.go:314] "Adding apiserver pod source" Sep 5 00:38:13.266116 kubelet[2709]: I0905 00:38:13.266112 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:38:13.267467 kubelet[2709]: I0905 00:38:13.267426 2709 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 5 00:38:13.267943 kubelet[2709]: I0905 00:38:13.267827 2709 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:38:13.268317 kubelet[2709]: I0905 00:38:13.268294 2709 server.go:1274] "Started kubelet" Sep 5 00:38:13.269240 kubelet[2709]: I0905 00:38:13.269199 2709 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:38:13.270403 kubelet[2709]: I0905 00:38:13.270374 2709 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:38:13.272226 kubelet[2709]: I0905 00:38:13.272113 2709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:38:13.277076 kubelet[2709]: I0905 00:38:13.276205 2709 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:38:13.279018 kubelet[2709]: I0905 00:38:13.278671 2709 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Sep 5 00:38:13.281644 kubelet[2709]: I0905 00:38:13.279962 2709 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:38:13.281644 kubelet[2709]: I0905 00:38:13.280101 2709 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:38:13.281644 kubelet[2709]: I0905 00:38:13.280340 2709 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:38:13.282064 kubelet[2709]: I0905 00:38:13.282033 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:38:13.283204 kubelet[2709]: E0905 00:38:13.282890 2709 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:38:13.291128 kubelet[2709]: I0905 00:38:13.290464 2709 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:38:13.291128 kubelet[2709]: I0905 00:38:13.290735 2709 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:38:13.293204 kubelet[2709]: I0905 00:38:13.293142 2709 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:38:13.318725 kubelet[2709]: I0905 00:38:13.318685 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:38:13.322186 kubelet[2709]: I0905 00:38:13.322035 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 00:38:13.322186 kubelet[2709]: I0905 00:38:13.322089 2709 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:38:13.322186 kubelet[2709]: I0905 00:38:13.322113 2709 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:38:13.322371 kubelet[2709]: E0905 00:38:13.322347 2709 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:38:13.358658 kubelet[2709]: I0905 00:38:13.358630 2709 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:38:13.358799 kubelet[2709]: I0905 00:38:13.358787 2709 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:38:13.358902 kubelet[2709]: I0905 00:38:13.358891 2709 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:38:13.359094 kubelet[2709]: I0905 00:38:13.359079 2709 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:38:13.359188 kubelet[2709]: I0905 00:38:13.359144 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:38:13.359240 kubelet[2709]: I0905 00:38:13.359231 2709 policy_none.go:49] "None policy: Start" Sep 5 00:38:13.360430 kubelet[2709]: I0905 00:38:13.360413 2709 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:38:13.361501 kubelet[2709]: I0905 00:38:13.360558 2709 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:38:13.361501 kubelet[2709]: I0905 00:38:13.360740 2709 state_mem.go:75] "Updated machine memory state" Sep 5 00:38:13.368084 kubelet[2709]: I0905 00:38:13.368069 2709 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:38:13.368356 kubelet[2709]: I0905 00:38:13.368341 2709 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:38:13.368519 kubelet[2709]: I0905 
00:38:13.368479 2709 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:38:13.369547 kubelet[2709]: I0905 00:38:13.369533 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:38:13.387126 sudo[2747]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 5 00:38:13.387734 sudo[2747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 5 00:38:13.431147 kubelet[2709]: E0905 00:38:13.430992 2709 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:38:13.482600 kubelet[2709]: I0905 00:38:13.482573 2709 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:38:13.489131 kubelet[2709]: I0905 00:38:13.489088 2709 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 5 00:38:13.489281 kubelet[2709]: I0905 00:38:13.489201 2709 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 00:38:13.581504 kubelet[2709]: I0905 00:38:13.581425 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:38:13.581504 kubelet[2709]: I0905 00:38:13.581488 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:38:13.581504 kubelet[2709]: I0905 00:38:13.581509 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a064baec0e5dadb0c1671c475ed37ce-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a064baec0e5dadb0c1671c475ed37ce\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:38:13.581504 kubelet[2709]: I0905 00:38:13.581523 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a064baec0e5dadb0c1671c475ed37ce-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a064baec0e5dadb0c1671c475ed37ce\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:38:13.581832 kubelet[2709]: I0905 00:38:13.581543 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a064baec0e5dadb0c1671c475ed37ce-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a064baec0e5dadb0c1671c475ed37ce\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:38:13.581832 kubelet[2709]: I0905 00:38:13.581556 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:38:13.581832 kubelet[2709]: I0905 
00:38:13.581573 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:38:13.581832 kubelet[2709]: I0905 00:38:13.581585 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:38:13.581832 kubelet[2709]: I0905 00:38:13.581598 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:38:13.731059 kubelet[2709]: E0905 00:38:13.730627 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:13.731059 kubelet[2709]: E0905 00:38:13.730883 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:13.731897 kubelet[2709]: E0905 00:38:13.731864 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:13.872438 sudo[2747]: pam_unix(sudo:session): session closed for user root Sep 5 00:38:14.266745 kubelet[2709]: I0905 00:38:14.266693 2709 apiserver.go:52] "Watching apiserver" Sep 5 00:38:14.280236 kubelet[2709]: I0905 00:38:14.280200 2709 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 00:38:14.343493 kubelet[2709]: E0905 00:38:14.343439 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:14.344118 kubelet[2709]: E0905 00:38:14.344033 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:14.344256 kubelet[2709]: E0905 00:38:14.344146 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:14.363434 kubelet[2709]: I0905 00:38:14.363344 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.363299404 podStartE2EDuration="2.363299404s" podCreationTimestamp="2025-09-05 00:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:14.362850532 +0000 UTC m=+1.193445000" watchObservedRunningTime="2025-09-05 00:38:14.363299404 +0000 UTC m=+1.193893872" Sep 5 00:38:14.380076 kubelet[2709]: I0905 00:38:14.380018 2709 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.380006349 podStartE2EDuration="1.380006349s" podCreationTimestamp="2025-09-05 00:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:14.379859033 +0000 UTC m=+1.210453501" watchObservedRunningTime="2025-09-05 00:38:14.380006349 +0000 UTC m=+1.210600817" Sep 5 00:38:14.380076 kubelet[2709]: I0905 00:38:14.380089 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.380085167 podStartE2EDuration="1.380085167s" podCreationTimestamp="2025-09-05 00:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:14.37231056 +0000 UTC m=+1.202905048" watchObservedRunningTime="2025-09-05 00:38:14.380085167 +0000 UTC m=+1.210679635" Sep 5 00:38:15.290500 sudo[1799]: pam_unix(sudo:session): session closed for user root Sep 5 00:38:15.292042 sshd[1798]: Connection closed by 10.0.0.1 port 46348 Sep 5 00:38:15.292534 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:15.297129 systemd[1]: sshd@6-10.0.0.129:22-10.0.0.1:46348.service: Deactivated successfully. Sep 5 00:38:15.299479 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:38:15.299758 systemd[1]: session-7.scope: Consumed 4.584s CPU time, 261.8M memory peak. Sep 5 00:38:15.301080 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:38:15.302701 systemd-logind[1570]: Removed session 7. Sep 5 00:38:15.344118 kubelet[2709]: E0905 00:38:15.344076 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:17.899893 kubelet[2709]: I0905 00:38:17.899843 2709 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:38:17.900523 containerd[1597]: time="2025-09-05T00:38:17.900248436Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:38:17.900869 kubelet[2709]: I0905 00:38:17.900561 2709 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:38:18.526365 kubelet[2709]: E0905 00:38:18.526330 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:18.830844 systemd[1]: Created slice kubepods-besteffort-pod775f22eb_2177_4f86_9170_694c3e744092.slice - libcontainer container kubepods-besteffort-pod775f22eb_2177_4f86_9170_694c3e744092.slice. Sep 5 00:38:18.846395 systemd[1]: Created slice kubepods-burstable-pod3e9188ab_08b2_4d7b_9ede_6bb7aaeb85e2.slice - libcontainer container kubepods-burstable-pod3e9188ab_08b2_4d7b_9ede_6bb7aaeb85e2.slice. 
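The repeated kubelet "Nameserver limits exceeded" errors above come from the host's /etc/resolv.conf listing more nameservers than the resolver (and kubelet) will apply: only the first three are kept, which is why the applied line ends up as "1.1.1.1 1.0.0.1 8.8.8.8". Below is a minimal sketch of that trimming, assuming a plain resolv.conf format; it is an illustration of the behaviour, not kubelet's actual dns.go code.

```go
// Minimal sketch of the nameserver cap behind the "Nameserver limits
// exceeded" messages above: keep only the first three "nameserver" entries.
// Illustration only, not kubelet's dns.go implementation.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic resolver limit, also enforced by kubelet

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: omitting %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```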
Sep 5 00:38:18.912457 kubelet[2709]: I0905 00:38:18.912382 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/775f22eb-2177-4f86-9170-694c3e744092-xtables-lock\") pod \"kube-proxy-42s8g\" (UID: \"775f22eb-2177-4f86-9170-694c3e744092\") " pod="kube-system/kube-proxy-42s8g" Sep 5 00:38:18.912457 kubelet[2709]: I0905 00:38:18.912437 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-clustermesh-secrets\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.912457 kubelet[2709]: I0905 00:38:18.912463 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-hostproc\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913041 kubelet[2709]: I0905 00:38:18.912486 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-xtables-lock\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913041 kubelet[2709]: I0905 00:38:18.912505 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-config-path\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913041 kubelet[2709]: I0905 00:38:18.912521 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-run\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913041 kubelet[2709]: I0905 00:38:18.912564 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cni-path\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913041 kubelet[2709]: I0905 00:38:18.912637 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-lib-modules\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913041 kubelet[2709]: I0905 00:38:18.912677 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-host-proc-sys-net\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913244 kubelet[2709]: I0905 00:38:18.912693 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-cgroup\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913244 kubelet[2709]: I0905 00:38:18.912705 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-etc-cni-netd\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913244 kubelet[2709]: I0905 00:38:18.912721 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-host-proc-sys-kernel\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913244 kubelet[2709]: I0905 00:38:18.912753 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn2jk\" (UniqueName: \"kubernetes.io/projected/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-kube-api-access-jn2jk\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913244 kubelet[2709]: I0905 00:38:18.912801 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/775f22eb-2177-4f86-9170-694c3e744092-kube-proxy\") pod \"kube-proxy-42s8g\" (UID: \"775f22eb-2177-4f86-9170-694c3e744092\") " pod="kube-system/kube-proxy-42s8g" Sep 5 00:38:18.913244 kubelet[2709]: I0905 00:38:18.912824 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-bpf-maps\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:18.913391 kubelet[2709]: I0905 00:38:18.912856 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/775f22eb-2177-4f86-9170-694c3e744092-lib-modules\") pod \"kube-proxy-42s8g\" (UID: \"775f22eb-2177-4f86-9170-694c3e744092\") " pod="kube-system/kube-proxy-42s8g" Sep 5 00:38:18.913391 kubelet[2709]: I0905 00:38:18.912881 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxqj7\" (UniqueName: \"kubernetes.io/projected/775f22eb-2177-4f86-9170-694c3e744092-kube-api-access-vxqj7\") pod \"kube-proxy-42s8g\" (UID: \"775f22eb-2177-4f86-9170-694c3e744092\") " pod="kube-system/kube-proxy-42s8g" Sep 5 00:38:18.913391 kubelet[2709]: I0905 00:38:18.912910 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-hubble-tls\") pod \"cilium-t8gjb\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " pod="kube-system/cilium-t8gjb" Sep 5 00:38:19.055355 systemd[1]: Created slice kubepods-besteffort-pod2cef6059_07e9_4fff_a462_1542bff93f97.slice - libcontainer container kubepods-besteffort-pod2cef6059_07e9_4fff_a462_1542bff93f97.slice. 
Sep 5 00:38:19.119873 kubelet[2709]: I0905 00:38:19.119742 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28vx\" (UniqueName: \"kubernetes.io/projected/2cef6059-07e9-4fff-a462-1542bff93f97-kube-api-access-s28vx\") pod \"cilium-operator-5d85765b45-5j5pn\" (UID: \"2cef6059-07e9-4fff-a462-1542bff93f97\") " pod="kube-system/cilium-operator-5d85765b45-5j5pn" Sep 5 00:38:19.119873 kubelet[2709]: I0905 00:38:19.119803 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cef6059-07e9-4fff-a462-1542bff93f97-cilium-config-path\") pod \"cilium-operator-5d85765b45-5j5pn\" (UID: \"2cef6059-07e9-4fff-a462-1542bff93f97\") " pod="kube-system/cilium-operator-5d85765b45-5j5pn" Sep 5 00:38:19.142662 kubelet[2709]: E0905 00:38:19.142640 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:19.143415 containerd[1597]: time="2025-09-05T00:38:19.143365543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-42s8g,Uid:775f22eb-2177-4f86-9170-694c3e744092,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:19.150864 kubelet[2709]: E0905 00:38:19.150839 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:19.151404 containerd[1597]: time="2025-09-05T00:38:19.151244217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8gjb,Uid:3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:19.506538 containerd[1597]: time="2025-09-05T00:38:19.506298884Z" level=info msg="connecting to shim b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0" address="unix:///run/containerd/s/6126ac32ca91471f33c383907904f2d0b20d751ac466d6cdbcece68efc0db9a6" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:19.525296 containerd[1597]: time="2025-09-05T00:38:19.525240237Z" level=info msg="connecting to shim 2d878caa281ab752bab14267cc6235bdcdc8692ac1a09c5882aaae5d5ce2d2bb" address="unix:///run/containerd/s/db772737e9cb90c99ce3a32e9316341e47b7ae70fba3180579a9f15310848769" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:19.541311 systemd[1]: Started cri-containerd-b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0.scope - libcontainer container b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0. Sep 5 00:38:19.548585 systemd[1]: Started cri-containerd-2d878caa281ab752bab14267cc6235bdcdc8692ac1a09c5882aaae5d5ce2d2bb.scope - libcontainer container 2d878caa281ab752bab14267cc6235bdcdc8692ac1a09c5882aaae5d5ce2d2bb. 
Sep 5 00:38:19.579854 containerd[1597]: time="2025-09-05T00:38:19.579771080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8gjb,Uid:3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\"" Sep 5 00:38:19.581057 kubelet[2709]: E0905 00:38:19.580621 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:19.582929 containerd[1597]: time="2025-09-05T00:38:19.582882256Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 5 00:38:19.584331 containerd[1597]: time="2025-09-05T00:38:19.584301203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-42s8g,Uid:775f22eb-2177-4f86-9170-694c3e744092,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d878caa281ab752bab14267cc6235bdcdc8692ac1a09c5882aaae5d5ce2d2bb\"" Sep 5 00:38:19.585028 kubelet[2709]: E0905 00:38:19.584984 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:19.586873 containerd[1597]: time="2025-09-05T00:38:19.586832233Z" level=info msg="CreateContainer within sandbox \"2d878caa281ab752bab14267cc6235bdcdc8692ac1a09c5882aaae5d5ce2d2bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:38:19.599689 containerd[1597]: time="2025-09-05T00:38:19.599647156Z" level=info msg="Container f0c640f455f05bf7db9067d7b6bb91c33e3ee381da34c8b5eac7e9acb07df653: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:19.610686 containerd[1597]: time="2025-09-05T00:38:19.610642767Z" level=info msg="CreateContainer within sandbox \"2d878caa281ab752bab14267cc6235bdcdc8692ac1a09c5882aaae5d5ce2d2bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0c640f455f05bf7db9067d7b6bb91c33e3ee381da34c8b5eac7e9acb07df653\"" Sep 5 00:38:19.611388 containerd[1597]: time="2025-09-05T00:38:19.611225469Z" level=info msg="StartContainer for \"f0c640f455f05bf7db9067d7b6bb91c33e3ee381da34c8b5eac7e9acb07df653\"" Sep 5 00:38:19.612711 containerd[1597]: time="2025-09-05T00:38:19.612685435Z" level=info msg="connecting to shim f0c640f455f05bf7db9067d7b6bb91c33e3ee381da34c8b5eac7e9acb07df653" address="unix:///run/containerd/s/db772737e9cb90c99ce3a32e9316341e47b7ae70fba3180579a9f15310848769" protocol=ttrpc version=3 Sep 5 00:38:19.641307 systemd[1]: Started cri-containerd-f0c640f455f05bf7db9067d7b6bb91c33e3ee381da34c8b5eac7e9acb07df653.scope - libcontainer container f0c640f455f05bf7db9067d7b6bb91c33e3ee381da34c8b5eac7e9acb07df653. 
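The image being pulled above, quay.io/cilium/cilium:v1.12.5@sha256:06ce…, carries both a tag and a digest; when both are present the digest is what gets resolved, which is consistent with the later "Pulled image" entry reporting an empty repo tag and only the repo digest. A rough way to split such a reference is sketched below; this is a simplified string split, whereas real clients use a full reference parser.

```go
// Split an image reference of the form repo:tag@sha256:digest, as seen in
// the PullImage messages above. Simplified sketch; a real parser (e.g. the
// distribution reference grammar) handles many more cases.
package main

import (
	"fmt"
	"strings"
)

func splitRef(ref string) (repo, tag, digest string) {
	if rest, d, ok := strings.Cut(ref, "@"); ok {
		ref, digest = rest, d
	}
	// Only treat a ":" after the final "/" as a tag separator, so registry
	// ports like host:5000/img are not mistaken for tags.
	slash := strings.LastIndex(ref, "/")
	if colon := strings.LastIndex(ref, ":"); colon > slash {
		return ref[:colon], ref[colon+1:], digest
	}
	return ref, "", digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println(repo)   // quay.io/cilium/cilium
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:06ce2b0a...
}
```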
Sep 5 00:38:19.659002 kubelet[2709]: E0905 00:38:19.658513 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:19.659569 containerd[1597]: time="2025-09-05T00:38:19.659518449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5j5pn,Uid:2cef6059-07e9-4fff-a462-1542bff93f97,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:19.683128 containerd[1597]: time="2025-09-05T00:38:19.683076442Z" level=info msg="connecting to shim b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619" address="unix:///run/containerd/s/08e46a1120b1fa88cc2c5543ebce40fcee7ae4d3d9d5fd59826e6c6975f2b71a" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:19.686701 containerd[1597]: time="2025-09-05T00:38:19.686642426Z" level=info msg="StartContainer for \"f0c640f455f05bf7db9067d7b6bb91c33e3ee381da34c8b5eac7e9acb07df653\" returns successfully" Sep 5 00:38:19.719360 systemd[1]: Started cri-containerd-b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619.scope - libcontainer container b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619. Sep 5 00:38:19.769398 containerd[1597]: time="2025-09-05T00:38:19.769250365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5j5pn,Uid:2cef6059-07e9-4fff-a462-1542bff93f97,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619\"" Sep 5 00:38:19.770357 kubelet[2709]: E0905 00:38:19.770327 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:20.354829 kubelet[2709]: E0905 00:38:20.354789 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:20.960387 kubelet[2709]: E0905 00:38:20.960351 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:20.974108 kubelet[2709]: I0905 00:38:20.974005 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-42s8g" podStartSLOduration=2.97398387 podStartE2EDuration="2.97398387s" podCreationTimestamp="2025-09-05 00:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:20.362352784 +0000 UTC m=+7.192947252" watchObservedRunningTime="2025-09-05 00:38:20.97398387 +0000 UTC m=+7.804578328" Sep 5 00:38:21.215466 kubelet[2709]: E0905 00:38:21.215356 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:21.356744 kubelet[2709]: E0905 00:38:21.356697 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:21.357230 kubelet[2709]: E0905 00:38:21.356697 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:22.358306 
kubelet[2709]: E0905 00:38:22.358269 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:28.417493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount716570199.mount: Deactivated successfully. Sep 5 00:38:28.604498 update_engine[1571]: I20250905 00:38:28.604341 1571 update_attempter.cc:509] Updating boot flags... Sep 5 00:38:28.810566 kubelet[2709]: E0905 00:38:28.808892 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:31.137508 containerd[1597]: time="2025-09-05T00:38:31.137429334Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:31.138291 containerd[1597]: time="2025-09-05T00:38:31.138267188Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 5 00:38:31.139712 containerd[1597]: time="2025-09-05T00:38:31.139630134Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:31.141076 containerd[1597]: time="2025-09-05T00:38:31.141039889Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.558102307s" Sep 5 00:38:31.141076 containerd[1597]: time="2025-09-05T00:38:31.141072351Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 5 00:38:31.146448 containerd[1597]: time="2025-09-05T00:38:31.146403778Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 5 00:38:31.157804 containerd[1597]: time="2025-09-05T00:38:31.157722875Z" level=info msg="CreateContainer within sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 00:38:31.168013 containerd[1597]: time="2025-09-05T00:38:31.167937242Z" level=info msg="Container 7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:31.176956 containerd[1597]: time="2025-09-05T00:38:31.176881460Z" level=info msg="CreateContainer within sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\"" Sep 5 00:38:31.177550 containerd[1597]: time="2025-09-05T00:38:31.177516370Z" level=info msg="StartContainer for \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\"" Sep 5 00:38:31.178463 containerd[1597]: time="2025-09-05T00:38:31.178439595Z" level=info 
msg="connecting to shim 7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910" address="unix:///run/containerd/s/6126ac32ca91471f33c383907904f2d0b20d751ac466d6cdbcece68efc0db9a6" protocol=ttrpc version=3 Sep 5 00:38:31.238408 systemd[1]: Started cri-containerd-7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910.scope - libcontainer container 7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910. Sep 5 00:38:31.276011 containerd[1597]: time="2025-09-05T00:38:31.275963092Z" level=info msg="StartContainer for \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\" returns successfully" Sep 5 00:38:31.291434 systemd[1]: cri-containerd-7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910.scope: Deactivated successfully. Sep 5 00:38:31.292963 containerd[1597]: time="2025-09-05T00:38:31.292915025Z" level=info msg="received exit event container_id:\"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\" id:\"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\" pid:3149 exited_at:{seconds:1757032711 nanos:292493839}" Sep 5 00:38:31.293093 containerd[1597]: time="2025-09-05T00:38:31.293069909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\" id:\"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\" pid:3149 exited_at:{seconds:1757032711 nanos:292493839}" Sep 5 00:38:31.318288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910-rootfs.mount: Deactivated successfully. Sep 5 00:38:31.578470 kubelet[2709]: E0905 00:38:31.577945 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:32.581488 kubelet[2709]: E0905 00:38:32.581367 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:32.601733 containerd[1597]: time="2025-09-05T00:38:32.601675921Z" level=info msg="CreateContainer within sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 00:38:32.615134 containerd[1597]: time="2025-09-05T00:38:32.615089884Z" level=info msg="Container 38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:32.624670 containerd[1597]: time="2025-09-05T00:38:32.624634767Z" level=info msg="CreateContainer within sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\"" Sep 5 00:38:32.625797 containerd[1597]: time="2025-09-05T00:38:32.625128209Z" level=info msg="StartContainer for \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\"" Sep 5 00:38:32.626101 containerd[1597]: time="2025-09-05T00:38:32.626047496Z" level=info msg="connecting to shim 38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694" address="unix:///run/containerd/s/6126ac32ca91471f33c383907904f2d0b20d751ac466d6cdbcece68efc0db9a6" protocol=ttrpc version=3 Sep 5 00:38:32.655380 systemd[1]: Started 
cri-containerd-38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694.scope - libcontainer container 38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694. Sep 5 00:38:32.782349 containerd[1597]: time="2025-09-05T00:38:32.782291302Z" level=info msg="StartContainer for \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\" returns successfully" Sep 5 00:38:32.792783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275174349.mount: Deactivated successfully. Sep 5 00:38:32.801382 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:38:32.801628 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:38:32.802247 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:38:32.803816 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:38:32.806248 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 5 00:38:32.807121 systemd[1]: cri-containerd-38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694.scope: Deactivated successfully. Sep 5 00:38:32.807323 containerd[1597]: time="2025-09-05T00:38:32.807286146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\" id:\"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\" pid:3193 exited_at:{seconds:1757032712 nanos:806908461}" Sep 5 00:38:32.807403 containerd[1597]: time="2025-09-05T00:38:32.807382537Z" level=info msg="received exit event container_id:\"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\" id:\"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\" pid:3193 exited_at:{seconds:1757032712 nanos:806908461}" Sep 5 00:38:32.833883 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
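The transient mount units named like var-lib-containerd-tmpmounts-containerd\x2dmount3275174349.mount are systemd's escaped form of paths under /var/lib/containerd/tmpmounts/: "/" becomes "-", and a literal "-" inside a path component is hex-escaped as \x2d. The sketch below reproduces that escaping for the unit names in this log; it is a simplified version of systemd-escape --path and ignores edge cases such as an empty path or a leading ".".

```go
// Simplified version of systemd's path escaping (cf. systemd-escape --path),
// enough to reproduce the transient mount unit names in the log above.
// Edge cases (empty path, leading ".") are ignored in this sketch.
package main

import (
	"fmt"
	"strings"
)

func escapeMountUnit(path string) string {
	p := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.', c == ':':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // e.g. "-" -> \x2d
		}
	}
	return b.String() + ".mount"
}

func main() {
	fmt.Println(escapeMountUnit("/var/lib/containerd/tmpmounts/containerd-mount3275174349"))
	// var-lib-containerd-tmpmounts-containerd\x2dmount3275174349.mount
}
```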
Sep 5 00:38:33.104713 containerd[1597]: time="2025-09-05T00:38:33.104607339Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:33.105473 containerd[1597]: time="2025-09-05T00:38:33.105422849Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 5 00:38:33.106508 containerd[1597]: time="2025-09-05T00:38:33.106450480Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:33.108358 containerd[1597]: time="2025-09-05T00:38:33.108328586Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.961878681s" Sep 5 00:38:33.108358 containerd[1597]: time="2025-09-05T00:38:33.108360666Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 5 00:38:33.110223 containerd[1597]: time="2025-09-05T00:38:33.110183239Z" level=info msg="CreateContainer within sandbox \"b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 5 00:38:33.120079 containerd[1597]: time="2025-09-05T00:38:33.120044932Z" level=info msg="Container 1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:33.126181 containerd[1597]: time="2025-09-05T00:38:33.126129673Z" level=info msg="CreateContainer within sandbox \"b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\"" Sep 5 00:38:33.126591 containerd[1597]: time="2025-09-05T00:38:33.126564142Z" level=info msg="StartContainer for \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\"" Sep 5 00:38:33.127356 containerd[1597]: time="2025-09-05T00:38:33.127329899Z" level=info msg="connecting to shim 1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b" address="unix:///run/containerd/s/08e46a1120b1fa88cc2c5543ebce40fcee7ae4d3d9d5fd59826e6c6975f2b71a" protocol=ttrpc version=3 Sep 5 00:38:33.150291 systemd[1]: Started cri-containerd-1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b.scope - libcontainer container 1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b. 
Sep 5 00:38:33.183493 containerd[1597]: time="2025-09-05T00:38:33.183424108Z" level=info msg="StartContainer for \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" returns successfully" Sep 5 00:38:33.586121 kubelet[2709]: E0905 00:38:33.586087 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:33.589944 kubelet[2709]: E0905 00:38:33.589916 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:33.592194 containerd[1597]: time="2025-09-05T00:38:33.591748320Z" level=info msg="CreateContainer within sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 00:38:33.608309 containerd[1597]: time="2025-09-05T00:38:33.608268007Z" level=info msg="Container 3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:33.618119 kubelet[2709]: I0905 00:38:33.618049 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5j5pn" podStartSLOduration=1.280246441 podStartE2EDuration="14.618027026s" podCreationTimestamp="2025-09-05 00:38:19 +0000 UTC" firstStartedPulling="2025-09-05 00:38:19.771024229 +0000 UTC m=+6.601618698" lastFinishedPulling="2025-09-05 00:38:33.108804815 +0000 UTC m=+19.939399283" observedRunningTime="2025-09-05 00:38:33.597412938 +0000 UTC m=+20.428007406" watchObservedRunningTime="2025-09-05 00:38:33.618027026 +0000 UTC m=+20.448621494" Sep 5 00:38:33.620754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694-rootfs.mount: Deactivated successfully. Sep 5 00:38:33.627073 containerd[1597]: time="2025-09-05T00:38:33.627022062Z" level=info msg="CreateContainer within sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\"" Sep 5 00:38:33.628583 containerd[1597]: time="2025-09-05T00:38:33.628530220Z" level=info msg="StartContainer for \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\"" Sep 5 00:38:33.631284 containerd[1597]: time="2025-09-05T00:38:33.631247562Z" level=info msg="connecting to shim 3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b" address="unix:///run/containerd/s/6126ac32ca91471f33c383907904f2d0b20d751ac466d6cdbcece68efc0db9a6" protocol=ttrpc version=3 Sep 5 00:38:33.669206 systemd[1]: Started cri-containerd-3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b.scope - libcontainer container 3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b. Sep 5 00:38:33.739087 containerd[1597]: time="2025-09-05T00:38:33.739018149Z" level=info msg="StartContainer for \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\" returns successfully" Sep 5 00:38:33.739840 systemd[1]: cri-containerd-3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b.scope: Deactivated successfully. 
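The "Observed pod startup duration" entry above for cilium-operator records both podStartE2EDuration (observed-running timestamp minus pod creation) and podStartSLOduration, and the numbers in this log are consistent with the SLO figure excluding the image-pull window (firstStartedPulling to lastFinishedPulling). The snippet below recomputes both values from the timestamps in that log line; it is an illustrative cross-check only, with kubelet's pod_startup_latency_tracker.go being the authoritative source.

```go
// Recompute the cilium-operator startup durations from the timestamps in the
// log line above: E2E = observed running - creation, and the SLO figure is
// consistent with E2E minus the image-pull window. Illustrative check only.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-05 00:38:19 +0000 UTC")
	firstPull := mustParse("2025-09-05 00:38:19.771024229 +0000 UTC")
	lastPull := mustParse("2025-09-05 00:38:33.108804815 +0000 UTC")
	running := mustParse("2025-09-05 00:38:33.618027026 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 14.618027026s
	fmt.Println("podStartSLOduration:", slo) // ~1.28024644s
}
```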
Sep 5 00:38:33.741447 containerd[1597]: time="2025-09-05T00:38:33.741405157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\" id:\"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\" pid:3291 exited_at:{seconds:1757032713 nanos:740950849}" Sep 5 00:38:33.741523 containerd[1597]: time="2025-09-05T00:38:33.741480128Z" level=info msg="received exit event container_id:\"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\" id:\"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\" pid:3291 exited_at:{seconds:1757032713 nanos:740950849}" Sep 5 00:38:33.769200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b-rootfs.mount: Deactivated successfully. Sep 5 00:38:34.594909 kubelet[2709]: E0905 00:38:34.594862 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:34.595672 kubelet[2709]: E0905 00:38:34.594930 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:34.598014 containerd[1597]: time="2025-09-05T00:38:34.597938531Z" level=info msg="CreateContainer within sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 00:38:34.610707 containerd[1597]: time="2025-09-05T00:38:34.610646449Z" level=info msg="Container 9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:34.620578 containerd[1597]: time="2025-09-05T00:38:34.620523572Z" level=info msg="CreateContainer within sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\"" Sep 5 00:38:34.621135 containerd[1597]: time="2025-09-05T00:38:34.621100611Z" level=info msg="StartContainer for \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\"" Sep 5 00:38:34.622234 containerd[1597]: time="2025-09-05T00:38:34.622183626Z" level=info msg="connecting to shim 9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7" address="unix:///run/containerd/s/6126ac32ca91471f33c383907904f2d0b20d751ac466d6cdbcece68efc0db9a6" protocol=ttrpc version=3 Sep 5 00:38:34.651369 systemd[1]: Started cri-containerd-9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7.scope - libcontainer container 9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7. Sep 5 00:38:34.680865 systemd[1]: cri-containerd-9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7.scope: Deactivated successfully. 
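The TaskExit events above carry exited_at as raw Unix seconds plus nanoseconds (for example seconds:1757032713 nanos:740950849), which lines up with the surrounding 00:38:32–00:38:33 log timestamps. A short conversion, useful when cross-checking those fields against the journal times:

```go
// Convert the exited_at {seconds, nanos} fields from the TaskExit events
// above into readable UTC timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println(time.Unix(1757032712, 806908461).UTC())
	// 2025-09-05 00:38:32.806908461 +0000 UTC
	fmt.Println(time.Unix(1757032713, 740950849).UTC())
	// 2025-09-05 00:38:33.740950849 +0000 UTC
}
```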
Sep 5 00:38:34.681586 containerd[1597]: time="2025-09-05T00:38:34.681552901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\" id:\"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\" pid:3328 exited_at:{seconds:1757032714 nanos:681022409}" Sep 5 00:38:34.683756 containerd[1597]: time="2025-09-05T00:38:34.683703019Z" level=info msg="received exit event container_id:\"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\" id:\"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\" pid:3328 exited_at:{seconds:1757032714 nanos:681022409}" Sep 5 00:38:34.685791 containerd[1597]: time="2025-09-05T00:38:34.685742299Z" level=info msg="StartContainer for \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\" returns successfully" Sep 5 00:38:34.705682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7-rootfs.mount: Deactivated successfully. Sep 5 00:38:35.599927 kubelet[2709]: E0905 00:38:35.599892 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:35.601969 containerd[1597]: time="2025-09-05T00:38:35.601899628Z" level=info msg="CreateContainer within sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 00:38:35.617940 containerd[1597]: time="2025-09-05T00:38:35.617723665Z" level=info msg="Container 7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:35.621375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1644028020.mount: Deactivated successfully. Sep 5 00:38:35.625595 containerd[1597]: time="2025-09-05T00:38:35.625544221Z" level=info msg="CreateContainer within sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\"" Sep 5 00:38:35.626062 containerd[1597]: time="2025-09-05T00:38:35.626021251Z" level=info msg="StartContainer for \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\"" Sep 5 00:38:35.626891 containerd[1597]: time="2025-09-05T00:38:35.626841438Z" level=info msg="connecting to shim 7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82" address="unix:///run/containerd/s/6126ac32ca91471f33c383907904f2d0b20d751ac466d6cdbcece68efc0db9a6" protocol=ttrpc version=3 Sep 5 00:38:35.649324 systemd[1]: Started cri-containerd-7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82.scope - libcontainer container 7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82. 
Sep 5 00:38:35.695260 containerd[1597]: time="2025-09-05T00:38:35.695210197Z" level=info msg="StartContainer for \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" returns successfully" Sep 5 00:38:35.774267 containerd[1597]: time="2025-09-05T00:38:35.774202079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" id:\"b978ac37520fca78f14eec68f0504bd7d1c617a6e192b0540584944b7df1568f\" pid:3395 exited_at:{seconds:1757032715 nanos:773819458}" Sep 5 00:38:35.866840 kubelet[2709]: I0905 00:38:35.866711 2709 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 00:38:35.904930 systemd[1]: Created slice kubepods-burstable-podb70e51e4_fadd_4139_ab5d_764d78e03562.slice - libcontainer container kubepods-burstable-podb70e51e4_fadd_4139_ab5d_764d78e03562.slice. Sep 5 00:38:35.912658 systemd[1]: Created slice kubepods-burstable-podd6830763_afee_48e0_a08c_92e0c494cdfb.slice - libcontainer container kubepods-burstable-podd6830763_afee_48e0_a08c_92e0c494cdfb.slice. Sep 5 00:38:35.922690 kubelet[2709]: I0905 00:38:35.922631 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkhmc\" (UniqueName: \"kubernetes.io/projected/b70e51e4-fadd-4139-ab5d-764d78e03562-kube-api-access-rkhmc\") pod \"coredns-7c65d6cfc9-5xcgf\" (UID: \"b70e51e4-fadd-4139-ab5d-764d78e03562\") " pod="kube-system/coredns-7c65d6cfc9-5xcgf" Sep 5 00:38:35.922690 kubelet[2709]: I0905 00:38:35.922673 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6g2c\" (UniqueName: \"kubernetes.io/projected/d6830763-afee-48e0-a08c-92e0c494cdfb-kube-api-access-k6g2c\") pod \"coredns-7c65d6cfc9-tccm9\" (UID: \"d6830763-afee-48e0-a08c-92e0c494cdfb\") " pod="kube-system/coredns-7c65d6cfc9-tccm9" Sep 5 00:38:35.922690 kubelet[2709]: I0905 00:38:35.922693 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b70e51e4-fadd-4139-ab5d-764d78e03562-config-volume\") pod \"coredns-7c65d6cfc9-5xcgf\" (UID: \"b70e51e4-fadd-4139-ab5d-764d78e03562\") " pod="kube-system/coredns-7c65d6cfc9-5xcgf" Sep 5 00:38:35.922891 kubelet[2709]: I0905 00:38:35.922720 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6830763-afee-48e0-a08c-92e0c494cdfb-config-volume\") pod \"coredns-7c65d6cfc9-tccm9\" (UID: \"d6830763-afee-48e0-a08c-92e0c494cdfb\") " pod="kube-system/coredns-7c65d6cfc9-tccm9" Sep 5 00:38:36.210424 kubelet[2709]: E0905 00:38:36.210288 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:36.211842 containerd[1597]: time="2025-09-05T00:38:36.211314329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5xcgf,Uid:b70e51e4-fadd-4139-ab5d-764d78e03562,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:36.215293 kubelet[2709]: E0905 00:38:36.215256 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:36.216365 containerd[1597]: time="2025-09-05T00:38:36.216331793Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-tccm9,Uid:d6830763-afee-48e0-a08c-92e0c494cdfb,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:36.606424 kubelet[2709]: E0905 00:38:36.606391 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:37.608500 kubelet[2709]: E0905 00:38:37.608448 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:37.889792 systemd-networkd[1486]: cilium_host: Link UP Sep 5 00:38:37.893349 systemd-networkd[1486]: cilium_net: Link UP Sep 5 00:38:37.893733 systemd-networkd[1486]: cilium_net: Gained carrier Sep 5 00:38:37.894053 systemd-networkd[1486]: cilium_host: Gained carrier Sep 5 00:38:37.995225 systemd-networkd[1486]: cilium_vxlan: Link UP Sep 5 00:38:37.995409 systemd-networkd[1486]: cilium_vxlan: Gained carrier Sep 5 00:38:38.211214 kernel: NET: Registered PF_ALG protocol family Sep 5 00:38:38.385337 systemd-networkd[1486]: cilium_host: Gained IPv6LL Sep 5 00:38:38.506329 systemd-networkd[1486]: cilium_net: Gained IPv6LL Sep 5 00:38:38.610287 kubelet[2709]: E0905 00:38:38.610235 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:38.879650 systemd-networkd[1486]: lxc_health: Link UP Sep 5 00:38:38.883190 systemd-networkd[1486]: lxc_health: Gained carrier Sep 5 00:38:39.169204 kubelet[2709]: I0905 00:38:39.168559 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t8gjb" podStartSLOduration=9.604698257999999 podStartE2EDuration="21.168538894s" podCreationTimestamp="2025-09-05 00:38:18 +0000 UTC" firstStartedPulling="2025-09-05 00:38:19.58233948 +0000 UTC m=+6.412933948" lastFinishedPulling="2025-09-05 00:38:31.146180115 +0000 UTC m=+17.976774584" observedRunningTime="2025-09-05 00:38:36.755953614 +0000 UTC m=+23.586548082" watchObservedRunningTime="2025-09-05 00:38:39.168538894 +0000 UTC m=+25.999133362" Sep 5 00:38:39.256216 kernel: eth0: renamed from tmp1d2bf Sep 5 00:38:39.257581 systemd-networkd[1486]: lxc66d591130344: Link UP Sep 5 00:38:39.257894 systemd-networkd[1486]: lxc66d591130344: Gained carrier Sep 5 00:38:39.259692 systemd-networkd[1486]: lxcbf06a524de05: Link UP Sep 5 00:38:39.270202 kernel: eth0: renamed from tmp8a1b6 Sep 5 00:38:39.270778 systemd-networkd[1486]: lxcbf06a524de05: Gained carrier Sep 5 00:38:39.611775 kubelet[2709]: E0905 00:38:39.611733 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:39.977548 systemd-networkd[1486]: cilium_vxlan: Gained IPv6LL Sep 5 00:38:40.169510 systemd-networkd[1486]: lxc_health: Gained IPv6LL Sep 5 00:38:40.553398 systemd-networkd[1486]: lxc66d591130344: Gained IPv6LL Sep 5 00:38:40.613947 kubelet[2709]: E0905 00:38:40.613913 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:40.681786 systemd-networkd[1486]: lxcbf06a524de05: Gained IPv6LL Sep 5 00:38:41.484792 systemd[1]: Started sshd@7-10.0.0.129:22-10.0.0.1:58514.service - OpenSSH per-connection server daemon (10.0.0.1:58514). 
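The kubelet timestamps above that end in "m=+25.999133362" and similar carry Go's monotonic clock reading: the value is seconds elapsed since the process took its reference reading (roughly kubelet start), and Go's time.Time string form appends it automatically while the monotonic component is still present. A tiny demonstration of where that suffix comes from:

```go
// Show where the "m=+..." suffix in the kubelet timestamps above comes from:
// Go's time.Time keeps a monotonic reading and Time.String appends it.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(1200 * time.Millisecond)
	now := time.Now()

	fmt.Println(now)            // ... +0000 UTC m=+1.20xxxxxxx (monotonic kept)
	fmt.Println(now.Round(0))   // same wall-clock time, monotonic stripped
	fmt.Println(now.Sub(start)) // durations use the monotonic reading
}
```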
Sep 5 00:38:41.548813 sshd[3872]: Accepted publickey for core from 10.0.0.1 port 58514 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:38:41.550624 sshd-session[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:41.556375 systemd-logind[1570]: New session 8 of user core. Sep 5 00:38:41.562314 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:38:41.616169 kubelet[2709]: E0905 00:38:41.616116 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:41.725999 sshd[3874]: Connection closed by 10.0.0.1 port 58514 Sep 5 00:38:41.726412 sshd-session[3872]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:41.730951 systemd[1]: sshd@7-10.0.0.129:22-10.0.0.1:58514.service: Deactivated successfully. Sep 5 00:38:41.733775 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:38:41.736324 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:38:41.737654 systemd-logind[1570]: Removed session 8. Sep 5 00:38:42.737403 containerd[1597]: time="2025-09-05T00:38:42.737328619Z" level=info msg="connecting to shim 8a1b65b88dfe042e88d237464c3dcfa3865ae9eff7693aaf62c62cbb6b0c975a" address="unix:///run/containerd/s/6bdbea1a7d3756e2202c59664a9152d34c487015b413038097f708c4d21f6f7a" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:42.740342 containerd[1597]: time="2025-09-05T00:38:42.740228629Z" level=info msg="connecting to shim 1d2bf140b48d255ab0aeba630971c9a53fa4bee5870c5e2f8413de62546e37fc" address="unix:///run/containerd/s/904fd344aaa5ecc9a38cf6808e346f5072393fc34f3b27e25e9e8a6fb4a3da9a" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:42.782320 systemd[1]: Started cri-containerd-1d2bf140b48d255ab0aeba630971c9a53fa4bee5870c5e2f8413de62546e37fc.scope - libcontainer container 1d2bf140b48d255ab0aeba630971c9a53fa4bee5870c5e2f8413de62546e37fc. Sep 5 00:38:42.785693 systemd[1]: Started cri-containerd-8a1b65b88dfe042e88d237464c3dcfa3865ae9eff7693aaf62c62cbb6b0c975a.scope - libcontainer container 8a1b65b88dfe042e88d237464c3dcfa3865ae9eff7693aaf62c62cbb6b0c975a. 
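The sshd "Accepted publickey … RSA SHA256:QZjCu0…" entries above identify the client key by its SHA-256 fingerprint (base64 without padding). When checking a key against such a log line, the fingerprint can be computed as sketched below; this uses golang.org/x/crypto/ssh and a throwaway ed25519 key, since the actual core user's key is of course not present in the log.

```go
// Compute an OpenSSH-style "SHA256:..." public key fingerprint, the format
// sshd prints in the "Accepted publickey" lines above. Uses a throwaway
// ed25519 key because the real client key is not available here.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		panic(err)
	}
	// Same digest and encoding as the sshd log: SHA-256 over the wire-format
	// key, base64 without padding, prefixed with "SHA256:".
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}
```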
Sep 5 00:38:42.798550 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:42.802528 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:42.842821 containerd[1597]: time="2025-09-05T00:38:42.842762831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5xcgf,Uid:b70e51e4-fadd-4139-ab5d-764d78e03562,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a1b65b88dfe042e88d237464c3dcfa3865ae9eff7693aaf62c62cbb6b0c975a\"" Sep 5 00:38:42.843746 kubelet[2709]: E0905 00:38:42.843704 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:42.848102 containerd[1597]: time="2025-09-05T00:38:42.848073027Z" level=info msg="CreateContainer within sandbox \"8a1b65b88dfe042e88d237464c3dcfa3865ae9eff7693aaf62c62cbb6b0c975a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:38:42.859244 containerd[1597]: time="2025-09-05T00:38:42.859189574Z" level=info msg="Container f260358a7f7cdd9e2bd4c4d0d2e08c1c4f24f3b52402787c11ee1036357bca1e: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:42.867851 containerd[1597]: time="2025-09-05T00:38:42.867802688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tccm9,Uid:d6830763-afee-48e0-a08c-92e0c494cdfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d2bf140b48d255ab0aeba630971c9a53fa4bee5870c5e2f8413de62546e37fc\"" Sep 5 00:38:42.868679 kubelet[2709]: E0905 00:38:42.868649 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:42.871344 containerd[1597]: time="2025-09-05T00:38:42.871287137Z" level=info msg="CreateContainer within sandbox \"8a1b65b88dfe042e88d237464c3dcfa3865ae9eff7693aaf62c62cbb6b0c975a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f260358a7f7cdd9e2bd4c4d0d2e08c1c4f24f3b52402787c11ee1036357bca1e\"" Sep 5 00:38:42.871489 containerd[1597]: time="2025-09-05T00:38:42.871434154Z" level=info msg="CreateContainer within sandbox \"1d2bf140b48d255ab0aeba630971c9a53fa4bee5870c5e2f8413de62546e37fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:38:42.871792 containerd[1597]: time="2025-09-05T00:38:42.871753015Z" level=info msg="StartContainer for \"f260358a7f7cdd9e2bd4c4d0d2e08c1c4f24f3b52402787c11ee1036357bca1e\"" Sep 5 00:38:42.872744 containerd[1597]: time="2025-09-05T00:38:42.872707582Z" level=info msg="connecting to shim f260358a7f7cdd9e2bd4c4d0d2e08c1c4f24f3b52402787c11ee1036357bca1e" address="unix:///run/containerd/s/6bdbea1a7d3756e2202c59664a9152d34c487015b413038097f708c4d21f6f7a" protocol=ttrpc version=3 Sep 5 00:38:42.880567 containerd[1597]: time="2025-09-05T00:38:42.880531681Z" level=info msg="Container a909cae220be731b14b7087d3ca9cb43be534b1e3d8fa071e58d98b740d3f505: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:42.897761 containerd[1597]: time="2025-09-05T00:38:42.897709147Z" level=info msg="CreateContainer within sandbox \"1d2bf140b48d255ab0aeba630971c9a53fa4bee5870c5e2f8413de62546e37fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a909cae220be731b14b7087d3ca9cb43be534b1e3d8fa071e58d98b740d3f505\"" Sep 5 00:38:42.898315 containerd[1597]: 
time="2025-09-05T00:38:42.898291174Z" level=info msg="StartContainer for \"a909cae220be731b14b7087d3ca9cb43be534b1e3d8fa071e58d98b740d3f505\"" Sep 5 00:38:42.899150 containerd[1597]: time="2025-09-05T00:38:42.899083555Z" level=info msg="connecting to shim a909cae220be731b14b7087d3ca9cb43be534b1e3d8fa071e58d98b740d3f505" address="unix:///run/containerd/s/904fd344aaa5ecc9a38cf6808e346f5072393fc34f3b27e25e9e8a6fb4a3da9a" protocol=ttrpc version=3 Sep 5 00:38:42.902311 systemd[1]: Started cri-containerd-f260358a7f7cdd9e2bd4c4d0d2e08c1c4f24f3b52402787c11ee1036357bca1e.scope - libcontainer container f260358a7f7cdd9e2bd4c4d0d2e08c1c4f24f3b52402787c11ee1036357bca1e. Sep 5 00:38:42.925310 systemd[1]: Started cri-containerd-a909cae220be731b14b7087d3ca9cb43be534b1e3d8fa071e58d98b740d3f505.scope - libcontainer container a909cae220be731b14b7087d3ca9cb43be534b1e3d8fa071e58d98b740d3f505. Sep 5 00:38:42.959476 containerd[1597]: time="2025-09-05T00:38:42.959420911Z" level=info msg="StartContainer for \"f260358a7f7cdd9e2bd4c4d0d2e08c1c4f24f3b52402787c11ee1036357bca1e\" returns successfully" Sep 5 00:38:42.964721 containerd[1597]: time="2025-09-05T00:38:42.964638363Z" level=info msg="StartContainer for \"a909cae220be731b14b7087d3ca9cb43be534b1e3d8fa071e58d98b740d3f505\" returns successfully" Sep 5 00:38:43.621811 kubelet[2709]: E0905 00:38:43.621773 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:43.626629 kubelet[2709]: E0905 00:38:43.626595 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:43.643639 kubelet[2709]: I0905 00:38:43.643564 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5xcgf" podStartSLOduration=24.643537437 podStartE2EDuration="24.643537437s" podCreationTimestamp="2025-09-05 00:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:43.641825826 +0000 UTC m=+30.472420284" watchObservedRunningTime="2025-09-05 00:38:43.643537437 +0000 UTC m=+30.474131915" Sep 5 00:38:43.643844 kubelet[2709]: I0905 00:38:43.643669 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tccm9" podStartSLOduration=24.643665719 podStartE2EDuration="24.643665719s" podCreationTimestamp="2025-09-05 00:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:43.632101354 +0000 UTC m=+30.462695822" watchObservedRunningTime="2025-09-05 00:38:43.643665719 +0000 UTC m=+30.474260187" Sep 5 00:38:44.628915 kubelet[2709]: E0905 00:38:44.628863 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:44.629394 kubelet[2709]: E0905 00:38:44.628933 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:45.630384 kubelet[2709]: E0905 00:38:45.630344 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:45.630857 kubelet[2709]: E0905 00:38:45.630399 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:46.746415 systemd[1]: Started sshd@8-10.0.0.129:22-10.0.0.1:58530.service - OpenSSH per-connection server daemon (10.0.0.1:58530). Sep 5 00:38:46.812122 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 58530 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:38:46.813851 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:46.818992 systemd-logind[1570]: New session 9 of user core. Sep 5 00:38:46.828334 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:38:46.957327 sshd[4067]: Connection closed by 10.0.0.1 port 58530 Sep 5 00:38:46.957686 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:46.962410 systemd[1]: sshd@8-10.0.0.129:22-10.0.0.1:58530.service: Deactivated successfully. Sep 5 00:38:46.964639 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:38:46.965612 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:38:46.966891 systemd-logind[1570]: Removed session 9. Sep 5 00:38:51.969706 systemd[1]: Started sshd@9-10.0.0.129:22-10.0.0.1:35360.service - OpenSSH per-connection server daemon (10.0.0.1:35360). Sep 5 00:38:52.026247 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 35360 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:38:52.027801 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:52.032026 systemd-logind[1570]: New session 10 of user core. Sep 5 00:38:52.040541 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:38:52.154318 sshd[4086]: Connection closed by 10.0.0.1 port 35360 Sep 5 00:38:52.154640 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:52.157885 systemd[1]: sshd@9-10.0.0.129:22-10.0.0.1:35360.service: Deactivated successfully. Sep 5 00:38:52.159908 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:38:52.162239 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:38:52.163326 systemd-logind[1570]: Removed session 10. Sep 5 00:38:57.171312 systemd[1]: Started sshd@10-10.0.0.129:22-10.0.0.1:35366.service - OpenSSH per-connection server daemon (10.0.0.1:35366). Sep 5 00:38:57.213937 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 35366 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:38:57.215598 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:57.220193 systemd-logind[1570]: New session 11 of user core. Sep 5 00:38:57.230297 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:38:57.353825 sshd[4103]: Connection closed by 10.0.0.1 port 35366 Sep 5 00:38:57.354201 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:57.359311 systemd[1]: sshd@10-10.0.0.129:22-10.0.0.1:35366.service: Deactivated successfully. Sep 5 00:38:57.361801 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:38:57.362880 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:38:57.364624 systemd-logind[1570]: Removed session 11. 
Sep 5 00:39:02.369365 systemd[1]: Started sshd@11-10.0.0.129:22-10.0.0.1:42180.service - OpenSSH per-connection server daemon (10.0.0.1:42180). Sep 5 00:39:02.422193 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 42180 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:02.423497 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:02.427942 systemd-logind[1570]: New session 12 of user core. Sep 5 00:39:02.438297 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:39:02.580873 sshd[4119]: Connection closed by 10.0.0.1 port 42180 Sep 5 00:39:02.581287 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:02.593897 systemd[1]: sshd@11-10.0.0.129:22-10.0.0.1:42180.service: Deactivated successfully. Sep 5 00:39:02.595997 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:39:02.596776 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:39:02.599611 systemd[1]: Started sshd@12-10.0.0.129:22-10.0.0.1:42186.service - OpenSSH per-connection server daemon (10.0.0.1:42186). Sep 5 00:39:02.600660 systemd-logind[1570]: Removed session 12. Sep 5 00:39:02.655087 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 42186 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:02.656991 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:02.661719 systemd-logind[1570]: New session 13 of user core. Sep 5 00:39:02.672301 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:39:02.823752 sshd[4136]: Connection closed by 10.0.0.1 port 42186 Sep 5 00:39:02.825713 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:02.837487 systemd[1]: sshd@12-10.0.0.129:22-10.0.0.1:42186.service: Deactivated successfully. Sep 5 00:39:02.839985 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:39:02.840873 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:39:02.843946 systemd-logind[1570]: Removed session 13. Sep 5 00:39:02.845783 systemd[1]: Started sshd@13-10.0.0.129:22-10.0.0.1:42194.service - OpenSSH per-connection server daemon (10.0.0.1:42194). Sep 5 00:39:02.900551 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 42194 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:02.901871 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:02.906197 systemd-logind[1570]: New session 14 of user core. Sep 5 00:39:02.920278 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:39:03.034223 sshd[4150]: Connection closed by 10.0.0.1 port 42194 Sep 5 00:39:03.034510 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:03.038579 systemd[1]: sshd@13-10.0.0.129:22-10.0.0.1:42194.service: Deactivated successfully. Sep 5 00:39:03.040663 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:39:03.041465 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:39:03.042628 systemd-logind[1570]: Removed session 14. Sep 5 00:39:08.051010 systemd[1]: Started sshd@14-10.0.0.129:22-10.0.0.1:42200.service - OpenSSH per-connection server daemon (10.0.0.1:42200). 
Sep 5 00:39:08.111218 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 42200 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:08.112616 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:08.116944 systemd-logind[1570]: New session 15 of user core. Sep 5 00:39:08.130304 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:39:08.244099 sshd[4166]: Connection closed by 10.0.0.1 port 42200 Sep 5 00:39:08.244442 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:08.248344 systemd[1]: sshd@14-10.0.0.129:22-10.0.0.1:42200.service: Deactivated successfully. Sep 5 00:39:08.250346 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:39:08.251151 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:39:08.252421 systemd-logind[1570]: Removed session 15. Sep 5 00:39:13.257589 systemd[1]: Started sshd@15-10.0.0.129:22-10.0.0.1:54086.service - OpenSSH per-connection server daemon (10.0.0.1:54086). Sep 5 00:39:13.311935 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 54086 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:13.313469 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:13.317936 systemd-logind[1570]: New session 16 of user core. Sep 5 00:39:13.329298 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 00:39:13.443390 sshd[4182]: Connection closed by 10.0.0.1 port 54086 Sep 5 00:39:13.443805 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:13.455290 systemd[1]: sshd@15-10.0.0.129:22-10.0.0.1:54086.service: Deactivated successfully. Sep 5 00:39:13.457488 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:39:13.458507 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:39:13.461677 systemd[1]: Started sshd@16-10.0.0.129:22-10.0.0.1:54098.service - OpenSSH per-connection server daemon (10.0.0.1:54098). Sep 5 00:39:13.462520 systemd-logind[1570]: Removed session 16. Sep 5 00:39:13.514243 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 54098 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:13.516125 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:13.520940 systemd-logind[1570]: New session 17 of user core. Sep 5 00:39:13.541333 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:39:13.768050 sshd[4198]: Connection closed by 10.0.0.1 port 54098 Sep 5 00:39:13.768419 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:13.777673 systemd[1]: sshd@16-10.0.0.129:22-10.0.0.1:54098.service: Deactivated successfully. Sep 5 00:39:13.779967 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:39:13.780811 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. Sep 5 00:39:13.785013 systemd[1]: Started sshd@17-10.0.0.129:22-10.0.0.1:54102.service - OpenSSH per-connection server daemon (10.0.0.1:54102). Sep 5 00:39:13.785747 systemd-logind[1570]: Removed session 17. 
Sep 5 00:39:13.848083 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 54102 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:13.849585 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:13.854931 systemd-logind[1570]: New session 18 of user core. Sep 5 00:39:13.863317 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:39:15.307960 sshd[4211]: Connection closed by 10.0.0.1 port 54102 Sep 5 00:39:15.309832 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:15.318786 systemd[1]: sshd@17-10.0.0.129:22-10.0.0.1:54102.service: Deactivated successfully. Sep 5 00:39:15.321081 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 00:39:15.321859 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit. Sep 5 00:39:15.325192 systemd-logind[1570]: Removed session 18. Sep 5 00:39:15.327697 systemd[1]: Started sshd@18-10.0.0.129:22-10.0.0.1:54118.service - OpenSSH per-connection server daemon (10.0.0.1:54118). Sep 5 00:39:15.379998 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 54118 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:15.381484 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:15.386966 systemd-logind[1570]: New session 19 of user core. Sep 5 00:39:15.394305 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 00:39:15.616318 sshd[4236]: Connection closed by 10.0.0.1 port 54118 Sep 5 00:39:15.616963 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:15.628699 systemd[1]: sshd@18-10.0.0.129:22-10.0.0.1:54118.service: Deactivated successfully. Sep 5 00:39:15.630919 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 00:39:15.631829 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit. Sep 5 00:39:15.635506 systemd[1]: Started sshd@19-10.0.0.129:22-10.0.0.1:54122.service - OpenSSH per-connection server daemon (10.0.0.1:54122). Sep 5 00:39:15.636230 systemd-logind[1570]: Removed session 19. Sep 5 00:39:15.694410 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 54122 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:15.696267 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:15.700886 systemd-logind[1570]: New session 20 of user core. Sep 5 00:39:15.715501 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 00:39:15.827184 sshd[4250]: Connection closed by 10.0.0.1 port 54122 Sep 5 00:39:15.827969 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:15.832657 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:39:15.833354 systemd[1]: sshd@19-10.0.0.129:22-10.0.0.1:54122.service: Deactivated successfully. Sep 5 00:39:15.836660 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:39:15.840981 systemd-logind[1570]: Removed session 20. Sep 5 00:39:20.839294 systemd[1]: Started sshd@20-10.0.0.129:22-10.0.0.1:41836.service - OpenSSH per-connection server daemon (10.0.0.1:41836). 
Sep 5 00:39:20.904876 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 41836 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:20.906605 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:20.911282 systemd-logind[1570]: New session 21 of user core. Sep 5 00:39:20.919389 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 00:39:21.037627 sshd[4267]: Connection closed by 10.0.0.1 port 41836 Sep 5 00:39:21.037984 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:21.043344 systemd[1]: sshd@20-10.0.0.129:22-10.0.0.1:41836.service: Deactivated successfully. Sep 5 00:39:21.046121 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 00:39:21.047017 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit. Sep 5 00:39:21.048984 systemd-logind[1570]: Removed session 21. Sep 5 00:39:24.331527 kubelet[2709]: E0905 00:39:24.331469 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:26.056707 systemd[1]: Started sshd@21-10.0.0.129:22-10.0.0.1:41838.service - OpenSSH per-connection server daemon (10.0.0.1:41838). Sep 5 00:39:26.110079 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 41838 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:26.111625 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:26.116143 systemd-logind[1570]: New session 22 of user core. Sep 5 00:39:26.130321 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 00:39:26.243237 sshd[4286]: Connection closed by 10.0.0.1 port 41838 Sep 5 00:39:26.243586 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:26.247946 systemd[1]: sshd@21-10.0.0.129:22-10.0.0.1:41838.service: Deactivated successfully. Sep 5 00:39:26.249783 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 00:39:26.250669 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit. Sep 5 00:39:26.251924 systemd-logind[1570]: Removed session 22. Sep 5 00:39:26.322843 kubelet[2709]: E0905 00:39:26.322698 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:31.257063 systemd[1]: Started sshd@22-10.0.0.129:22-10.0.0.1:36278.service - OpenSSH per-connection server daemon (10.0.0.1:36278). Sep 5 00:39:31.310649 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 36278 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:31.312830 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:31.318369 systemd-logind[1570]: New session 23 of user core. Sep 5 00:39:31.328339 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 00:39:31.443457 sshd[4301]: Connection closed by 10.0.0.1 port 36278 Sep 5 00:39:31.443808 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:31.448489 systemd[1]: sshd@22-10.0.0.129:22-10.0.0.1:36278.service: Deactivated successfully. Sep 5 00:39:31.450504 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 00:39:31.451262 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit. 
Sep 5 00:39:31.452678 systemd-logind[1570]: Removed session 23. Sep 5 00:39:34.323749 kubelet[2709]: E0905 00:39:34.323696 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:36.456214 systemd[1]: Started sshd@23-10.0.0.129:22-10.0.0.1:36282.service - OpenSSH per-connection server daemon (10.0.0.1:36282). Sep 5 00:39:36.511335 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 36282 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:36.512856 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:36.517864 systemd-logind[1570]: New session 24 of user core. Sep 5 00:39:36.534316 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 00:39:36.644378 sshd[4317]: Connection closed by 10.0.0.1 port 36282 Sep 5 00:39:36.644714 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:36.655076 systemd[1]: sshd@23-10.0.0.129:22-10.0.0.1:36282.service: Deactivated successfully. Sep 5 00:39:36.657227 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 00:39:36.658003 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit. Sep 5 00:39:36.661330 systemd[1]: Started sshd@24-10.0.0.129:22-10.0.0.1:36288.service - OpenSSH per-connection server daemon (10.0.0.1:36288). Sep 5 00:39:36.661969 systemd-logind[1570]: Removed session 24. Sep 5 00:39:36.709382 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 36288 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:36.710757 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:36.715638 systemd-logind[1570]: New session 25 of user core. Sep 5 00:39:36.724297 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 00:39:38.253522 containerd[1597]: time="2025-09-05T00:39:38.253470346Z" level=info msg="StopContainer for \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" with timeout 30 (s)" Sep 5 00:39:38.254671 containerd[1597]: time="2025-09-05T00:39:38.254616307Z" level=info msg="Stop container \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" with signal terminated" Sep 5 00:39:38.267204 systemd[1]: cri-containerd-1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b.scope: Deactivated successfully. Sep 5 00:39:38.269431 containerd[1597]: time="2025-09-05T00:39:38.269398188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" id:\"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" pid:3255 exited_at:{seconds:1757032778 nanos:269021087}" Sep 5 00:39:38.269514 containerd[1597]: time="2025-09-05T00:39:38.269467630Z" level=info msg="received exit event container_id:\"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" id:\"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" pid:3255 exited_at:{seconds:1757032778 nanos:269021087}" Sep 5 00:39:38.290919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b-rootfs.mount: Deactivated successfully. 
Sep 5 00:39:38.304645 containerd[1597]: time="2025-09-05T00:39:38.304607422Z" level=info msg="StopContainer for \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" returns successfully" Sep 5 00:39:38.307968 containerd[1597]: time="2025-09-05T00:39:38.307924945Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:39:38.308757 containerd[1597]: time="2025-09-05T00:39:38.308703793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" id:\"1e1a54b98b06deb04c897323bb3e46fa4ba4fa3cd0f0038600aa963c6722945f\" pid:4369 exited_at:{seconds:1757032778 nanos:308229377}" Sep 5 00:39:38.310323 containerd[1597]: time="2025-09-05T00:39:38.310295176Z" level=info msg="StopPodSandbox for \"b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619\"" Sep 5 00:39:38.310402 containerd[1597]: time="2025-09-05T00:39:38.310381821Z" level=info msg="Container to stop \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:39:38.312196 containerd[1597]: time="2025-09-05T00:39:38.311816454Z" level=info msg="StopContainer for \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" with timeout 2 (s)" Sep 5 00:39:38.312196 containerd[1597]: time="2025-09-05T00:39:38.312098905Z" level=info msg="Stop container \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" with signal terminated" Sep 5 00:39:38.319989 systemd[1]: cri-containerd-b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619.scope: Deactivated successfully. Sep 5 00:39:38.320329 systemd-networkd[1486]: lxc_health: Link DOWN Sep 5 00:39:38.320334 systemd-networkd[1486]: lxc_health: Lost carrier Sep 5 00:39:38.329192 containerd[1597]: time="2025-09-05T00:39:38.327875858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619\" id:\"b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619\" pid:2951 exit_status:137 exited_at:{seconds:1757032778 nanos:327505179}" Sep 5 00:39:38.339685 systemd[1]: cri-containerd-7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82.scope: Deactivated successfully. Sep 5 00:39:38.340062 systemd[1]: cri-containerd-7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82.scope: Consumed 6.642s CPU time, 130.2M memory peak, 260K read from disk, 13.3M written to disk. Sep 5 00:39:38.341954 containerd[1597]: time="2025-09-05T00:39:38.341896502Z" level=info msg="received exit event container_id:\"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" id:\"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" pid:3365 exited_at:{seconds:1757032778 nanos:341553647}" Sep 5 00:39:38.361624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619-rootfs.mount: Deactivated successfully. 
Sep 5 00:39:38.366065 containerd[1597]: time="2025-09-05T00:39:38.365941663Z" level=info msg="shim disconnected" id=b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619 namespace=k8s.io Sep 5 00:39:38.366065 containerd[1597]: time="2025-09-05T00:39:38.365982641Z" level=warning msg="cleaning up after shim disconnected" id=b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619 namespace=k8s.io Sep 5 00:39:38.368692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82-rootfs.mount: Deactivated successfully. Sep 5 00:39:38.404497 containerd[1597]: time="2025-09-05T00:39:38.365993232Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:39:38.404695 containerd[1597]: time="2025-09-05T00:39:38.402627410Z" level=info msg="StopContainer for \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" returns successfully" Sep 5 00:39:38.405488 containerd[1597]: time="2025-09-05T00:39:38.405351358Z" level=info msg="StopPodSandbox for \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\"" Sep 5 00:39:38.405555 containerd[1597]: time="2025-09-05T00:39:38.405432533Z" level=info msg="Container to stop \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:39:38.405609 containerd[1597]: time="2025-09-05T00:39:38.405595785Z" level=info msg="Container to stop \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:39:38.405660 containerd[1597]: time="2025-09-05T00:39:38.405648576Z" level=info msg="Container to stop \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:39:38.405711 containerd[1597]: time="2025-09-05T00:39:38.405698512Z" level=info msg="Container to stop \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:39:38.405763 containerd[1597]: time="2025-09-05T00:39:38.405751894Z" level=info msg="Container to stop \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:39:38.412718 systemd[1]: cri-containerd-b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0.scope: Deactivated successfully. 
Sep 5 00:39:38.424060 kubelet[2709]: E0905 00:39:38.423976 2709 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 5 00:39:38.430793 containerd[1597]: time="2025-09-05T00:39:38.430666025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" id:\"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" pid:3365 exited_at:{seconds:1757032778 nanos:341553647}" Sep 5 00:39:38.430793 containerd[1597]: time="2025-09-05T00:39:38.430710150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" id:\"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" pid:2859 exit_status:137 exited_at:{seconds:1757032778 nanos:413737952}" Sep 5 00:39:38.432980 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619-shm.mount: Deactivated successfully. Sep 5 00:39:38.438202 containerd[1597]: time="2025-09-05T00:39:38.438116950Z" level=info msg="received exit event sandbox_id:\"b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619\" exit_status:137 exited_at:{seconds:1757032778 nanos:327505179}" Sep 5 00:39:38.443770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0-rootfs.mount: Deactivated successfully. Sep 5 00:39:38.446284 containerd[1597]: time="2025-09-05T00:39:38.446226414Z" level=info msg="TearDown network for sandbox \"b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619\" successfully" Sep 5 00:39:38.446284 containerd[1597]: time="2025-09-05T00:39:38.446282071Z" level=info msg="StopPodSandbox for \"b8559101b0a074a9f8ea99222aa25f077ce02920ee9fec0aa78e17cb78b95619\" returns successfully" Sep 5 00:39:38.447897 containerd[1597]: time="2025-09-05T00:39:38.447864786Z" level=info msg="shim disconnected" id=b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0 namespace=k8s.io Sep 5 00:39:38.447897 containerd[1597]: time="2025-09-05T00:39:38.447894403Z" level=warning msg="cleaning up after shim disconnected" id=b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0 namespace=k8s.io Sep 5 00:39:38.447972 containerd[1597]: time="2025-09-05T00:39:38.447901757Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:39:38.448022 containerd[1597]: time="2025-09-05T00:39:38.447986088Z" level=info msg="received exit event sandbox_id:\"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" exit_status:137 exited_at:{seconds:1757032778 nanos:413737952}" Sep 5 00:39:38.450055 containerd[1597]: time="2025-09-05T00:39:38.449114445Z" level=info msg="TearDown network for sandbox \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" successfully" Sep 5 00:39:38.450055 containerd[1597]: time="2025-09-05T00:39:38.449147829Z" level=info msg="StopPodSandbox for \"b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0\" returns successfully" Sep 5 00:39:38.478628 kubelet[2709]: I0905 00:39:38.478580 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-run\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " 
Sep 5 00:39:38.478831 kubelet[2709]: I0905 00:39:38.478644 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn2jk\" (UniqueName: \"kubernetes.io/projected/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-kube-api-access-jn2jk\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.478831 kubelet[2709]: I0905 00:39:38.478672 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:39:38.478831 kubelet[2709]: I0905 00:39:38.478706 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-hubble-tls\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.478831 kubelet[2709]: I0905 00:39:38.478741 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.482700 kubelet[2709]: I0905 00:39:38.482656 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-kube-api-access-jn2jk" (OuterVolumeSpecName: "kube-api-access-jn2jk") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "kube-api-access-jn2jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:39:38.482700 kubelet[2709]: I0905 00:39:38.482689 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:39:38.579146 kubelet[2709]: I0905 00:39:38.578978 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-hostproc\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579146 kubelet[2709]: I0905 00:39:38.579008 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-host-proc-sys-net\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579146 kubelet[2709]: I0905 00:39:38.579024 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-lib-modules\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579146 kubelet[2709]: I0905 00:39:38.579043 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cef6059-07e9-4fff-a462-1542bff93f97-cilium-config-path\") pod \"2cef6059-07e9-4fff-a462-1542bff93f97\" (UID: \"2cef6059-07e9-4fff-a462-1542bff93f97\") " Sep 5 00:39:38.579146 kubelet[2709]: I0905 00:39:38.579059 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-config-path\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579146 kubelet[2709]: I0905 00:39:38.579060 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:39:38.579511 kubelet[2709]: I0905 00:39:38.579071 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cni-path\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579511 kubelet[2709]: I0905 00:39:38.579082 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:39:38.579511 kubelet[2709]: I0905 00:39:38.579088 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:39:38.579511 kubelet[2709]: I0905 00:39:38.579098 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-cgroup\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579511 kubelet[2709]: I0905 00:39:38.579111 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:39:38.579632 kubelet[2709]: I0905 00:39:38.579127 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s28vx\" (UniqueName: \"kubernetes.io/projected/2cef6059-07e9-4fff-a462-1542bff93f97-kube-api-access-s28vx\") pod \"2cef6059-07e9-4fff-a462-1542bff93f97\" (UID: \"2cef6059-07e9-4fff-a462-1542bff93f97\") " Sep 5 00:39:38.579632 kubelet[2709]: I0905 00:39:38.579152 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-xtables-lock\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579632 kubelet[2709]: I0905 00:39:38.579191 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-clustermesh-secrets\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579632 kubelet[2709]: I0905 00:39:38.579220 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-host-proc-sys-kernel\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579632 kubelet[2709]: I0905 00:39:38.579234 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-etc-cni-netd\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579632 kubelet[2709]: I0905 00:39:38.579247 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-bpf-maps\") pod \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\" (UID: \"3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2\") " Sep 5 00:39:38.579774 kubelet[2709]: I0905 00:39:38.579294 2709 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.579774 kubelet[2709]: I0905 00:39:38.579304 2709 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn2jk\" (UniqueName: \"kubernetes.io/projected/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-kube-api-access-jn2jk\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.579774 kubelet[2709]: I0905 
00:39:38.579313 2709 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.579774 kubelet[2709]: I0905 00:39:38.579320 2709 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.579774 kubelet[2709]: I0905 00:39:38.579327 2709 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.579774 kubelet[2709]: I0905 00:39:38.579339 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.579774 kubelet[2709]: I0905 00:39:38.579356 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:39:38.579930 kubelet[2709]: I0905 00:39:38.579370 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:39:38.581957 kubelet[2709]: I0905 00:39:38.581923 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:39:38.582176 kubelet[2709]: I0905 00:39:38.582088 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:39:38.582247 kubelet[2709]: I0905 00:39:38.582177 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 00:39:38.582319 kubelet[2709]: I0905 00:39:38.582190 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cef6059-07e9-4fff-a462-1542bff93f97-kube-api-access-s28vx" (OuterVolumeSpecName: "kube-api-access-s28vx") pod "2cef6059-07e9-4fff-a462-1542bff93f97" (UID: "2cef6059-07e9-4fff-a462-1542bff93f97"). 
InnerVolumeSpecName "kube-api-access-s28vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:39:38.582877 kubelet[2709]: I0905 00:39:38.582859 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cef6059-07e9-4fff-a462-1542bff93f97-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2cef6059-07e9-4fff-a462-1542bff93f97" (UID: "2cef6059-07e9-4fff-a462-1542bff93f97"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 00:39:38.582933 kubelet[2709]: I0905 00:39:38.582890 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 00:39:38.584399 kubelet[2709]: I0905 00:39:38.584377 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" (UID: "3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 00:39:38.679903 kubelet[2709]: I0905 00:39:38.679870 2709 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.679903 kubelet[2709]: I0905 00:39:38.679903 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cef6059-07e9-4fff-a462-1542bff93f97-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.679903 kubelet[2709]: I0905 00:39:38.679912 2709 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s28vx\" (UniqueName: \"kubernetes.io/projected/2cef6059-07e9-4fff-a462-1542bff93f97-kube-api-access-s28vx\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.680072 kubelet[2709]: I0905 00:39:38.679921 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.680072 kubelet[2709]: I0905 00:39:38.679932 2709 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.680072 kubelet[2709]: I0905 00:39:38.679940 2709 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.680072 kubelet[2709]: I0905 00:39:38.679948 2709 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.680072 kubelet[2709]: I0905 00:39:38.679958 2709 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.680072 kubelet[2709]: I0905 00:39:38.679966 2709 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 5 00:39:38.737400 kubelet[2709]: I0905 00:39:38.737362 2709 scope.go:117] "RemoveContainer" containerID="1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b" Sep 5 00:39:38.738637 containerd[1597]: time="2025-09-05T00:39:38.738599543Z" level=info msg="RemoveContainer for \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\"" Sep 5 00:39:38.743413 systemd[1]: Removed slice kubepods-besteffort-pod2cef6059_07e9_4fff_a462_1542bff93f97.slice - libcontainer container kubepods-besteffort-pod2cef6059_07e9_4fff_a462_1542bff93f97.slice. Sep 5 00:39:38.749351 systemd[1]: Removed slice kubepods-burstable-pod3e9188ab_08b2_4d7b_9ede_6bb7aaeb85e2.slice - libcontainer container kubepods-burstable-pod3e9188ab_08b2_4d7b_9ede_6bb7aaeb85e2.slice. Sep 5 00:39:38.749476 systemd[1]: kubepods-burstable-pod3e9188ab_08b2_4d7b_9ede_6bb7aaeb85e2.slice: Consumed 6.762s CPU time, 130.5M memory peak, 268K read from disk, 13.3M written to disk. Sep 5 00:39:38.999743 containerd[1597]: time="2025-09-05T00:39:38.999684365Z" level=info msg="RemoveContainer for \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" returns successfully" Sep 5 00:39:39.000088 kubelet[2709]: I0905 00:39:39.000051 2709 scope.go:117] "RemoveContainer" containerID="1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b" Sep 5 00:39:39.007035 containerd[1597]: time="2025-09-05T00:39:39.000337113Z" level=error msg="ContainerStatus for \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\": not found" Sep 5 00:39:39.011413 kubelet[2709]: E0905 00:39:39.011375 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\": not found" containerID="1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b" Sep 5 00:39:39.011488 kubelet[2709]: I0905 00:39:39.011421 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b"} err="failed to get container status \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\": rpc error: code = NotFound desc = an error occurred when try to find container \"1317479fc67c350fcdddbf8817c32e1fe39e54dc3e760959690790c6cbc6f66b\": not found" Sep 5 00:39:39.011525 kubelet[2709]: I0905 00:39:39.011492 2709 scope.go:117] "RemoveContainer" containerID="7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82" Sep 5 00:39:39.013721 containerd[1597]: time="2025-09-05T00:39:39.013682484Z" level=info msg="RemoveContainer for \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\"" Sep 5 00:39:39.018491 containerd[1597]: time="2025-09-05T00:39:39.018454644Z" level=info msg="RemoveContainer for \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" returns successfully" Sep 5 00:39:39.018677 kubelet[2709]: I0905 
00:39:39.018641 2709 scope.go:117] "RemoveContainer" containerID="9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7" Sep 5 00:39:39.019913 containerd[1597]: time="2025-09-05T00:39:39.019870740Z" level=info msg="RemoveContainer for \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\"" Sep 5 00:39:39.028708 containerd[1597]: time="2025-09-05T00:39:39.028672739Z" level=info msg="RemoveContainer for \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\" returns successfully" Sep 5 00:39:39.028842 kubelet[2709]: I0905 00:39:39.028825 2709 scope.go:117] "RemoveContainer" containerID="3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b" Sep 5 00:39:39.072220 containerd[1597]: time="2025-09-05T00:39:39.072189214Z" level=info msg="RemoveContainer for \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\"" Sep 5 00:39:39.076425 containerd[1597]: time="2025-09-05T00:39:39.076389290Z" level=info msg="RemoveContainer for \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\" returns successfully" Sep 5 00:39:39.076533 kubelet[2709]: I0905 00:39:39.076506 2709 scope.go:117] "RemoveContainer" containerID="38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694" Sep 5 00:39:39.077622 containerd[1597]: time="2025-09-05T00:39:39.077597440Z" level=info msg="RemoveContainer for \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\"" Sep 5 00:39:39.081049 containerd[1597]: time="2025-09-05T00:39:39.081024919Z" level=info msg="RemoveContainer for \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\" returns successfully" Sep 5 00:39:39.081204 kubelet[2709]: I0905 00:39:39.081152 2709 scope.go:117] "RemoveContainer" containerID="7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910" Sep 5 00:39:39.097404 containerd[1597]: time="2025-09-05T00:39:39.097369161Z" level=info msg="RemoveContainer for \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\"" Sep 5 00:39:39.100726 containerd[1597]: time="2025-09-05T00:39:39.100691008Z" level=info msg="RemoveContainer for \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\" returns successfully" Sep 5 00:39:39.100860 kubelet[2709]: I0905 00:39:39.100832 2709 scope.go:117] "RemoveContainer" containerID="7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82" Sep 5 00:39:39.101068 containerd[1597]: time="2025-09-05T00:39:39.101018504Z" level=error msg="ContainerStatus for \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\": not found" Sep 5 00:39:39.101227 kubelet[2709]: E0905 00:39:39.101207 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\": not found" containerID="7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82" Sep 5 00:39:39.101271 kubelet[2709]: I0905 00:39:39.101236 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82"} err="failed to get container status \"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"7cdbdb07893bec7ede6116b19787884f640a59c77f97d7d079dcaae684047c82\": not found" Sep 5 00:39:39.101271 kubelet[2709]: I0905 00:39:39.101257 2709 scope.go:117] "RemoveContainer" containerID="9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7" Sep 5 00:39:39.101442 containerd[1597]: time="2025-09-05T00:39:39.101409261Z" level=error msg="ContainerStatus for \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\": not found" Sep 5 00:39:39.101558 kubelet[2709]: E0905 00:39:39.101537 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\": not found" containerID="9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7" Sep 5 00:39:39.101590 kubelet[2709]: I0905 00:39:39.101561 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7"} err="failed to get container status \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\": rpc error: code = NotFound desc = an error occurred when try to find container \"9aa4234acf350812a089db4323b3b97961a593b32afc481ed3c97a0a8ab2efc7\": not found" Sep 5 00:39:39.101590 kubelet[2709]: I0905 00:39:39.101574 2709 scope.go:117] "RemoveContainer" containerID="3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b" Sep 5 00:39:39.101743 containerd[1597]: time="2025-09-05T00:39:39.101712320Z" level=error msg="ContainerStatus for \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\": not found" Sep 5 00:39:39.101831 kubelet[2709]: E0905 00:39:39.101809 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\": not found" containerID="3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b" Sep 5 00:39:39.101884 kubelet[2709]: I0905 00:39:39.101831 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b"} err="failed to get container status \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3643e50af3b417737fd9d82257fb1a90771dd664cc499dea3fb475c05096ec2b\": not found" Sep 5 00:39:39.101884 kubelet[2709]: I0905 00:39:39.101844 2709 scope.go:117] "RemoveContainer" containerID="38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694" Sep 5 00:39:39.102000 containerd[1597]: time="2025-09-05T00:39:39.101974811Z" level=error msg="ContainerStatus for \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\": not found" Sep 5 00:39:39.102180 kubelet[2709]: E0905 00:39:39.102125 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\": not found" containerID="38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694" Sep 5 00:39:39.102220 kubelet[2709]: I0905 00:39:39.102186 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694"} err="failed to get container status \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\": rpc error: code = NotFound desc = an error occurred when try to find container \"38665839b1ba7ea18c7bf6c90a457bc3c68a3543ae13e8f334724f66fbe03694\": not found" Sep 5 00:39:39.102252 kubelet[2709]: I0905 00:39:39.102220 2709 scope.go:117] "RemoveContainer" containerID="7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910" Sep 5 00:39:39.102417 containerd[1597]: time="2025-09-05T00:39:39.102397660Z" level=error msg="ContainerStatus for \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\": not found" Sep 5 00:39:39.102498 kubelet[2709]: E0905 00:39:39.102478 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\": not found" containerID="7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910" Sep 5 00:39:39.102533 kubelet[2709]: I0905 00:39:39.102498 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910"} err="failed to get container status \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\": rpc error: code = NotFound desc = an error occurred when try to find container \"7461e85b9ac033a2c0c2a9eddcdac5c90532c32f96751a918d8692b9164f3910\": not found" Sep 5 00:39:39.290078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b177c1ff018aeabf8f8cbd1858afc9d9888137f625109338f77c2d7630229cf0-shm.mount: Deactivated successfully. Sep 5 00:39:39.290215 systemd[1]: var-lib-kubelet-pods-2cef6059\x2d07e9\x2d4fff\x2da462\x2d1542bff93f97-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds28vx.mount: Deactivated successfully. Sep 5 00:39:39.290295 systemd[1]: var-lib-kubelet-pods-3e9188ab\x2d08b2\x2d4d7b\x2d9ede\x2d6bb7aaeb85e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djn2jk.mount: Deactivated successfully. Sep 5 00:39:39.290370 systemd[1]: var-lib-kubelet-pods-3e9188ab\x2d08b2\x2d4d7b\x2d9ede\x2d6bb7aaeb85e2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 5 00:39:39.290446 systemd[1]: var-lib-kubelet-pods-3e9188ab\x2d08b2\x2d4d7b\x2d9ede\x2d6bb7aaeb85e2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 5 00:39:39.325763 kubelet[2709]: I0905 00:39:39.325696 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cef6059-07e9-4fff-a462-1542bff93f97" path="/var/lib/kubelet/pods/2cef6059-07e9-4fff-a462-1542bff93f97/volumes" Sep 5 00:39:39.326368 kubelet[2709]: I0905 00:39:39.326336 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" path="/var/lib/kubelet/pods/3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2/volumes" Sep 5 00:39:40.222373 sshd[4333]: Connection closed by 10.0.0.1 port 36288 Sep 5 00:39:40.222782 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:40.231695 systemd[1]: sshd@24-10.0.0.129:22-10.0.0.1:36288.service: Deactivated successfully. Sep 5 00:39:40.233483 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 00:39:40.234247 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit. Sep 5 00:39:40.236982 systemd[1]: Started sshd@25-10.0.0.129:22-10.0.0.1:52902.service - OpenSSH per-connection server daemon (10.0.0.1:52902). Sep 5 00:39:40.238010 systemd-logind[1570]: Removed session 25. Sep 5 00:39:40.293687 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 52902 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:40.295097 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:40.299786 systemd-logind[1570]: New session 26 of user core. Sep 5 00:39:40.311314 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 5 00:39:40.746100 sshd[4487]: Connection closed by 10.0.0.1 port 52902 Sep 5 00:39:40.746652 sshd-session[4485]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:40.761875 kubelet[2709]: E0905 00:39:40.760935 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" containerName="mount-cgroup" Sep 5 00:39:40.761875 kubelet[2709]: E0905 00:39:40.760974 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2cef6059-07e9-4fff-a462-1542bff93f97" containerName="cilium-operator" Sep 5 00:39:40.761875 kubelet[2709]: E0905 00:39:40.760983 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" containerName="apply-sysctl-overwrites" Sep 5 00:39:40.761875 kubelet[2709]: E0905 00:39:40.760993 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" containerName="mount-bpf-fs" Sep 5 00:39:40.761875 kubelet[2709]: E0905 00:39:40.761000 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" containerName="clean-cilium-state" Sep 5 00:39:40.761875 kubelet[2709]: E0905 00:39:40.761008 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" containerName="cilium-agent" Sep 5 00:39:40.761875 kubelet[2709]: I0905 00:39:40.761034 2709 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cef6059-07e9-4fff-a462-1542bff93f97" containerName="cilium-operator" Sep 5 00:39:40.761875 kubelet[2709]: I0905 00:39:40.761042 2709 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9188ab-08b2-4d7b-9ede-6bb7aaeb85e2" containerName="cilium-agent" Sep 5 00:39:40.764781 systemd[1]: sshd@25-10.0.0.129:22-10.0.0.1:52902.service: Deactivated successfully. Sep 5 00:39:40.774716 systemd[1]: session-26.scope: Deactivated successfully. 
Sep 5 00:39:40.779499 systemd-logind[1570]: Session 26 logged out. Waiting for processes to exit. Sep 5 00:39:40.784551 systemd[1]: Started sshd@26-10.0.0.129:22-10.0.0.1:52906.service - OpenSSH per-connection server daemon (10.0.0.1:52906). Sep 5 00:39:40.789521 systemd-logind[1570]: Removed session 26. Sep 5 00:39:40.793355 kubelet[2709]: I0905 00:39:40.793321 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abe8b40f-53f7-4538-a7b0-ddbb803588b4-cni-path\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.794450 kubelet[2709]: I0905 00:39:40.794373 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abe8b40f-53f7-4538-a7b0-ddbb803588b4-xtables-lock\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.796950 kubelet[2709]: I0905 00:39:40.796692 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abe8b40f-53f7-4538-a7b0-ddbb803588b4-host-proc-sys-kernel\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.796950 kubelet[2709]: I0905 00:39:40.796727 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abe8b40f-53f7-4538-a7b0-ddbb803588b4-host-proc-sys-net\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.796950 kubelet[2709]: I0905 00:39:40.796745 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abe8b40f-53f7-4538-a7b0-ddbb803588b4-clustermesh-secrets\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.796950 kubelet[2709]: I0905 00:39:40.796761 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abe8b40f-53f7-4538-a7b0-ddbb803588b4-hubble-tls\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.796950 kubelet[2709]: I0905 00:39:40.796775 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/abe8b40f-53f7-4538-a7b0-ddbb803588b4-cilium-ipsec-secrets\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.796950 kubelet[2709]: I0905 00:39:40.796788 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abe8b40f-53f7-4538-a7b0-ddbb803588b4-hostproc\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.797303 kubelet[2709]: I0905 00:39:40.796802 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/abe8b40f-53f7-4538-a7b0-ddbb803588b4-lib-modules\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.797303 kubelet[2709]: I0905 00:39:40.796840 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abe8b40f-53f7-4538-a7b0-ddbb803588b4-cilium-config-path\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.797303 kubelet[2709]: I0905 00:39:40.796857 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nptb8\" (UniqueName: \"kubernetes.io/projected/abe8b40f-53f7-4538-a7b0-ddbb803588b4-kube-api-access-nptb8\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.797303 kubelet[2709]: I0905 00:39:40.796875 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abe8b40f-53f7-4538-a7b0-ddbb803588b4-bpf-maps\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.797303 kubelet[2709]: I0905 00:39:40.796889 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abe8b40f-53f7-4538-a7b0-ddbb803588b4-cilium-cgroup\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.797303 kubelet[2709]: I0905 00:39:40.796910 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abe8b40f-53f7-4538-a7b0-ddbb803588b4-cilium-run\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.797497 kubelet[2709]: I0905 00:39:40.796924 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abe8b40f-53f7-4538-a7b0-ddbb803588b4-etc-cni-netd\") pod \"cilium-2bp2c\" (UID: \"abe8b40f-53f7-4538-a7b0-ddbb803588b4\") " pod="kube-system/cilium-2bp2c" Sep 5 00:39:40.801803 systemd[1]: Created slice kubepods-burstable-podabe8b40f_53f7_4538_a7b0_ddbb803588b4.slice - libcontainer container kubepods-burstable-podabe8b40f_53f7_4538_a7b0_ddbb803588b4.slice. Sep 5 00:39:40.840648 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 52906 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:40.841982 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:40.846447 systemd-logind[1570]: New session 27 of user core. Sep 5 00:39:40.863328 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 5 00:39:40.914968 sshd[4501]: Connection closed by 10.0.0.1 port 52906 Sep 5 00:39:40.915371 sshd-session[4499]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:40.929799 systemd[1]: sshd@26-10.0.0.129:22-10.0.0.1:52906.service: Deactivated successfully. Sep 5 00:39:40.931651 systemd[1]: session-27.scope: Deactivated successfully. Sep 5 00:39:40.932512 systemd-logind[1570]: Session 27 logged out. Waiting for processes to exit. 
Sep 5 00:39:40.935656 systemd[1]: Started sshd@27-10.0.0.129:22-10.0.0.1:52916.service - OpenSSH per-connection server daemon (10.0.0.1:52916). Sep 5 00:39:40.936258 systemd-logind[1570]: Removed session 27. Sep 5 00:39:40.988985 sshd[4512]: Accepted publickey for core from 10.0.0.1 port 52916 ssh2: RSA SHA256:QZjCu0nLhW2aUv/7CP56BUUr724Tkwn5NiHioL7y6XE Sep 5 00:39:40.990323 sshd-session[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:40.994902 systemd-logind[1570]: New session 28 of user core. Sep 5 00:39:41.006299 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 5 00:39:41.106936 kubelet[2709]: E0905 00:39:41.106889 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:41.108737 containerd[1597]: time="2025-09-05T00:39:41.108695303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2bp2c,Uid:abe8b40f-53f7-4538-a7b0-ddbb803588b4,Namespace:kube-system,Attempt:0,}" Sep 5 00:39:41.125894 containerd[1597]: time="2025-09-05T00:39:41.125842198Z" level=info msg="connecting to shim 098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5" address="unix:///run/containerd/s/7078dfe523b1097448ee454944b25043aca4f301d77dbf33ba46b99c8089bddf" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:39:41.151346 systemd[1]: Started cri-containerd-098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5.scope - libcontainer container 098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5. Sep 5 00:39:41.178082 containerd[1597]: time="2025-09-05T00:39:41.178034728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2bp2c,Uid:abe8b40f-53f7-4538-a7b0-ddbb803588b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\"" Sep 5 00:39:41.178799 kubelet[2709]: E0905 00:39:41.178775 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:41.180951 containerd[1597]: time="2025-09-05T00:39:41.180552374Z" level=info msg="CreateContainer within sandbox \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 00:39:41.192883 containerd[1597]: time="2025-09-05T00:39:41.192842878Z" level=info msg="Container a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:39:41.199887 containerd[1597]: time="2025-09-05T00:39:41.199846479Z" level=info msg="CreateContainer within sandbox \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568\"" Sep 5 00:39:41.200310 containerd[1597]: time="2025-09-05T00:39:41.200289104Z" level=info msg="StartContainer for \"a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568\"" Sep 5 00:39:41.201076 containerd[1597]: time="2025-09-05T00:39:41.201052161Z" level=info msg="connecting to shim a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568" address="unix:///run/containerd/s/7078dfe523b1097448ee454944b25043aca4f301d77dbf33ba46b99c8089bddf" protocol=ttrpc version=3 Sep 5 00:39:41.224305 systemd[1]: Started 
cri-containerd-a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568.scope - libcontainer container a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568. Sep 5 00:39:41.253431 containerd[1597]: time="2025-09-05T00:39:41.253385700Z" level=info msg="StartContainer for \"a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568\" returns successfully" Sep 5 00:39:41.261570 systemd[1]: cri-containerd-a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568.scope: Deactivated successfully. Sep 5 00:39:41.263290 containerd[1597]: time="2025-09-05T00:39:41.263234413Z" level=info msg="received exit event container_id:\"a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568\" id:\"a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568\" pid:4581 exited_at:{seconds:1757032781 nanos:262907689}" Sep 5 00:39:41.263608 containerd[1597]: time="2025-09-05T00:39:41.263579882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568\" id:\"a663869f0a38e4bfd50335c7a3ea7716d87ef40ec7febb00574dfa2e45ee2568\" pid:4581 exited_at:{seconds:1757032781 nanos:262907689}" Sep 5 00:39:41.754836 kubelet[2709]: E0905 00:39:41.754796 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:41.757081 containerd[1597]: time="2025-09-05T00:39:41.757016108Z" level=info msg="CreateContainer within sandbox \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 00:39:41.764937 containerd[1597]: time="2025-09-05T00:39:41.764871606Z" level=info msg="Container 790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:39:41.772475 containerd[1597]: time="2025-09-05T00:39:41.772434123Z" level=info msg="CreateContainer within sandbox \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c\"" Sep 5 00:39:41.772913 containerd[1597]: time="2025-09-05T00:39:41.772882870Z" level=info msg="StartContainer for \"790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c\"" Sep 5 00:39:41.773709 containerd[1597]: time="2025-09-05T00:39:41.773684601Z" level=info msg="connecting to shim 790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c" address="unix:///run/containerd/s/7078dfe523b1097448ee454944b25043aca4f301d77dbf33ba46b99c8089bddf" protocol=ttrpc version=3 Sep 5 00:39:41.798342 systemd[1]: Started cri-containerd-790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c.scope - libcontainer container 790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c. Sep 5 00:39:41.828030 containerd[1597]: time="2025-09-05T00:39:41.827964316Z" level=info msg="StartContainer for \"790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c\" returns successfully" Sep 5 00:39:41.834772 systemd[1]: cri-containerd-790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c.scope: Deactivated successfully. 
Sep 5 00:39:41.835335 containerd[1597]: time="2025-09-05T00:39:41.835288218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c\" id:\"790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c\" pid:4627 exited_at:{seconds:1757032781 nanos:834948799}" Sep 5 00:39:41.835427 containerd[1597]: time="2025-09-05T00:39:41.835388228Z" level=info msg="received exit event container_id:\"790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c\" id:\"790bbfcae3f16841192f939a505f33899a18e4e02d342a7c9841486ba1cf445c\" pid:4627 exited_at:{seconds:1757032781 nanos:834948799}" Sep 5 00:39:42.758877 kubelet[2709]: E0905 00:39:42.758843 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:42.761052 containerd[1597]: time="2025-09-05T00:39:42.761009378Z" level=info msg="CreateContainer within sandbox \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 00:39:42.784902 containerd[1597]: time="2025-09-05T00:39:42.784841125Z" level=info msg="Container 265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:39:42.958829 containerd[1597]: time="2025-09-05T00:39:42.958786624Z" level=info msg="CreateContainer within sandbox \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd\"" Sep 5 00:39:42.959327 containerd[1597]: time="2025-09-05T00:39:42.959300615Z" level=info msg="StartContainer for \"265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd\"" Sep 5 00:39:42.960803 containerd[1597]: time="2025-09-05T00:39:42.960770029Z" level=info msg="connecting to shim 265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd" address="unix:///run/containerd/s/7078dfe523b1097448ee454944b25043aca4f301d77dbf33ba46b99c8089bddf" protocol=ttrpc version=3 Sep 5 00:39:42.985323 systemd[1]: Started cri-containerd-265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd.scope - libcontainer container 265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd. Sep 5 00:39:43.186711 systemd[1]: cri-containerd-265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd.scope: Deactivated successfully. 
Sep 5 00:39:43.188740 containerd[1597]: time="2025-09-05T00:39:43.188706320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd\" id:\"265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd\" pid:4670 exited_at:{seconds:1757032783 nanos:188374737}" Sep 5 00:39:43.301800 containerd[1597]: time="2025-09-05T00:39:43.301736434Z" level=info msg="received exit event container_id:\"265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd\" id:\"265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd\" pid:4670 exited_at:{seconds:1757032783 nanos:188374737}" Sep 5 00:39:43.310465 containerd[1597]: time="2025-09-05T00:39:43.310418635Z" level=info msg="StartContainer for \"265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd\" returns successfully" Sep 5 00:39:43.323424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-265c7a8bfc1f56aae544d836ed5aae92dc5f6b63895e01252ebc7ec6337d54fd-rootfs.mount: Deactivated successfully. Sep 5 00:39:43.424724 kubelet[2709]: E0905 00:39:43.424675 2709 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 5 00:39:43.763746 kubelet[2709]: E0905 00:39:43.763704 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:43.767671 containerd[1597]: time="2025-09-05T00:39:43.767224740Z" level=info msg="CreateContainer within sandbox \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 00:39:43.818195 containerd[1597]: time="2025-09-05T00:39:43.818107024Z" level=info msg="Container 5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:39:43.826455 containerd[1597]: time="2025-09-05T00:39:43.826397757Z" level=info msg="CreateContainer within sandbox \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5\"" Sep 5 00:39:43.827112 containerd[1597]: time="2025-09-05T00:39:43.827059029Z" level=info msg="StartContainer for \"5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5\"" Sep 5 00:39:43.828107 containerd[1597]: time="2025-09-05T00:39:43.828071740Z" level=info msg="connecting to shim 5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5" address="unix:///run/containerd/s/7078dfe523b1097448ee454944b25043aca4f301d77dbf33ba46b99c8089bddf" protocol=ttrpc version=3 Sep 5 00:39:43.849458 systemd[1]: Started cri-containerd-5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5.scope - libcontainer container 5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5. Sep 5 00:39:43.878137 systemd[1]: cri-containerd-5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5.scope: Deactivated successfully. 
Sep 5 00:39:43.879326 containerd[1597]: time="2025-09-05T00:39:43.879280757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5\" id:\"5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5\" pid:4709 exited_at:{seconds:1757032783 nanos:878772858}" Sep 5 00:39:43.879740 containerd[1597]: time="2025-09-05T00:39:43.879700608Z" level=info msg="received exit event container_id:\"5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5\" id:\"5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5\" pid:4709 exited_at:{seconds:1757032783 nanos:878772858}" Sep 5 00:39:43.888234 containerd[1597]: time="2025-09-05T00:39:43.888145856Z" level=info msg="StartContainer for \"5d6831094b78748201eb42993b6035aa7a8b2e1f194faea87f7601287691a3e5\" returns successfully" Sep 5 00:39:44.771655 kubelet[2709]: E0905 00:39:44.770607 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:44.775843 containerd[1597]: time="2025-09-05T00:39:44.775792237Z" level=info msg="CreateContainer within sandbox \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 00:39:44.804554 kubelet[2709]: I0905 00:39:44.804496 2709 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-05T00:39:44Z","lastTransitionTime":"2025-09-05T00:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 5 00:39:44.818421 containerd[1597]: time="2025-09-05T00:39:44.818344767Z" level=info msg="Container f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:39:44.835537 containerd[1597]: time="2025-09-05T00:39:44.835480238Z" level=info msg="CreateContainer within sandbox \"098f579e8880860a5bc8536b4a6b58aa5ccece6bcb580457c1ac1279f9cf1cc5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130\"" Sep 5 00:39:44.836148 containerd[1597]: time="2025-09-05T00:39:44.836089570Z" level=info msg="StartContainer for \"f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130\"" Sep 5 00:39:44.837407 containerd[1597]: time="2025-09-05T00:39:44.837377135Z" level=info msg="connecting to shim f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130" address="unix:///run/containerd/s/7078dfe523b1097448ee454944b25043aca4f301d77dbf33ba46b99c8089bddf" protocol=ttrpc version=3 Sep 5 00:39:44.862303 systemd[1]: Started cri-containerd-f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130.scope - libcontainer container f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130. 
Sep 5 00:39:44.898823 containerd[1597]: time="2025-09-05T00:39:44.898774706Z" level=info msg="StartContainer for \"f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130\" returns successfully" Sep 5 00:39:44.971312 containerd[1597]: time="2025-09-05T00:39:44.971248284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130\" id:\"4a3204714ea907e21b4280a9ef51468c3533ba024d03e8feed8e982f999d91db\" pid:4776 exited_at:{seconds:1757032784 nanos:970844364}" Sep 5 00:39:45.320250 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 5 00:39:45.322962 kubelet[2709]: E0905 00:39:45.322922 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:45.776634 kubelet[2709]: E0905 00:39:45.776600 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:45.791086 kubelet[2709]: I0905 00:39:45.790853 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2bp2c" podStartSLOduration=5.790834274 podStartE2EDuration="5.790834274s" podCreationTimestamp="2025-09-05 00:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:39:45.790539433 +0000 UTC m=+92.621133901" watchObservedRunningTime="2025-09-05 00:39:45.790834274 +0000 UTC m=+92.621428732" Sep 5 00:39:47.108265 kubelet[2709]: E0905 00:39:47.108194 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:47.687595 containerd[1597]: time="2025-09-05T00:39:47.687545878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130\" id:\"ec901d0c52e71d90db6fab7cb3c80bab1bac49531d4d81ebe312f1dd054110bc\" pid:5052 exit_status:1 exited_at:{seconds:1757032787 nanos:686752487}" Sep 5 00:39:48.323287 kubelet[2709]: E0905 00:39:48.323217 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-tccm9" podUID="d6830763-afee-48e0-a08c-92e0c494cdfb" Sep 5 00:39:48.661998 systemd-networkd[1486]: lxc_health: Link UP Sep 5 00:39:48.664688 systemd-networkd[1486]: lxc_health: Gained carrier Sep 5 00:39:49.110187 kubelet[2709]: E0905 00:39:49.109261 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:49.787104 kubelet[2709]: E0905 00:39:49.787047 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:49.798816 containerd[1597]: time="2025-09-05T00:39:49.798744635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130\" id:\"7a6cae4a2b4938660258126c04d5d1d6a5749c7512d95374b38a82e37ec10916\" pid:5315 
exited_at:{seconds:1757032789 nanos:798310388}" Sep 5 00:39:50.323515 kubelet[2709]: E0905 00:39:50.323458 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:50.636249 systemd-networkd[1486]: lxc_health: Gained IPv6LL Sep 5 00:39:50.789051 kubelet[2709]: E0905 00:39:50.789006 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:51.919042 containerd[1597]: time="2025-09-05T00:39:51.918990810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130\" id:\"b4ca6053e699dff2957c4ccf20196d42470f514fb28196e8583cf8b86d410f08\" pid:5343 exited_at:{seconds:1757032791 nanos:918661464}" Sep 5 00:39:51.924553 kubelet[2709]: E0905 00:39:51.924525 2709 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60778->127.0.0.1:46453: write tcp 127.0.0.1:60778->127.0.0.1:46453: write: broken pipe Sep 5 00:39:54.009435 containerd[1597]: time="2025-09-05T00:39:54.009374072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130\" id:\"7f21425cdbc028925c176958a838cfc952e4f2f1a5a9dc7cf281caf5523c7f76\" pid:5375 exited_at:{seconds:1757032794 nanos:8985403}" Sep 5 00:39:56.104703 containerd[1597]: time="2025-09-05T00:39:56.104655142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130\" id:\"e30bee65830a12fd0068ab1361253ba33f5bb3e306de49b65f2df9fab7619cb1\" pid:5399 exited_at:{seconds:1757032796 nanos:104351005}" Sep 5 00:39:58.207565 containerd[1597]: time="2025-09-05T00:39:58.207507146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3ad02d9fb346f2b9f129af40aa79bb1ee5084ce79b6a72eae4d756258a33130\" id:\"21f77c0ccd39b8f1bcfa803f9549e79d57a34335e5600cfb6b8cd0f0e490a050\" pid:5423 exited_at:{seconds:1757032798 nanos:206884504}" Sep 5 00:39:58.213991 sshd[4515]: Connection closed by 10.0.0.1 port 52916 Sep 5 00:39:58.214463 sshd-session[4512]: pam_unix(sshd:session): session closed for user core Sep 5 00:39:58.219104 systemd[1]: sshd@27-10.0.0.129:22-10.0.0.1:52916.service: Deactivated successfully. Sep 5 00:39:58.221324 systemd[1]: session-28.scope: Deactivated successfully. Sep 5 00:39:58.222104 systemd-logind[1570]: Session 28 logged out. Waiting for processes to exit. Sep 5 00:39:58.223556 systemd-logind[1570]: Removed session 28.