Sep 10 05:22:21.805235 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 03:32:41 -00 2025 Sep 10 05:22:21.805257 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cb34a525c000ff57e16870cd9f0af09c033a700c5f8ee35d58f46d8926fcf6e5 Sep 10 05:22:21.805268 kernel: BIOS-provided physical RAM map: Sep 10 05:22:21.805275 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 10 05:22:21.805281 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 10 05:22:21.805292 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 10 05:22:21.805300 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 10 05:22:21.805306 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 10 05:22:21.805313 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 10 05:22:21.805322 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 10 05:22:21.805329 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 10 05:22:21.805335 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 10 05:22:21.805342 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 10 05:22:21.805350 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 10 05:22:21.805361 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 10 05:22:21.805372 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 10 05:22:21.805465 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 10 05:22:21.805472 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 10 05:22:21.805481 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 10 05:22:21.805490 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 10 05:22:21.805497 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 10 05:22:21.805504 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 10 05:22:21.805511 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 10 05:22:21.805517 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 10 05:22:21.805524 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 10 05:22:21.805534 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 10 05:22:21.805541 kernel: NX (Execute Disable) protection: active Sep 10 05:22:21.805548 kernel: APIC: Static calls initialized Sep 10 05:22:21.805555 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Sep 10 05:22:21.805562 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Sep 10 05:22:21.805569 kernel: extended physical RAM map: Sep 10 05:22:21.805576 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 10 05:22:21.805583 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 10 05:22:21.805590 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 10 05:22:21.805597 kernel: reserve 
setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 10 05:22:21.805604 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 10 05:22:21.805613 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 10 05:22:21.805620 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 10 05:22:21.805626 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Sep 10 05:22:21.805634 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Sep 10 05:22:21.805644 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Sep 10 05:22:21.805651 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Sep 10 05:22:21.805660 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Sep 10 05:22:21.805667 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 10 05:22:21.805674 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 10 05:22:21.805682 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 10 05:22:21.805689 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 10 05:22:21.805696 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 10 05:22:21.805703 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 10 05:22:21.805711 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 10 05:22:21.805718 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 10 05:22:21.805725 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 10 05:22:21.805735 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 10 05:22:21.805742 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 10 05:22:21.805749 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 10 05:22:21.805756 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 10 05:22:21.805764 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 10 05:22:21.805771 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 10 05:22:21.805786 kernel: efi: EFI v2.7 by EDK II Sep 10 05:22:21.805794 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Sep 10 05:22:21.805803 kernel: random: crng init done Sep 10 05:22:21.805812 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 10 05:22:21.805820 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 10 05:22:21.805829 kernel: secureboot: Secure boot disabled Sep 10 05:22:21.805836 kernel: SMBIOS 2.8 present. 
Sep 10 05:22:21.805844 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 10 05:22:21.805851 kernel: DMI: Memory slots populated: 1/1 Sep 10 05:22:21.805858 kernel: Hypervisor detected: KVM Sep 10 05:22:21.805865 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 10 05:22:21.805873 kernel: kvm-clock: using sched offset of 3571104479 cycles Sep 10 05:22:21.805882 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 10 05:22:21.805892 kernel: tsc: Detected 2794.748 MHz processor Sep 10 05:22:21.805899 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 10 05:22:21.805907 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 10 05:22:21.805916 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 10 05:22:21.805924 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 10 05:22:21.805931 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 10 05:22:21.805939 kernel: Using GB pages for direct mapping Sep 10 05:22:21.805946 kernel: ACPI: Early table checksum verification disabled Sep 10 05:22:21.805954 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 10 05:22:21.805961 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 10 05:22:21.805969 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:22:21.805977 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:22:21.805986 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 10 05:22:21.805993 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:22:21.806001 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:22:21.806008 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:22:21.806015 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:22:21.806023 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 10 05:22:21.806030 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 10 05:22:21.806038 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 10 05:22:21.806045 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 10 05:22:21.806055 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 10 05:22:21.806062 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 10 05:22:21.806069 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 10 05:22:21.806085 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 10 05:22:21.806093 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 10 05:22:21.806101 kernel: No NUMA configuration found Sep 10 05:22:21.806116 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 10 05:22:21.806124 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Sep 10 05:22:21.806131 kernel: Zone ranges: Sep 10 05:22:21.806141 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 10 05:22:21.806148 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 10 05:22:21.806156 kernel: Normal empty Sep 10 05:22:21.806167 kernel: Device empty Sep 10 05:22:21.806174 kernel: Movable zone start for each node Sep 10 05:22:21.806183 
kernel: Early memory node ranges Sep 10 05:22:21.806193 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 10 05:22:21.806200 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 10 05:22:21.806207 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 10 05:22:21.806215 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 10 05:22:21.806224 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 10 05:22:21.806232 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 10 05:22:21.806239 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Sep 10 05:22:21.806246 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Sep 10 05:22:21.806253 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 10 05:22:21.806262 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 10 05:22:21.806272 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 10 05:22:21.806288 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 10 05:22:21.806296 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 10 05:22:21.806304 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 10 05:22:21.806311 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 10 05:22:21.806319 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 10 05:22:21.806329 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 10 05:22:21.806336 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 10 05:22:21.806344 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 10 05:22:21.806352 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 10 05:22:21.806360 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 10 05:22:21.806369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 10 05:22:21.806390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 10 05:22:21.806398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 10 05:22:21.806405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 10 05:22:21.806413 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 10 05:22:21.806421 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 10 05:22:21.806428 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 10 05:22:21.806436 kernel: TSC deadline timer available Sep 10 05:22:21.806444 kernel: CPU topo: Max. logical packages: 1 Sep 10 05:22:21.806454 kernel: CPU topo: Max. logical dies: 1 Sep 10 05:22:21.806462 kernel: CPU topo: Max. dies per package: 1 Sep 10 05:22:21.806469 kernel: CPU topo: Max. threads per core: 1 Sep 10 05:22:21.806477 kernel: CPU topo: Num. cores per package: 4 Sep 10 05:22:21.806484 kernel: CPU topo: Num. 
threads per package: 4 Sep 10 05:22:21.806492 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 10 05:22:21.806500 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 10 05:22:21.806508 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 10 05:22:21.806515 kernel: kvm-guest: setup PV sched yield Sep 10 05:22:21.806525 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 10 05:22:21.806533 kernel: Booting paravirtualized kernel on KVM Sep 10 05:22:21.806541 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 10 05:22:21.806549 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 10 05:22:21.806558 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 10 05:22:21.806568 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 10 05:22:21.806576 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 10 05:22:21.806583 kernel: kvm-guest: PV spinlocks enabled Sep 10 05:22:21.806591 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 10 05:22:21.806602 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cb34a525c000ff57e16870cd9f0af09c033a700c5f8ee35d58f46d8926fcf6e5 Sep 10 05:22:21.806611 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 10 05:22:21.806618 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 10 05:22:21.806626 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 10 05:22:21.806635 kernel: Fallback order for Node 0: 0 Sep 10 05:22:21.806645 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Sep 10 05:22:21.806653 kernel: Policy zone: DMA32 Sep 10 05:22:21.806661 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 10 05:22:21.806670 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 10 05:22:21.806678 kernel: ftrace: allocating 40102 entries in 157 pages Sep 10 05:22:21.806686 kernel: ftrace: allocated 157 pages with 5 groups Sep 10 05:22:21.806694 kernel: Dynamic Preempt: voluntary Sep 10 05:22:21.806701 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 10 05:22:21.806710 kernel: rcu: RCU event tracing is enabled. Sep 10 05:22:21.806717 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 10 05:22:21.806725 kernel: Trampoline variant of Tasks RCU enabled. Sep 10 05:22:21.806733 kernel: Rude variant of Tasks RCU enabled. Sep 10 05:22:21.806741 kernel: Tracing variant of Tasks RCU enabled. Sep 10 05:22:21.806751 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 10 05:22:21.806759 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 10 05:22:21.806767 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 10 05:22:21.806781 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 10 05:22:21.806789 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 10 05:22:21.806798 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 10 05:22:21.806806 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 10 05:22:21.806814 kernel: Console: colour dummy device 80x25 Sep 10 05:22:21.806821 kernel: printk: legacy console [ttyS0] enabled Sep 10 05:22:21.806831 kernel: ACPI: Core revision 20240827 Sep 10 05:22:21.806839 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 10 05:22:21.806847 kernel: APIC: Switch to symmetric I/O mode setup Sep 10 05:22:21.806854 kernel: x2apic enabled Sep 10 05:22:21.806862 kernel: APIC: Switched APIC routing to: physical x2apic Sep 10 05:22:21.806870 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 10 05:22:21.806878 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 10 05:22:21.806885 kernel: kvm-guest: setup PV IPIs Sep 10 05:22:21.806893 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 10 05:22:21.806903 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 10 05:22:21.806911 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 10 05:22:21.806919 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 10 05:22:21.806926 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 10 05:22:21.806934 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 10 05:22:21.806942 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 10 05:22:21.806951 kernel: Spectre V2 : Mitigation: Retpolines Sep 10 05:22:21.806961 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 10 05:22:21.806970 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 10 05:22:21.806979 kernel: active return thunk: retbleed_return_thunk Sep 10 05:22:21.806986 kernel: RETBleed: Mitigation: untrained return thunk Sep 10 05:22:21.806994 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 10 05:22:21.807002 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 10 05:22:21.807010 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 10 05:22:21.807018 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 10 05:22:21.807027 kernel: active return thunk: srso_return_thunk Sep 10 05:22:21.807037 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 10 05:22:21.807047 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 10 05:22:21.807055 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 10 05:22:21.807063 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 10 05:22:21.807071 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 10 05:22:21.807078 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 10 05:22:21.807086 kernel: Freeing SMP alternatives memory: 32K Sep 10 05:22:21.807094 kernel: pid_max: default: 32768 minimum: 301 Sep 10 05:22:21.807103 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 10 05:22:21.807112 kernel: landlock: Up and running. 
Sep 10 05:22:21.807123 kernel: SELinux: Initializing. Sep 10 05:22:21.807131 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 05:22:21.807139 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 05:22:21.807147 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 10 05:22:21.807155 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 10 05:22:21.807162 kernel: ... version: 0 Sep 10 05:22:21.807170 kernel: ... bit width: 48 Sep 10 05:22:21.807178 kernel: ... generic registers: 6 Sep 10 05:22:21.807185 kernel: ... value mask: 0000ffffffffffff Sep 10 05:22:21.807195 kernel: ... max period: 00007fffffffffff Sep 10 05:22:21.807203 kernel: ... fixed-purpose events: 0 Sep 10 05:22:21.807210 kernel: ... event mask: 000000000000003f Sep 10 05:22:21.807218 kernel: signal: max sigframe size: 1776 Sep 10 05:22:21.807225 kernel: rcu: Hierarchical SRCU implementation. Sep 10 05:22:21.807233 kernel: rcu: Max phase no-delay instances is 400. Sep 10 05:22:21.807241 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 10 05:22:21.807249 kernel: smp: Bringing up secondary CPUs ... Sep 10 05:22:21.807257 kernel: smpboot: x86: Booting SMP configuration: Sep 10 05:22:21.807266 kernel: .... node #0, CPUs: #1 #2 #3 Sep 10 05:22:21.807274 kernel: smp: Brought up 1 node, 4 CPUs Sep 10 05:22:21.807282 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 10 05:22:21.807290 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54068K init, 2900K bss, 137196K reserved, 0K cma-reserved) Sep 10 05:22:21.807297 kernel: devtmpfs: initialized Sep 10 05:22:21.807307 kernel: x86/mm: Memory block size: 128MB Sep 10 05:22:21.807316 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 10 05:22:21.807324 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 10 05:22:21.807332 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 10 05:22:21.807342 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 10 05:22:21.807350 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Sep 10 05:22:21.807357 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 10 05:22:21.807365 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 10 05:22:21.807373 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 10 05:22:21.807394 kernel: pinctrl core: initialized pinctrl subsystem Sep 10 05:22:21.807402 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 10 05:22:21.807410 kernel: audit: initializing netlink subsys (disabled) Sep 10 05:22:21.807418 kernel: audit: type=2000 audit(1757481739.453:1): state=initialized audit_enabled=0 res=1 Sep 10 05:22:21.807428 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 10 05:22:21.807435 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 10 05:22:21.807443 kernel: cpuidle: using governor menu Sep 10 05:22:21.807451 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 10 05:22:21.807458 kernel: dca service started, version 1.12.1 Sep 10 05:22:21.807466 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 
[bus 00-ff] Sep 10 05:22:21.807474 kernel: PCI: Using configuration type 1 for base access Sep 10 05:22:21.807481 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 10 05:22:21.807491 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 10 05:22:21.807498 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 10 05:22:21.807506 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 10 05:22:21.807514 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 10 05:22:21.807521 kernel: ACPI: Added _OSI(Module Device) Sep 10 05:22:21.807529 kernel: ACPI: Added _OSI(Processor Device) Sep 10 05:22:21.807536 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 10 05:22:21.807544 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 10 05:22:21.807551 kernel: ACPI: Interpreter enabled Sep 10 05:22:21.807561 kernel: ACPI: PM: (supports S0 S3 S5) Sep 10 05:22:21.807569 kernel: ACPI: Using IOAPIC for interrupt routing Sep 10 05:22:21.807576 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 10 05:22:21.807584 kernel: PCI: Using E820 reservations for host bridge windows Sep 10 05:22:21.807592 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 10 05:22:21.807599 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 10 05:22:21.807787 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 10 05:22:21.807909 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 10 05:22:21.808027 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 10 05:22:21.808037 kernel: PCI host bridge to bus 0000:00 Sep 10 05:22:21.808170 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 10 05:22:21.808276 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 10 05:22:21.808451 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 10 05:22:21.808587 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 10 05:22:21.808691 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 10 05:22:21.808813 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 10 05:22:21.808922 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 10 05:22:21.809056 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 10 05:22:21.809188 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 10 05:22:21.809308 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 10 05:22:21.809440 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 10 05:22:21.809567 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 10 05:22:21.809693 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 10 05:22:21.809855 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 10 05:22:21.810002 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 10 05:22:21.810142 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 10 05:22:21.810266 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 10 05:22:21.810417 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 10 05:22:21.810552 
kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 10 05:22:21.810675 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Sep 10 05:22:21.810801 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 10 05:22:21.810929 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 10 05:22:21.811063 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 10 05:22:21.811180 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 10 05:22:21.811293 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 10 05:22:21.811443 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 10 05:22:21.811565 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 10 05:22:21.811678 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 10 05:22:21.811817 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 10 05:22:21.811933 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 10 05:22:21.812052 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 10 05:22:21.812194 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 10 05:22:21.812315 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 10 05:22:21.812326 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 10 05:22:21.812334 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 10 05:22:21.812342 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 10 05:22:21.812350 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 10 05:22:21.812357 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 10 05:22:21.812365 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 10 05:22:21.812373 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 10 05:22:21.812414 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 10 05:22:21.812421 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 10 05:22:21.812429 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 10 05:22:21.812437 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 10 05:22:21.812445 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 10 05:22:21.812452 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 10 05:22:21.812460 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 10 05:22:21.812468 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 10 05:22:21.812476 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 10 05:22:21.812486 kernel: iommu: Default domain type: Translated Sep 10 05:22:21.812494 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 10 05:22:21.812502 kernel: efivars: Registered efivars operations Sep 10 05:22:21.812509 kernel: PCI: Using ACPI for IRQ routing Sep 10 05:22:21.812517 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 10 05:22:21.812525 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 10 05:22:21.812532 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 10 05:22:21.812540 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Sep 10 05:22:21.812548 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Sep 10 05:22:21.812558 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 10 05:22:21.812565 kernel: e820: reserve RAM buffer [mem 
0x9c8ed000-0x9fffffff] Sep 10 05:22:21.812573 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Sep 10 05:22:21.812581 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 10 05:22:21.812699 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 10 05:22:21.812824 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 10 05:22:21.812936 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 10 05:22:21.812946 kernel: vgaarb: loaded Sep 10 05:22:21.812958 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 10 05:22:21.812966 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 10 05:22:21.812973 kernel: clocksource: Switched to clocksource kvm-clock Sep 10 05:22:21.812981 kernel: VFS: Disk quotas dquot_6.6.0 Sep 10 05:22:21.812989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 10 05:22:21.812997 kernel: pnp: PnP ACPI init Sep 10 05:22:21.813132 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 10 05:22:21.813146 kernel: pnp: PnP ACPI: found 6 devices Sep 10 05:22:21.813156 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 10 05:22:21.813165 kernel: NET: Registered PF_INET protocol family Sep 10 05:22:21.813173 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 10 05:22:21.813181 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 10 05:22:21.813189 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 10 05:22:21.813198 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 10 05:22:21.813206 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 10 05:22:21.813214 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 10 05:22:21.813222 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 05:22:21.813232 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 05:22:21.813240 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 10 05:22:21.813248 kernel: NET: Registered PF_XDP protocol family Sep 10 05:22:21.813365 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 10 05:22:21.813495 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 10 05:22:21.813603 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 10 05:22:21.813708 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 10 05:22:21.813823 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 10 05:22:21.813934 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 10 05:22:21.814043 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 10 05:22:21.814147 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 10 05:22:21.814158 kernel: PCI: CLS 0 bytes, default 64 Sep 10 05:22:21.814166 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 10 05:22:21.814175 kernel: Initialise system trusted keyrings Sep 10 05:22:21.814185 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 10 05:22:21.814193 kernel: Key type asymmetric registered Sep 10 05:22:21.814201 kernel: Asymmetric key parser 'x509' registered Sep 10 05:22:21.814210 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Sep 10 05:22:21.814218 kernel: io scheduler mq-deadline registered Sep 10 05:22:21.814226 kernel: io scheduler kyber registered Sep 10 05:22:21.814234 kernel: io scheduler bfq registered Sep 10 05:22:21.814242 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 10 05:22:21.814252 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 10 05:22:21.814261 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 10 05:22:21.814269 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 10 05:22:21.814277 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 10 05:22:21.814285 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 10 05:22:21.814293 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 10 05:22:21.814301 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 10 05:22:21.814309 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 10 05:22:21.814445 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 10 05:22:21.814561 kernel: rtc_cmos 00:04: registered as rtc0 Sep 10 05:22:21.814572 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 10 05:22:21.814678 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T05:22:21 UTC (1757481741) Sep 10 05:22:21.814794 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 10 05:22:21.814806 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 10 05:22:21.814814 kernel: efifb: probing for efifb Sep 10 05:22:21.814822 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 10 05:22:21.814830 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 10 05:22:21.814841 kernel: efifb: scrolling: redraw Sep 10 05:22:21.814849 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 10 05:22:21.814857 kernel: Console: switching to colour frame buffer device 160x50 Sep 10 05:22:21.814866 kernel: fb0: EFI VGA frame buffer device Sep 10 05:22:21.814874 kernel: pstore: Using crash dump compression: deflate Sep 10 05:22:21.814882 kernel: pstore: Registered efi_pstore as persistent store backend Sep 10 05:22:21.814890 kernel: NET: Registered PF_INET6 protocol family Sep 10 05:22:21.814898 kernel: Segment Routing with IPv6 Sep 10 05:22:21.814906 kernel: In-situ OAM (IOAM) with IPv6 Sep 10 05:22:21.814916 kernel: NET: Registered PF_PACKET protocol family Sep 10 05:22:21.814924 kernel: Key type dns_resolver registered Sep 10 05:22:21.814932 kernel: IPI shorthand broadcast: enabled Sep 10 05:22:21.814940 kernel: sched_clock: Marking stable (2967002434, 152295170)->(3136643867, -17346263) Sep 10 05:22:21.814947 kernel: registered taskstats version 1 Sep 10 05:22:21.814955 kernel: Loading compiled-in X.509 certificates Sep 10 05:22:21.814963 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: f6c45bc801b894d4dac30a723f1f683ea8f7e3ae' Sep 10 05:22:21.814971 kernel: Demotion targets for Node 0: null Sep 10 05:22:21.814979 kernel: Key type .fscrypt registered Sep 10 05:22:21.814989 kernel: Key type fscrypt-provisioning registered Sep 10 05:22:21.814997 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 10 05:22:21.815004 kernel: ima: Allocated hash algorithm: sha1 Sep 10 05:22:21.815012 kernel: ima: No architecture policies found Sep 10 05:22:21.815020 kernel: clk: Disabling unused clocks Sep 10 05:22:21.815028 kernel: Warning: unable to open an initial console. 
Sep 10 05:22:21.815036 kernel: Freeing unused kernel image (initmem) memory: 54068K Sep 10 05:22:21.815044 kernel: Write protecting the kernel read-only data: 24576k Sep 10 05:22:21.815054 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 10 05:22:21.815062 kernel: Run /init as init process Sep 10 05:22:21.815070 kernel: with arguments: Sep 10 05:22:21.815078 kernel: /init Sep 10 05:22:21.815086 kernel: with environment: Sep 10 05:22:21.815093 kernel: HOME=/ Sep 10 05:22:21.815101 kernel: TERM=linux Sep 10 05:22:21.815109 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 10 05:22:21.815118 systemd[1]: Successfully made /usr/ read-only. Sep 10 05:22:21.815131 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 05:22:21.815141 systemd[1]: Detected virtualization kvm. Sep 10 05:22:21.815149 systemd[1]: Detected architecture x86-64. Sep 10 05:22:21.815157 systemd[1]: Running in initrd. Sep 10 05:22:21.815166 systemd[1]: No hostname configured, using default hostname. Sep 10 05:22:21.815174 systemd[1]: Hostname set to . Sep 10 05:22:21.815183 systemd[1]: Initializing machine ID from VM UUID. Sep 10 05:22:21.815191 systemd[1]: Queued start job for default target initrd.target. Sep 10 05:22:21.815201 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 05:22:21.815210 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 05:22:21.815219 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 10 05:22:21.815228 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 05:22:21.815237 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 10 05:22:21.815246 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 10 05:22:21.815258 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 10 05:22:21.815267 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 10 05:22:21.815276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 05:22:21.815285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 05:22:21.815294 systemd[1]: Reached target paths.target - Path Units. Sep 10 05:22:21.815303 systemd[1]: Reached target slices.target - Slice Units. Sep 10 05:22:21.815311 systemd[1]: Reached target swap.target - Swaps. Sep 10 05:22:21.815320 systemd[1]: Reached target timers.target - Timer Units. Sep 10 05:22:21.815329 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 05:22:21.815340 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 05:22:21.815349 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 10 05:22:21.815358 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 10 05:22:21.815367 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 10 05:22:21.815376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 05:22:21.815402 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 05:22:21.815410 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 05:22:21.815419 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 10 05:22:21.815430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 05:22:21.815438 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 10 05:22:21.815448 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 10 05:22:21.815456 systemd[1]: Starting systemd-fsck-usr.service... Sep 10 05:22:21.815465 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 05:22:21.815473 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 05:22:21.815482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 05:22:21.815491 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 10 05:22:21.815502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 05:22:21.815510 systemd[1]: Finished systemd-fsck-usr.service. Sep 10 05:22:21.815519 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 05:22:21.815548 systemd-journald[219]: Collecting audit messages is disabled. Sep 10 05:22:21.815570 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:22:21.815579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 10 05:22:21.815588 systemd-journald[219]: Journal started Sep 10 05:22:21.815613 systemd-journald[219]: Runtime Journal (/run/log/journal/86dcebe651c9425c8673052611a3b0dd) is 6M, max 48.4M, 42.4M free. Sep 10 05:22:21.803923 systemd-modules-load[221]: Inserted module 'overlay' Sep 10 05:22:21.820193 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 05:22:21.830410 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 10 05:22:21.832548 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 05:22:21.834146 kernel: Bridge firewalling registered Sep 10 05:22:21.832786 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 10 05:22:21.836503 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 05:22:21.840486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 05:22:21.841173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 05:22:21.843294 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 05:22:21.852869 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 05:22:21.855844 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 10 05:22:21.863482 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 10 05:22:21.872510 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 10 05:22:21.874081 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 05:22:21.876391 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 05:22:21.880033 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 05:22:21.888280 dracut-cmdline[258]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cb34a525c000ff57e16870cd9f0af09c033a700c5f8ee35d58f46d8926fcf6e5 Sep 10 05:22:21.927927 systemd-resolved[269]: Positive Trust Anchors: Sep 10 05:22:21.927941 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 05:22:21.927970 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 05:22:21.930474 systemd-resolved[269]: Defaulting to hostname 'linux'. Sep 10 05:22:21.931486 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 05:22:21.937536 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 05:22:21.994419 kernel: SCSI subsystem initialized Sep 10 05:22:22.004412 kernel: Loading iSCSI transport class v2.0-870. Sep 10 05:22:22.014415 kernel: iscsi: registered transport (tcp) Sep 10 05:22:22.036415 kernel: iscsi: registered transport (qla4xxx) Sep 10 05:22:22.036448 kernel: QLogic iSCSI HBA Driver Sep 10 05:22:22.057107 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 05:22:22.073758 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 05:22:22.074826 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 05:22:22.131078 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 10 05:22:22.133453 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 10 05:22:22.196403 kernel: raid6: avx2x4 gen() 28333 MB/s Sep 10 05:22:22.213407 kernel: raid6: avx2x2 gen() 28873 MB/s Sep 10 05:22:22.230492 kernel: raid6: avx2x1 gen() 23978 MB/s Sep 10 05:22:22.230516 kernel: raid6: using algorithm avx2x2 gen() 28873 MB/s Sep 10 05:22:22.248484 kernel: raid6: .... xor() 18681 MB/s, rmw enabled Sep 10 05:22:22.248500 kernel: raid6: using avx2x2 recovery algorithm Sep 10 05:22:22.268406 kernel: xor: automatically using best checksumming function avx Sep 10 05:22:22.430419 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 10 05:22:22.439160 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 10 05:22:22.441829 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 05:22:22.469870 systemd-udevd[473]: Using default interface naming scheme 'v255'. 
Sep 10 05:22:22.475394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 05:22:22.476240 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 10 05:22:22.504398 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 10 05:22:22.533138 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 05:22:22.536617 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 05:22:22.612806 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 05:22:22.619548 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 10 05:22:22.664413 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 10 05:22:22.666407 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 10 05:22:22.668668 kernel: cryptd: max_cpu_qlen set to 1000 Sep 10 05:22:22.671113 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 10 05:22:22.678075 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 05:22:22.678103 kernel: GPT:9289727 != 19775487 Sep 10 05:22:22.678117 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 05:22:22.678131 kernel: GPT:9289727 != 19775487 Sep 10 05:22:22.678144 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 05:22:22.678158 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 05:22:22.684217 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 05:22:22.686989 kernel: libata version 3.00 loaded. Sep 10 05:22:22.684455 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:22:22.688857 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 05:22:22.692670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 05:22:22.696838 kernel: ahci 0000:00:1f.2: version 3.0 Sep 10 05:22:22.697017 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 10 05:22:22.698267 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 10 05:22:22.698437 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 10 05:22:22.700389 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 10 05:22:22.704612 kernel: AES CTR mode by8 optimization enabled Sep 10 05:22:22.710403 kernel: scsi host0: ahci Sep 10 05:22:22.715901 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Sep 10 05:22:22.728414 kernel: scsi host1: ahci Sep 10 05:22:22.732422 kernel: scsi host2: ahci Sep 10 05:22:22.732623 kernel: scsi host3: ahci Sep 10 05:22:22.732774 kernel: scsi host4: ahci Sep 10 05:22:22.733411 kernel: scsi host5: ahci Sep 10 05:22:22.735241 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 10 05:22:22.735264 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 10 05:22:22.735275 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 10 05:22:22.735286 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 10 05:22:22.735296 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 10 05:22:22.736448 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 10 05:22:22.739177 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 10 05:22:22.739699 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 10 05:22:22.749573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:22:22.766366 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 10 05:22:22.776786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 05:22:22.779794 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 10 05:22:22.803228 disk-uuid[635]: Primary Header is updated. Sep 10 05:22:22.803228 disk-uuid[635]: Secondary Entries is updated. Sep 10 05:22:22.803228 disk-uuid[635]: Secondary Header is updated. Sep 10 05:22:22.807406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 05:22:22.811401 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 05:22:23.046418 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 10 05:22:23.046494 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 10 05:22:23.047414 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 10 05:22:23.047428 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 10 05:22:23.054413 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 10 05:22:23.054466 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 10 05:22:23.055410 kernel: ata3.00: LPM support broken, forcing max_power Sep 10 05:22:23.055423 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 10 05:22:23.055725 kernel: ata3.00: applying bridge limits Sep 10 05:22:23.056862 kernel: ata3.00: LPM support broken, forcing max_power Sep 10 05:22:23.056882 kernel: ata3.00: configured for UDMA/100 Sep 10 05:22:23.059404 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 10 05:22:23.107408 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 10 05:22:23.107651 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 10 05:22:23.133431 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 10 05:22:23.418611 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 10 05:22:23.420282 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 05:22:23.421936 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 05:22:23.422146 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Sep 10 05:22:23.423453 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 10 05:22:23.458010 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 10 05:22:23.812404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 05:22:23.812757 disk-uuid[636]: The operation has completed successfully. Sep 10 05:22:23.843221 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 10 05:22:23.843344 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 10 05:22:23.874286 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 10 05:22:23.897790 sh[665]: Success Sep 10 05:22:23.915438 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 10 05:22:23.915470 kernel: device-mapper: uevent: version 1.0.3 Sep 10 05:22:23.916483 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 10 05:22:23.925429 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 10 05:22:23.952306 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 10 05:22:23.956187 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 10 05:22:23.971609 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 10 05:22:23.976439 kernel: BTRFS: device fsid d8201365-420d-4e6d-a9af-b12a81c8fc98 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (677) Sep 10 05:22:23.976465 kernel: BTRFS info (device dm-0): first mount of filesystem d8201365-420d-4e6d-a9af-b12a81c8fc98 Sep 10 05:22:23.977568 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 10 05:22:23.982408 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 10 05:22:23.982464 kernel: BTRFS info (device dm-0): enabling free space tree Sep 10 05:22:23.983212 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 10 05:22:23.984575 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 10 05:22:23.985982 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 10 05:22:23.986766 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 10 05:22:23.988438 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 10 05:22:24.015407 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709) Sep 10 05:22:24.017710 kernel: BTRFS info (device vda6): first mount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 05:22:24.017747 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 05:22:24.020626 kernel: BTRFS info (device vda6): turning on async discard Sep 10 05:22:24.020666 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 05:22:24.025408 kernel: BTRFS info (device vda6): last unmount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 05:22:24.026765 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 10 05:22:24.027828 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 10 05:22:24.108137 ignition[751]: Ignition 2.22.0 Sep 10 05:22:24.108150 ignition[751]: Stage: fetch-offline Sep 10 05:22:24.108180 ignition[751]: no configs at "/usr/lib/ignition/base.d" Sep 10 05:22:24.108189 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:22:24.108264 ignition[751]: parsed url from cmdline: "" Sep 10 05:22:24.108267 ignition[751]: no config URL provided Sep 10 05:22:24.108272 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 05:22:24.108282 ignition[751]: no config at "/usr/lib/ignition/user.ign" Sep 10 05:22:24.108302 ignition[751]: op(1): [started] loading QEMU firmware config module Sep 10 05:22:24.108307 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 10 05:22:24.117278 ignition[751]: op(1): [finished] loading QEMU firmware config module Sep 10 05:22:24.123115 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 05:22:24.124993 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 05:22:24.158844 ignition[751]: parsing config with SHA512: 4f14801d19283f26c9cefce40ad2843ed8f543cd7c2c78867805d0dc9f3b2c863c1129aace35545acdcd8ab8f16f76912796373701bbb25e6ad76e3a77619d3e Sep 10 05:22:24.165968 unknown[751]: fetched base config from "system" Sep 10 05:22:24.165980 unknown[751]: fetched user config from "qemu" Sep 10 05:22:24.166310 ignition[751]: fetch-offline: fetch-offline passed Sep 10 05:22:24.166362 ignition[751]: Ignition finished successfully Sep 10 05:22:24.167695 systemd-networkd[854]: lo: Link UP Sep 10 05:22:24.167700 systemd-networkd[854]: lo: Gained carrier Sep 10 05:22:24.169145 systemd-networkd[854]: Enumeration completed Sep 10 05:22:24.169489 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 05:22:24.169934 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 05:22:24.169938 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 05:22:24.172047 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 05:22:24.172071 systemd-networkd[854]: eth0: Link UP Sep 10 05:22:24.172196 systemd-networkd[854]: eth0: Gained carrier Sep 10 05:22:24.172205 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 05:22:24.173990 systemd[1]: Reached target network.target - Network. Sep 10 05:22:24.175623 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 10 05:22:24.176412 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 10 05:22:24.189488 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 05:22:24.212969 ignition[858]: Ignition 2.22.0 Sep 10 05:22:24.212982 ignition[858]: Stage: kargs Sep 10 05:22:24.213102 ignition[858]: no configs at "/usr/lib/ignition/base.d" Sep 10 05:22:24.213113 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:22:24.213794 ignition[858]: kargs: kargs passed Sep 10 05:22:24.213835 ignition[858]: Ignition finished successfully Sep 10 05:22:24.218289 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Sep 10 05:22:24.219327 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 10 05:22:24.254460 ignition[867]: Ignition 2.22.0 Sep 10 05:22:24.254470 ignition[867]: Stage: disks Sep 10 05:22:24.254801 ignition[867]: no configs at "/usr/lib/ignition/base.d" Sep 10 05:22:24.254811 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:22:24.255482 ignition[867]: disks: disks passed Sep 10 05:22:24.255518 ignition[867]: Ignition finished successfully Sep 10 05:22:24.259478 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 10 05:22:24.261547 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 10 05:22:24.261626 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 10 05:22:24.261959 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 05:22:24.262286 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 05:22:24.262792 systemd[1]: Reached target basic.target - Basic System. Sep 10 05:22:24.271020 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 10 05:22:24.306069 systemd-resolved[269]: Detected conflict on linux IN A 10.0.0.44 Sep 10 05:22:24.306082 systemd-resolved[269]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Sep 10 05:22:24.309387 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 10 05:22:24.317555 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 10 05:22:24.321845 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 10 05:22:24.435417 kernel: EXT4-fs (vda9): mounted filesystem 8812db3a-0650-4908-b2d8-56c2f0883ee2 r/w with ordered data mode. Quota mode: none. Sep 10 05:22:24.436332 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 10 05:22:24.437771 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 10 05:22:24.440408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 05:22:24.442135 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 10 05:22:24.442438 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 10 05:22:24.442475 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 05:22:24.442496 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 05:22:24.457184 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 10 05:22:24.459534 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 10 05:22:24.462433 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 10 05:22:24.464516 kernel: BTRFS info (device vda6): first mount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 05:22:24.464549 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 05:22:24.467413 kernel: BTRFS info (device vda6): turning on async discard Sep 10 05:22:24.467446 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 05:22:24.469286 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 10 05:22:24.504516 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 05:22:24.509803 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Sep 10 05:22:24.513936 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 05:22:24.518359 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 05:22:24.608265 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 10 05:22:24.609494 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 05:22:24.612535 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 10 05:22:24.633420 kernel: BTRFS info (device vda6): last unmount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 05:22:24.681323 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 10 05:22:24.700653 ignition[1000]: INFO : Ignition 2.22.0 Sep 10 05:22:24.700653 ignition[1000]: INFO : Stage: mount Sep 10 05:22:24.702578 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 05:22:24.702578 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:22:24.705480 ignition[1000]: INFO : mount: mount passed Sep 10 05:22:24.706275 ignition[1000]: INFO : Ignition finished successfully Sep 10 05:22:24.710073 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 05:22:24.712593 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 10 05:22:24.976183 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 05:22:24.978499 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 05:22:25.008115 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Sep 10 05:22:25.008187 kernel: BTRFS info (device vda6): first mount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 05:22:25.008204 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 05:22:25.011952 kernel: BTRFS info (device vda6): turning on async discard Sep 10 05:22:25.011986 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 05:22:25.013856 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 10 05:22:25.066473 ignition[1029]: INFO : Ignition 2.22.0 Sep 10 05:22:25.066473 ignition[1029]: INFO : Stage: files Sep 10 05:22:25.068244 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 05:22:25.068244 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:22:25.070941 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Sep 10 05:22:25.072886 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 05:22:25.072886 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 05:22:25.077589 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 05:22:25.079186 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 05:22:25.079186 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 05:22:25.078267 unknown[1029]: wrote ssh authorized keys file for user: core Sep 10 05:22:25.083128 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 10 05:22:25.083128 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 10 05:22:25.472220 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 05:22:26.231590 systemd-networkd[854]: eth0: Gained IPv6LL Sep 10 05:22:26.614670 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 10 05:22:26.617133 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 10 05:22:26.617133 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 10 05:22:26.943429 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 10 05:22:27.331421 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 10 05:22:27.331421 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 10 05:22:27.335416 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 05:22:27.335416 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 05:22:27.335416 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 05:22:27.335416 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 05:22:27.335416 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 05:22:27.335416 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 05:22:27.335416 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 05:22:27.348051 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 05:22:27.348051 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 05:22:27.348051 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 05:22:27.353929 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 05:22:27.353929 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 05:22:27.353929 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 10 05:22:27.781707 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 10 05:22:28.425397 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 05:22:28.425397 ignition[1029]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 10 05:22:28.741591 ignition[1029]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 05:22:29.290406 ignition[1029]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 05:22:29.292793 ignition[1029]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 10 05:22:29.292793 ignition[1029]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 10 05:22:29.292793 ignition[1029]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 05:22:29.292793 ignition[1029]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 05:22:29.292793 ignition[1029]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 10 05:22:29.292793 ignition[1029]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 10 05:22:29.320093 ignition[1029]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 05:22:29.325096 ignition[1029]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 05:22:29.326996 ignition[1029]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 10 05:22:29.326996 ignition[1029]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 10 05:22:29.326996 ignition[1029]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 05:22:29.326996 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" 
Sep 10 05:22:29.326996 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 05:22:29.326996 ignition[1029]: INFO : files: files passed Sep 10 05:22:29.326996 ignition[1029]: INFO : Ignition finished successfully Sep 10 05:22:29.340576 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 05:22:29.343700 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 10 05:22:29.347334 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 10 05:22:29.367736 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 05:22:29.367913 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 10 05:22:29.373142 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory Sep 10 05:22:29.377361 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 05:22:29.377361 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 05:22:29.380664 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 05:22:29.383797 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 05:22:29.384123 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 10 05:22:29.388134 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 05:22:29.469523 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 05:22:29.469728 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 05:22:29.471460 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 05:22:29.473264 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 05:22:29.473833 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 05:22:29.475041 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 05:22:29.505896 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 05:22:29.510607 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 05:22:29.538888 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 05:22:29.539119 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 05:22:29.542342 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 05:22:29.543480 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 05:22:29.543662 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 05:22:29.545468 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 05:22:29.546011 systemd[1]: Stopped target basic.target - Basic System. Sep 10 05:22:29.546320 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 05:22:29.546828 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 05:22:29.547131 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 05:22:29.547485 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Sep 10 05:22:29.547936 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 05:22:29.548264 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 05:22:29.548739 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 05:22:29.549056 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 05:22:29.549373 systemd[1]: Stopped target swap.target - Swaps. Sep 10 05:22:29.549841 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 05:22:29.549956 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 05:22:29.570614 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 05:22:29.570776 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 05:22:29.572732 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 05:22:29.572879 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 05:22:29.574866 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 05:22:29.575016 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 05:22:29.577158 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 05:22:29.577269 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 05:22:29.580044 systemd[1]: Stopped target paths.target - Path Units. Sep 10 05:22:29.582627 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 05:22:29.586465 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 05:22:29.589115 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 05:22:29.589254 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 05:22:29.590993 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 05:22:29.591098 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 05:22:29.592619 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 05:22:29.592698 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 05:22:29.594433 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 05:22:29.594579 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 05:22:29.596730 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 05:22:29.596864 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 05:22:29.602248 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 05:22:29.604071 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 05:22:29.607583 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 05:22:29.608758 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 05:22:29.611195 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 05:22:29.612353 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 05:22:29.619036 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 05:22:29.619154 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 05:22:29.639672 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 05:22:29.644192 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 10 05:22:29.644354 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 05:22:29.659914 ignition[1084]: INFO : Ignition 2.22.0 Sep 10 05:22:29.659914 ignition[1084]: INFO : Stage: umount Sep 10 05:22:29.661651 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 05:22:29.661651 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:22:29.661651 ignition[1084]: INFO : umount: umount passed Sep 10 05:22:29.661651 ignition[1084]: INFO : Ignition finished successfully Sep 10 05:22:29.665282 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 05:22:29.665425 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 05:22:29.667219 systemd[1]: Stopped target network.target - Network. Sep 10 05:22:29.668993 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 05:22:29.669042 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 05:22:29.671052 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 05:22:29.671097 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 05:22:29.671987 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 05:22:29.672041 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 05:22:29.672368 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 05:22:29.672420 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 05:22:29.673008 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 05:22:29.673051 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 05:22:29.673487 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 05:22:29.674013 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 05:22:29.690704 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 05:22:29.690834 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 05:22:29.695209 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 10 05:22:29.695526 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 05:22:29.695668 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 05:22:29.700641 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 10 05:22:29.701468 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 10 05:22:29.703216 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 05:22:29.703308 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 05:22:29.706686 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 05:22:29.707681 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 05:22:29.707742 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 05:22:29.708077 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 05:22:29.708155 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 05:22:29.713911 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 05:22:29.713974 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 05:22:29.716887 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 10 05:22:29.716951 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 05:22:29.720097 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 05:22:29.722271 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 05:22:29.722345 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 10 05:22:29.741233 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 05:22:29.746643 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 05:22:29.749685 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 05:22:29.749822 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 05:22:29.752574 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 05:22:29.752695 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 05:22:29.753662 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 05:22:29.753699 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 05:22:29.753949 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 05:22:29.754013 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 05:22:29.758431 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 05:22:29.758564 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 05:22:29.761105 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 05:22:29.761180 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 05:22:29.764890 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 05:22:29.764994 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 10 05:22:29.765043 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 05:22:29.769269 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 05:22:29.769330 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 05:22:29.772526 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 10 05:22:29.772591 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 05:22:29.776228 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 05:22:29.776284 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 05:22:29.777648 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 05:22:29.777692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:22:29.783431 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 10 05:22:29.783493 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 10 05:22:29.783545 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 10 05:22:29.783612 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 10 05:22:29.792827 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Sep 10 05:22:29.792957 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 05:22:29.794212 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 05:22:29.797853 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 05:22:29.818529 systemd[1]: Switching root. Sep 10 05:22:29.859306 systemd-journald[219]: Journal stopped Sep 10 05:22:31.329938 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Sep 10 05:22:31.330002 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 05:22:31.330018 kernel: SELinux: policy capability open_perms=1 Sep 10 05:22:31.330030 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 05:22:31.330041 kernel: SELinux: policy capability always_check_network=0 Sep 10 05:22:31.330052 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 05:22:31.330063 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 05:22:31.330074 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 05:22:31.330085 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 05:22:31.330098 kernel: SELinux: policy capability userspace_initial_context=0 Sep 10 05:22:31.330109 kernel: audit: type=1403 audit(1757481750.425:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 05:22:31.330129 systemd[1]: Successfully loaded SELinux policy in 59.729ms. Sep 10 05:22:31.330152 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.364ms. Sep 10 05:22:31.330165 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 05:22:31.330178 systemd[1]: Detected virtualization kvm. Sep 10 05:22:31.330190 systemd[1]: Detected architecture x86-64. Sep 10 05:22:31.330201 systemd[1]: Detected first boot. Sep 10 05:22:31.330213 systemd[1]: Initializing machine ID from VM UUID. Sep 10 05:22:31.330227 zram_generator::config[1130]: No configuration found. Sep 10 05:22:31.330245 kernel: Guest personality initialized and is inactive Sep 10 05:22:31.330260 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 10 05:22:31.330271 kernel: Initialized host personality Sep 10 05:22:31.330282 kernel: NET: Registered PF_VSOCK protocol family Sep 10 05:22:31.330295 systemd[1]: Populated /etc with preset unit settings. Sep 10 05:22:31.330307 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 10 05:22:31.330319 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 05:22:31.330331 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 05:22:31.330345 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 05:22:31.330357 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 05:22:31.330369 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 05:22:31.330396 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 05:22:31.330408 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 05:22:31.330422 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Sep 10 05:22:31.330434 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 05:22:31.330446 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 05:22:31.330461 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 05:22:31.330473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 05:22:31.330485 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 05:22:31.330498 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 05:22:31.330530 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 05:22:31.330561 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 05:22:31.330586 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 05:22:31.330598 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 10 05:22:31.330612 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 05:22:31.330624 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 05:22:31.330637 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 05:22:31.330649 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 05:22:31.330661 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 05:22:31.330673 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 05:22:31.330685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 05:22:31.330697 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 05:22:31.330710 systemd[1]: Reached target slices.target - Slice Units. Sep 10 05:22:31.330728 systemd[1]: Reached target swap.target - Swaps. Sep 10 05:22:31.330742 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 05:22:31.330754 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 05:22:31.330767 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 10 05:22:31.330779 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 05:22:31.330791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 05:22:31.330803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 05:22:31.330815 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 05:22:31.330827 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 05:22:31.330841 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 05:22:31.330853 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 05:22:31.330866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:22:31.330878 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 05:22:31.330890 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 05:22:31.330902 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 10 05:22:31.330914 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 05:22:31.330927 systemd[1]: Reached target machines.target - Containers. Sep 10 05:22:31.330941 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 05:22:31.330953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 05:22:31.330965 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 05:22:31.330977 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 05:22:31.330989 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 05:22:31.331002 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 05:22:31.331014 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 05:22:31.331026 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 05:22:31.331037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 05:22:31.331052 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 05:22:31.331064 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 05:22:31.331078 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 05:22:31.331097 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 05:22:31.331109 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 05:22:31.331121 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 05:22:31.331133 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 05:22:31.331145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 05:22:31.331159 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 05:22:31.331172 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 05:22:31.331184 kernel: loop: module loaded Sep 10 05:22:31.331195 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 10 05:22:31.331208 kernel: fuse: init (API version 7.41) Sep 10 05:22:31.331221 kernel: ACPI: bus type drm_connector registered Sep 10 05:22:31.331233 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 05:22:31.331245 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 05:22:31.331256 systemd[1]: Stopped verity-setup.service. Sep 10 05:22:31.331270 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:22:31.331282 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 05:22:31.331298 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 05:22:31.331310 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 10 05:22:31.331322 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 05:22:31.331333 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 05:22:31.331371 systemd-journald[1205]: Collecting audit messages is disabled. Sep 10 05:22:31.331414 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 05:22:31.331426 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 05:22:31.331442 systemd-journald[1205]: Journal started Sep 10 05:22:31.331465 systemd-journald[1205]: Runtime Journal (/run/log/journal/86dcebe651c9425c8673052611a3b0dd) is 6M, max 48.4M, 42.4M free. Sep 10 05:22:31.046444 systemd[1]: Queued start job for default target multi-user.target. Sep 10 05:22:31.072729 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 05:22:31.073180 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 05:22:31.334527 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 05:22:31.336130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 05:22:31.337814 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 05:22:31.338043 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 05:22:31.339712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 05:22:31.339927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 05:22:31.341522 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 05:22:31.341730 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 05:22:31.343179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 05:22:31.343485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 05:22:31.345133 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 05:22:31.345340 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 05:22:31.346965 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 05:22:31.347172 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 05:22:31.348858 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 05:22:31.350657 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 05:22:31.352709 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 05:22:31.354899 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 10 05:22:31.370723 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 05:22:31.373966 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 05:22:31.377542 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 05:22:31.379106 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 05:22:31.379157 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 05:22:31.381420 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 10 05:22:31.385458 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 10 05:22:31.386824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 05:22:31.390539 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 05:22:31.393709 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 05:22:31.395332 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 05:22:31.397679 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 05:22:31.399082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 05:22:31.400628 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 05:22:31.403561 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 05:22:31.411748 systemd-journald[1205]: Time spent on flushing to /var/log/journal/86dcebe651c9425c8673052611a3b0dd is 24.773ms for 1075 entries. Sep 10 05:22:31.411748 systemd-journald[1205]: System Journal (/var/log/journal/86dcebe651c9425c8673052611a3b0dd) is 8M, max 195.6M, 187.6M free. Sep 10 05:22:31.451858 systemd-journald[1205]: Received client request to flush runtime journal. Sep 10 05:22:31.472876 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 05:22:31.478634 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 05:22:31.480561 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 05:22:31.482452 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 05:22:31.484615 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 05:22:31.492844 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 05:22:31.495243 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 05:22:31.499177 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 10 05:22:31.500425 kernel: loop0: detected capacity change from 0 to 221472 Sep 10 05:22:31.518250 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 05:22:31.526457 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 05:22:31.541056 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 10 05:22:31.541315 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 10 05:22:31.544198 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 10 05:22:31.550531 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 05:22:31.554511 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 05:22:31.555627 kernel: loop1: detected capacity change from 0 to 128016 Sep 10 05:22:31.615051 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 10 05:22:31.624032 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 05:22:31.631425 kernel: loop2: detected capacity change from 0 to 110984 Sep 10 05:22:31.659907 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. 
Sep 10 05:22:31.659928 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Sep 10 05:22:31.663405 kernel: loop3: detected capacity change from 0 to 221472 Sep 10 05:22:31.665151 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 05:22:31.679404 kernel: loop4: detected capacity change from 0 to 128016 Sep 10 05:22:31.759416 kernel: loop5: detected capacity change from 0 to 110984 Sep 10 05:22:31.771889 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 05:22:31.772453 (sd-merge)[1273]: Merged extensions into '/usr'. Sep 10 05:22:31.776869 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 05:22:31.776888 systemd[1]: Reloading... Sep 10 05:22:31.855414 zram_generator::config[1297]: No configuration found. Sep 10 05:22:32.000451 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 05:22:32.111618 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 05:22:32.112154 systemd[1]: Reloading finished in 334 ms. Sep 10 05:22:32.257561 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 05:22:32.259189 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 05:22:32.274847 systemd[1]: Starting ensure-sysext.service... Sep 10 05:22:32.277246 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 05:22:32.287401 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)... Sep 10 05:22:32.287416 systemd[1]: Reloading... Sep 10 05:22:32.310244 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 10 05:22:32.310285 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 10 05:22:32.310702 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 05:22:32.310998 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 05:22:32.315522 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 05:22:32.315807 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Sep 10 05:22:32.315879 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Sep 10 05:22:32.326238 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 05:22:32.326252 systemd-tmpfiles[1338]: Skipping /boot Sep 10 05:22:32.340404 zram_generator::config[1361]: No configuration found. Sep 10 05:22:32.353313 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 05:22:32.353511 systemd-tmpfiles[1338]: Skipping /boot Sep 10 05:22:32.579603 systemd[1]: Reloading finished in 291 ms. Sep 10 05:22:32.595628 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 05:22:32.615873 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 05:22:32.625134 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 05:22:32.627914 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 10 05:22:32.642416 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 05:22:32.647419 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 05:22:32.650330 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 05:22:32.655048 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 05:22:32.662599 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:22:32.662871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 05:22:32.664223 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 05:22:32.666569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 05:22:32.683782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 05:22:32.685096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 05:22:32.685240 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 05:22:32.689716 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 05:22:32.690858 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:22:32.692850 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 05:22:32.696277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 05:22:32.696544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 05:22:32.698691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 05:22:32.701727 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 05:22:32.706810 systemd-udevd[1409]: Using default interface naming scheme 'v255'. Sep 10 05:22:32.709180 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 05:22:32.709682 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 05:22:32.724009 augenrules[1437]: No rules Sep 10 05:22:32.725648 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 05:22:32.726524 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 05:22:32.729091 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 05:22:32.737324 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:22:32.743629 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 05:22:32.745078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 05:22:32.747010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 05:22:32.749829 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 05:22:32.759330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 10 05:22:32.763402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 05:22:32.763817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 05:22:32.763982 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 05:22:32.767807 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 05:22:32.773267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:22:32.774531 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 05:22:32.776774 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 05:22:32.782578 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 05:22:32.789601 systemd[1]: Finished ensure-sysext.service. Sep 10 05:22:32.790829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 05:22:32.791159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 05:22:32.796944 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 05:22:32.797168 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 05:22:32.800699 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 05:22:32.800993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 05:22:32.808741 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 05:22:32.809444 augenrules[1444]: /sbin/augenrules: No change Sep 10 05:22:32.810450 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 05:22:32.810534 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 05:22:32.812855 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 05:22:32.815421 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 05:22:32.817728 augenrules[1505]: No rules Sep 10 05:22:32.827199 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 05:22:32.827480 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 05:22:32.829358 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 05:22:32.829793 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 05:22:32.831267 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 05:22:32.871133 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 10 05:22:32.934009 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 05:22:32.937122 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 10 05:22:32.949404 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 05:22:32.957950 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 05:22:32.983604 systemd-resolved[1408]: Positive Trust Anchors: Sep 10 05:22:32.983621 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 05:22:32.983650 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 05:22:32.988685 systemd-resolved[1408]: Defaulting to hostname 'linux'. Sep 10 05:22:32.992152 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 05:22:32.993709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 05:22:32.998875 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 10 05:22:33.021717 systemd-networkd[1496]: lo: Link UP Sep 10 05:22:33.021730 systemd-networkd[1496]: lo: Gained carrier Sep 10 05:22:33.024526 systemd-networkd[1496]: Enumeration completed Sep 10 05:22:33.024615 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 05:22:33.025266 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 05:22:33.025277 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 05:22:33.026106 systemd[1]: Reached target network.target - Network. Sep 10 05:22:33.026859 systemd-networkd[1496]: eth0: Link UP Sep 10 05:22:33.029016 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 05:22:33.032394 kernel: ACPI: button: Power Button [PWRF] Sep 10 05:22:33.029876 systemd-networkd[1496]: eth0: Gained carrier Sep 10 05:22:33.029898 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 05:22:33.034507 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 05:22:33.035769 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 05:22:33.037141 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 05:22:33.038291 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 05:22:33.039575 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 05:22:33.040851 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 10 05:22:33.043180 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 05:22:33.044452 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Sep 10 05:22:33.044492 systemd[1]: Reached target paths.target - Path Units. Sep 10 05:22:33.044542 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 05:22:33.046590 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 05:22:33.047813 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 05:22:33.049082 systemd[1]: Reached target timers.target - Timer Units. Sep 10 05:22:33.050423 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 05:22:33.050618 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 05:22:33.051325 systemd-timesyncd[1502]: Network configuration changed, trying to establish connection. Sep 10 05:22:34.005490 systemd-timesyncd[1502]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 05:22:34.005540 systemd-timesyncd[1502]: Initial clock synchronization to Wed 2025-09-10 05:22:34.005415 UTC. Sep 10 05:22:34.006453 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 05:22:34.009958 systemd-resolved[1408]: Clock change detected. Flushing caches. Sep 10 05:22:34.010033 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 05:22:34.011489 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 05:22:34.012786 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 05:22:34.019878 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 10 05:22:34.020109 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 05:22:34.020263 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 05:22:34.021009 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 05:22:34.022525 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 05:22:34.024779 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 05:22:34.026267 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 05:22:34.029334 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 05:22:34.030332 systemd[1]: Reached target basic.target - Basic System. Sep 10 05:22:34.032691 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 05:22:34.032732 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 05:22:34.033678 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 05:22:34.035900 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 05:22:34.039800 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 05:22:34.043713 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 05:22:34.046721 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 05:22:34.047898 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 05:22:34.054436 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 10 05:22:34.058806 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 10 05:22:34.061359 jq[1550]: false Sep 10 05:22:34.064801 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 05:22:34.065848 extend-filesystems[1551]: Found /dev/vda6 Sep 10 05:22:34.073003 extend-filesystems[1551]: Found /dev/vda9 Sep 10 05:22:34.081082 extend-filesystems[1551]: Checking size of /dev/vda9 Sep 10 05:22:34.089375 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 05:22:34.092378 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 05:22:34.094114 extend-filesystems[1551]: Resized partition /dev/vda9 Sep 10 05:22:34.096381 extend-filesystems[1568]: resize2fs 1.47.3 (8-Jul-2025) Sep 10 05:22:34.098280 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing passwd entry cache Sep 10 05:22:34.099108 oslogin_cache_refresh[1552]: Refreshing passwd entry cache Sep 10 05:22:34.102872 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 05:22:34.103643 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 05:22:34.106643 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 05:22:34.107149 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 05:22:34.109787 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 05:22:34.110383 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting users, quitting Sep 10 05:22:34.110383 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 10 05:22:34.110383 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing group entry cache Sep 10 05:22:34.110004 oslogin_cache_refresh[1552]: Failure getting users, quitting Sep 10 05:22:34.110021 oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 10 05:22:34.110070 oslogin_cache_refresh[1552]: Refreshing group entry cache Sep 10 05:22:34.117724 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 05:22:34.126067 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting groups, quitting Sep 10 05:22:34.126067 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 10 05:22:34.125702 oslogin_cache_refresh[1552]: Failure getting groups, quitting Sep 10 05:22:34.125714 oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 10 05:22:34.128662 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 05:22:34.129054 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 05:22:34.129641 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 05:22:34.130634 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 10 05:22:34.130896 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 10 05:22:34.134708 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 05:22:34.136236 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 10 05:22:34.153596 jq[1571]: true Sep 10 05:22:34.156977 update_engine[1570]: I20250910 05:22:34.156908 1570 main.cc:92] Flatcar Update Engine starting Sep 10 05:22:34.161986 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 05:22:34.162597 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 05:22:35.154997 kernel: kvm_amd: TSC scaling supported Sep 10 05:22:35.155075 kernel: kvm_amd: Nested Virtualization enabled Sep 10 05:22:35.155096 kernel: kvm_amd: Nested Paging enabled Sep 10 05:22:35.155110 kernel: kvm_amd: LBR virtualization supported Sep 10 05:22:35.155123 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 10 05:22:35.155136 kernel: kvm_amd: Virtual GIF supported Sep 10 05:22:35.155149 kernel: EDAC MC: Ver: 3.0.0 Sep 10 05:22:35.155190 update_engine[1570]: I20250910 05:22:34.355010 1570 update_check_scheduler.cc:74] Next update check in 5m11s Sep 10 05:22:34.344529 dbus-daemon[1548]: [system] SELinux support is enabled Sep 10 05:22:35.155530 tar[1580]: linux-amd64/helm Sep 10 05:22:34.163634 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 05:22:34.182505 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 05:22:35.159768 jq[1590]: true Sep 10 05:22:35.159966 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 05:22:35.159966 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 05:22:35.159966 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 05:22:34.222859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 05:22:35.169090 extend-filesystems[1551]: Resized filesystem in /dev/vda9 Sep 10 05:22:34.302552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 05:22:34.303451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:22:34.334105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 05:22:34.344727 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 05:22:34.350113 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 05:22:34.350137 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 05:22:34.351478 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 05:22:34.351497 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 05:22:34.357360 systemd[1]: Started update-engine.service - Update Engine. Sep 10 05:22:34.361587 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 05:22:34.451184 locksmithd[1614]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 05:22:35.160678 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 05:22:35.161012 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 10 05:22:35.174665 containerd[1592]: time="2025-09-10T05:22:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 05:22:35.175814 containerd[1592]: time="2025-09-10T05:22:35.175759177Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 10 05:22:35.200600 containerd[1592]: time="2025-09-10T05:22:35.200475489Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.551µs" Sep 10 05:22:35.200600 containerd[1592]: time="2025-09-10T05:22:35.200522356Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 05:22:35.200600 containerd[1592]: time="2025-09-10T05:22:35.200541853Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 05:22:35.200864 containerd[1592]: time="2025-09-10T05:22:35.200791291Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 05:22:35.200864 containerd[1592]: time="2025-09-10T05:22:35.200812831Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 05:22:35.200864 containerd[1592]: time="2025-09-10T05:22:35.200842196Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 05:22:35.200931 containerd[1592]: time="2025-09-10T05:22:35.200916155Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 05:22:35.200931 containerd[1592]: time="2025-09-10T05:22:35.200927677Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 05:22:35.201261 containerd[1592]: time="2025-09-10T05:22:35.201236696Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 05:22:35.201261 containerd[1592]: time="2025-09-10T05:22:35.201255762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 05:22:35.201320 containerd[1592]: time="2025-09-10T05:22:35.201266612Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 05:22:35.201320 containerd[1592]: time="2025-09-10T05:22:35.201275619Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 05:22:35.201453 containerd[1592]: time="2025-09-10T05:22:35.201374855Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 05:22:35.201711 containerd[1592]: time="2025-09-10T05:22:35.201670660Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 05:22:35.201711 containerd[1592]: time="2025-09-10T05:22:35.201708481Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 05:22:35.201771 containerd[1592]: time="2025-09-10T05:22:35.201718870Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 05:22:35.201771 containerd[1592]: time="2025-09-10T05:22:35.201762552Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 05:22:35.203703 containerd[1592]: time="2025-09-10T05:22:35.203599507Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 05:22:35.203741 containerd[1592]: time="2025-09-10T05:22:35.203703612Z" level=info msg="metadata content store policy set" policy=shared Sep 10 05:22:35.224755 systemd-logind[1569]: Watching system buttons on /dev/input/event2 (Power Button) Sep 10 05:22:35.224784 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 10 05:22:35.225331 systemd-logind[1569]: New seat seat0. Sep 10 05:22:35.253676 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 05:22:35.267251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:22:35.462857 tar[1580]: linux-amd64/LICENSE Sep 10 05:22:35.462962 tar[1580]: linux-amd64/README.md Sep 10 05:22:35.484518 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 05:22:35.497539 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 05:22:35.521262 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 05:22:35.524039 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 05:22:35.549075 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 05:22:35.549319 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 05:22:35.551933 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 05:22:35.575454 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 05:22:35.578189 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 05:22:35.580245 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 10 05:22:35.581560 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 10 05:22:35.640603 containerd[1592]: time="2025-09-10T05:22:35.640480325Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 05:22:35.640742 containerd[1592]: time="2025-09-10T05:22:35.640662276Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 05:22:35.640742 containerd[1592]: time="2025-09-10T05:22:35.640683796Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 05:22:35.640742 containerd[1592]: time="2025-09-10T05:22:35.640703784Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 05:22:35.640742 containerd[1592]: time="2025-09-10T05:22:35.640720555Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 05:22:35.640742 containerd[1592]: time="2025-09-10T05:22:35.640736054Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 05:22:35.640875 containerd[1592]: time="2025-09-10T05:22:35.640749309Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 05:22:35.640875 containerd[1592]: time="2025-09-10T05:22:35.640762965Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 05:22:35.640875 containerd[1592]: time="2025-09-10T05:22:35.640781069Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 05:22:35.640875 containerd[1592]: time="2025-09-10T05:22:35.640792620Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 05:22:35.640875 containerd[1592]: time="2025-09-10T05:22:35.640803080Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 05:22:35.640875 containerd[1592]: time="2025-09-10T05:22:35.640820072Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 05:22:35.641039 containerd[1592]: time="2025-09-10T05:22:35.641006201Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 05:22:35.641063 containerd[1592]: time="2025-09-10T05:22:35.641043090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 05:22:35.641063 containerd[1592]: time="2025-09-10T05:22:35.641061044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 05:22:35.641099 containerd[1592]: time="2025-09-10T05:22:35.641074619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 05:22:35.641099 containerd[1592]: time="2025-09-10T05:22:35.641086231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 05:22:35.641135 containerd[1592]: time="2025-09-10T05:22:35.641099736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 05:22:35.641171 containerd[1592]: time="2025-09-10T05:22:35.641133419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 05:22:35.641171 containerd[1592]: time="2025-09-10T05:22:35.641153888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 
10 05:22:35.641171 containerd[1592]: time="2025-09-10T05:22:35.641167884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 05:22:35.641226 containerd[1592]: time="2025-09-10T05:22:35.641178744Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 05:22:35.641226 containerd[1592]: time="2025-09-10T05:22:35.641195837Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 05:22:35.641350 containerd[1592]: time="2025-09-10T05:22:35.641317675Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 05:22:35.641350 containerd[1592]: time="2025-09-10T05:22:35.641340197Z" level=info msg="Start snapshots syncer" Sep 10 05:22:35.641391 containerd[1592]: time="2025-09-10T05:22:35.641374532Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 05:22:35.641793 containerd[1592]: time="2025-09-10T05:22:35.641749455Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 05:22:35.641922 containerd[1592]: time="2025-09-10T05:22:35.641822963Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 05:22:35.641969 containerd[1592]: time="2025-09-10T05:22:35.641942357Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 05:22:35.642133 containerd[1592]: time="2025-09-10T05:22:35.642114069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 05:22:35.642161 containerd[1592]: time="2025-09-10T05:22:35.642146059Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 05:22:35.642161 containerd[1592]: time="2025-09-10T05:22:35.642158081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 05:22:35.642161 containerd[1592]: time="2025-09-10T05:22:35.642169342Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 05:22:35.642226 containerd[1592]: time="2025-09-10T05:22:35.642180864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 05:22:35.642226 containerd[1592]: time="2025-09-10T05:22:35.642194429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 05:22:35.642226 containerd[1592]: time="2025-09-10T05:22:35.642211131Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 05:22:35.642289 containerd[1592]: time="2025-09-10T05:22:35.642248871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 05:22:35.642289 containerd[1592]: time="2025-09-10T05:22:35.642263709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 05:22:35.642289 containerd[1592]: time="2025-09-10T05:22:35.642274079Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 05:22:35.642343 containerd[1592]: time="2025-09-10T05:22:35.642306449Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 05:22:35.642343 containerd[1592]: time="2025-09-10T05:22:35.642325685Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 05:22:35.642343 containerd[1592]: time="2025-09-10T05:22:35.642339151Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 05:22:35.642404 containerd[1592]: time="2025-09-10T05:22:35.642348739Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 05:22:35.642404 containerd[1592]: time="2025-09-10T05:22:35.642356744Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 05:22:35.642404 containerd[1592]: time="2025-09-10T05:22:35.642368816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 05:22:35.642404 containerd[1592]: time="2025-09-10T05:22:35.642379536Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 05:22:35.642481 containerd[1592]: time="2025-09-10T05:22:35.642410775Z" level=info msg="runtime interface created" Sep 10 05:22:35.642481 containerd[1592]: time="2025-09-10T05:22:35.642416416Z" level=info msg="created NRI interface" Sep 10 05:22:35.642481 containerd[1592]: time="2025-09-10T05:22:35.642424441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 05:22:35.642481 containerd[1592]: time="2025-09-10T05:22:35.642456551Z" level=info msg="Connect containerd service" Sep 10 05:22:35.642556 containerd[1592]: time="2025-09-10T05:22:35.642494121Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 
10 05:22:35.643503 containerd[1592]: time="2025-09-10T05:22:35.643476023Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 05:22:35.672257 bash[1608]: Updated "/home/core/.ssh/authorized_keys" Sep 10 05:22:35.674749 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 05:22:35.677179 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 05:22:35.875680 containerd[1592]: time="2025-09-10T05:22:35.875519948Z" level=info msg="Start subscribing containerd event" Sep 10 05:22:35.875680 containerd[1592]: time="2025-09-10T05:22:35.875645313Z" level=info msg="Start recovering state" Sep 10 05:22:35.875830 containerd[1592]: time="2025-09-10T05:22:35.875754247Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 05:22:35.875830 containerd[1592]: time="2025-09-10T05:22:35.875828266Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 05:22:35.875873 containerd[1592]: time="2025-09-10T05:22:35.875837163Z" level=info msg="Start event monitor" Sep 10 05:22:35.875873 containerd[1592]: time="2025-09-10T05:22:35.875861248Z" level=info msg="Start cni network conf syncer for default" Sep 10 05:22:35.875910 containerd[1592]: time="2025-09-10T05:22:35.875881446Z" level=info msg="Start streaming server" Sep 10 05:22:35.875910 containerd[1592]: time="2025-09-10T05:22:35.875900061Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 05:22:35.875910 containerd[1592]: time="2025-09-10T05:22:35.875907495Z" level=info msg="runtime interface starting up..." Sep 10 05:22:35.875974 containerd[1592]: time="2025-09-10T05:22:35.875916462Z" level=info msg="starting plugins..." Sep 10 05:22:35.875974 containerd[1592]: time="2025-09-10T05:22:35.875943262Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 05:22:35.876164 containerd[1592]: time="2025-09-10T05:22:35.876145060Z" level=info msg="containerd successfully booted in 0.702070s" Sep 10 05:22:35.876285 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 05:22:35.952833 systemd-networkd[1496]: eth0: Gained IPv6LL Sep 10 05:22:35.956385 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 05:22:35.958175 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 05:22:35.961113 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 05:22:35.963758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:22:35.965962 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 05:22:35.988176 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 05:22:36.000772 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:57636.service - OpenSSH per-connection server daemon (10.0.0.1:57636). Sep 10 05:22:36.003385 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 05:22:36.052516 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 05:22:36.052933 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 05:22:36.055138 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 10 05:22:36.102967 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 57636 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:22:36.105154 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:22:36.112215 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 05:22:36.114742 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 05:22:36.122869 systemd-logind[1569]: New session 1 of user core. Sep 10 05:22:36.143985 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 05:22:36.148242 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 05:22:36.169344 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 05:22:36.171753 systemd-logind[1569]: New session c1 of user core. Sep 10 05:22:36.341882 systemd[1695]: Queued start job for default target default.target. Sep 10 05:22:36.353851 systemd[1695]: Created slice app.slice - User Application Slice. Sep 10 05:22:36.353876 systemd[1695]: Reached target paths.target - Paths. Sep 10 05:22:36.353915 systemd[1695]: Reached target timers.target - Timers. Sep 10 05:22:36.355430 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 05:22:36.369632 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 05:22:36.369758 systemd[1695]: Reached target sockets.target - Sockets. Sep 10 05:22:36.369796 systemd[1695]: Reached target basic.target - Basic System. Sep 10 05:22:36.369838 systemd[1695]: Reached target default.target - Main User Target. Sep 10 05:22:36.369873 systemd[1695]: Startup finished in 188ms. Sep 10 05:22:36.370351 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 05:22:36.396731 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 05:22:36.544004 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:57652.service - OpenSSH per-connection server daemon (10.0.0.1:57652). Sep 10 05:22:36.605154 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 57652 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:22:36.606957 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:22:36.611454 systemd-logind[1569]: New session 2 of user core. Sep 10 05:22:36.626818 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 05:22:36.682467 sshd[1709]: Connection closed by 10.0.0.1 port 57652 Sep 10 05:22:36.682913 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Sep 10 05:22:36.867410 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:57652.service: Deactivated successfully. Sep 10 05:22:36.869494 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 05:22:36.870318 systemd-logind[1569]: Session 2 logged out. Waiting for processes to exit. Sep 10 05:22:36.873367 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:57666.service - OpenSSH per-connection server daemon (10.0.0.1:57666). Sep 10 05:22:36.875652 systemd-logind[1569]: Removed session 2. Sep 10 05:22:36.938554 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 57666 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:22:36.939798 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:22:36.944604 systemd-logind[1569]: New session 3 of user core. 
Sep 10 05:22:36.959755 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 05:22:37.018661 sshd[1718]: Connection closed by 10.0.0.1 port 57666 Sep 10 05:22:37.019038 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Sep 10 05:22:37.025012 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:57666.service: Deactivated successfully. Sep 10 05:22:37.026954 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 05:22:37.027767 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Sep 10 05:22:37.029470 systemd-logind[1569]: Removed session 3. Sep 10 05:22:37.434880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:22:37.449011 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 05:22:37.449408 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 05:22:37.450449 systemd[1]: Startup finished in 3.019s (kernel) + 8.793s (initrd) + 6.130s (userspace) = 17.943s. Sep 10 05:22:38.254962 kubelet[1728]: E0910 05:22:38.254887 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 05:22:38.259002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 05:22:38.259191 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 05:22:38.259622 systemd[1]: kubelet.service: Consumed 2.074s CPU time, 267.5M memory peak. Sep 10 05:22:47.042141 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:41376.service - OpenSSH per-connection server daemon (10.0.0.1:41376). Sep 10 05:22:47.100213 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 41376 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:22:47.101834 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:22:47.106654 systemd-logind[1569]: New session 4 of user core. Sep 10 05:22:47.113718 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 05:22:47.166143 sshd[1745]: Connection closed by 10.0.0.1 port 41376 Sep 10 05:22:47.166527 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Sep 10 05:22:47.175215 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:41376.service: Deactivated successfully. Sep 10 05:22:47.176909 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 05:22:47.177647 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Sep 10 05:22:47.180053 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:41388.service - OpenSSH per-connection server daemon (10.0.0.1:41388). Sep 10 05:22:47.180612 systemd-logind[1569]: Removed session 4. Sep 10 05:22:47.235922 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 41388 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:22:47.237455 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:22:47.241716 systemd-logind[1569]: New session 5 of user core. Sep 10 05:22:47.251715 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 10 05:22:47.300999 sshd[1754]: Connection closed by 10.0.0.1 port 41388 Sep 10 05:22:47.301627 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Sep 10 05:22:47.317646 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:41388.service: Deactivated successfully. Sep 10 05:22:47.319625 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 05:22:47.320366 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Sep 10 05:22:47.323179 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:41404.service - OpenSSH per-connection server daemon (10.0.0.1:41404). Sep 10 05:22:47.323951 systemd-logind[1569]: Removed session 5. Sep 10 05:22:47.381735 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 41404 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:22:47.383456 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:22:47.387883 systemd-logind[1569]: New session 6 of user core. Sep 10 05:22:47.397696 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 05:22:47.451620 sshd[1763]: Connection closed by 10.0.0.1 port 41404 Sep 10 05:22:47.451995 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Sep 10 05:22:47.461182 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:41404.service: Deactivated successfully. Sep 10 05:22:47.462959 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 05:22:47.463681 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Sep 10 05:22:47.466173 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:41408.service - OpenSSH per-connection server daemon (10.0.0.1:41408). Sep 10 05:22:47.466755 systemd-logind[1569]: Removed session 6. Sep 10 05:22:47.527930 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 41408 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:22:47.529233 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:22:47.533433 systemd-logind[1569]: New session 7 of user core. Sep 10 05:22:47.541718 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 05:22:47.598158 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 05:22:47.598461 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 05:22:47.614843 sudo[1773]: pam_unix(sudo:session): session closed for user root Sep 10 05:22:47.616341 sshd[1772]: Connection closed by 10.0.0.1 port 41408 Sep 10 05:22:47.616697 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Sep 10 05:22:47.630038 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:41408.service: Deactivated successfully. Sep 10 05:22:47.631756 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 05:22:47.632432 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Sep 10 05:22:47.634945 systemd[1]: Started sshd@7-10.0.0.44:22-10.0.0.1:41424.service - OpenSSH per-connection server daemon (10.0.0.1:41424). Sep 10 05:22:47.635710 systemd-logind[1569]: Removed session 7. Sep 10 05:22:47.680211 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 41424 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:22:47.681925 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:22:47.686189 systemd-logind[1569]: New session 8 of user core. 
Sep 10 05:22:47.700703 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 05:22:47.754549 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 05:22:47.754862 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 05:22:47.764251 sudo[1784]: pam_unix(sudo:session): session closed for user root Sep 10 05:22:47.770052 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 10 05:22:47.770351 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 05:22:47.779407 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 05:22:47.826562 augenrules[1806]: No rules Sep 10 05:22:47.828090 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 05:22:47.828375 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 05:22:47.829466 sudo[1783]: pam_unix(sudo:session): session closed for user root Sep 10 05:22:47.830929 sshd[1782]: Connection closed by 10.0.0.1 port 41424 Sep 10 05:22:47.831285 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Sep 10 05:22:47.842902 systemd[1]: sshd@7-10.0.0.44:22-10.0.0.1:41424.service: Deactivated successfully. Sep 10 05:22:47.844478 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 05:22:47.845148 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. Sep 10 05:22:47.847435 systemd[1]: Started sshd@8-10.0.0.44:22-10.0.0.1:41438.service - OpenSSH per-connection server daemon (10.0.0.1:41438). Sep 10 05:22:47.848092 systemd-logind[1569]: Removed session 8. Sep 10 05:22:47.901530 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 41438 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:22:47.903158 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:22:47.907414 systemd-logind[1569]: New session 9 of user core. Sep 10 05:22:47.913696 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 05:22:47.966144 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 05:22:47.966473 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 05:22:48.474531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 05:22:48.476228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:22:48.652749 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 05:22:48.674880 (dockerd)[1843]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 05:22:48.768837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 05:22:48.784966 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 05:22:48.883396 kubelet[1849]: E0910 05:22:48.883321 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 05:22:48.890063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 05:22:48.890256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 05:22:48.890634 systemd[1]: kubelet.service: Consumed 348ms CPU time, 110.8M memory peak. Sep 10 05:22:49.123328 dockerd[1843]: time="2025-09-10T05:22:49.123165622Z" level=info msg="Starting up" Sep 10 05:22:49.124201 dockerd[1843]: time="2025-09-10T05:22:49.124179203Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 10 05:22:49.144140 dockerd[1843]: time="2025-09-10T05:22:49.144103498Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 10 05:22:50.318592 dockerd[1843]: time="2025-09-10T05:22:50.318536439Z" level=info msg="Loading containers: start." Sep 10 05:22:50.354602 kernel: Initializing XFRM netlink socket Sep 10 05:22:50.620640 systemd-networkd[1496]: docker0: Link UP Sep 10 05:22:50.701513 dockerd[1843]: time="2025-09-10T05:22:50.701462783Z" level=info msg="Loading containers: done." Sep 10 05:22:50.718870 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1614843314-merged.mount: Deactivated successfully. Sep 10 05:22:50.831219 dockerd[1843]: time="2025-09-10T05:22:50.831159420Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 05:22:50.831392 dockerd[1843]: time="2025-09-10T05:22:50.831288532Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 10 05:22:50.831419 dockerd[1843]: time="2025-09-10T05:22:50.831410902Z" level=info msg="Initializing buildkit" Sep 10 05:22:51.291635 dockerd[1843]: time="2025-09-10T05:22:51.291568251Z" level=info msg="Completed buildkit initialization" Sep 10 05:22:51.299804 dockerd[1843]: time="2025-09-10T05:22:51.299733805Z" level=info msg="Daemon has completed initialization" Sep 10 05:22:51.299940 dockerd[1843]: time="2025-09-10T05:22:51.299832850Z" level=info msg="API listen on /run/docker.sock" Sep 10 05:22:51.300034 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 05:22:52.297810 containerd[1592]: time="2025-09-10T05:22:52.297750662Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 10 05:22:52.964197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3277816791.mount: Deactivated successfully. 
Sep 10 05:22:54.235502 containerd[1592]: time="2025-09-10T05:22:54.235440102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:54.236173 containerd[1592]: time="2025-09-10T05:22:54.236123513Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 10 05:22:54.237189 containerd[1592]: time="2025-09-10T05:22:54.237156471Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:54.239570 containerd[1592]: time="2025-09-10T05:22:54.239520855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:54.240600 containerd[1592]: time="2025-09-10T05:22:54.240551718Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.942756643s" Sep 10 05:22:54.240600 containerd[1592]: time="2025-09-10T05:22:54.240605138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 10 05:22:54.241674 containerd[1592]: time="2025-09-10T05:22:54.241480620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 10 05:22:55.702804 containerd[1592]: time="2025-09-10T05:22:55.702735875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:55.703374 containerd[1592]: time="2025-09-10T05:22:55.703330059Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 10 05:22:55.704598 containerd[1592]: time="2025-09-10T05:22:55.704538185Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:55.707139 containerd[1592]: time="2025-09-10T05:22:55.707077267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:55.707916 containerd[1592]: time="2025-09-10T05:22:55.707872178Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.466365128s" Sep 10 05:22:55.707916 containerd[1592]: time="2025-09-10T05:22:55.707905089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 10 
05:22:55.708407 containerd[1592]: time="2025-09-10T05:22:55.708384909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 10 05:22:57.517647 containerd[1592]: time="2025-09-10T05:22:57.517564166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:57.518504 containerd[1592]: time="2025-09-10T05:22:57.518468612Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 10 05:22:57.519788 containerd[1592]: time="2025-09-10T05:22:57.519734847Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:57.522377 containerd[1592]: time="2025-09-10T05:22:57.522349811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:57.523325 containerd[1592]: time="2025-09-10T05:22:57.523270527Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.814858116s" Sep 10 05:22:57.523366 containerd[1592]: time="2025-09-10T05:22:57.523331442Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 10 05:22:57.524279 containerd[1592]: time="2025-09-10T05:22:57.524254082Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 10 05:22:58.563812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1670642161.mount: Deactivated successfully. Sep 10 05:22:58.974644 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 05:22:58.976212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:22:59.374689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:22:59.379395 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 05:22:59.421303 kubelet[2158]: E0910 05:22:59.421219 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 05:22:59.425981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 05:22:59.426171 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 05:22:59.426550 systemd[1]: kubelet.service: Consumed 206ms CPU time, 110.3M memory peak. 
Sep 10 05:22:59.580123 containerd[1592]: time="2025-09-10T05:22:59.580050344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:59.580959 containerd[1592]: time="2025-09-10T05:22:59.580911188Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 10 05:22:59.582446 containerd[1592]: time="2025-09-10T05:22:59.582404148Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:59.584596 containerd[1592]: time="2025-09-10T05:22:59.584516590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:22:59.585116 containerd[1592]: time="2025-09-10T05:22:59.585073764Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.060789195s" Sep 10 05:22:59.585116 containerd[1592]: time="2025-09-10T05:22:59.585108129Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 10 05:22:59.585644 containerd[1592]: time="2025-09-10T05:22:59.585615631Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 05:23:00.184008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894795823.mount: Deactivated successfully. 
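The PullImage entries above are emitted by containerd's CRI plugin; an equivalent pull can be reproduced with containerd's Go client in the "k8s.io" namespace. A sketch, assuming the containerd 1.x import path (containerd 2.x, which this host runs, moved the client to github.com/containerd/containerd/v2/client):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin keeps Kubernetes images under the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.31.13", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        // Name and target digest correspond to the "repo tag" and "repo digest" fields logged above.
        fmt.Println(img.Name(), img.Target().Digest)
    }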
Sep 10 05:23:00.841455 containerd[1592]: time="2025-09-10T05:23:00.841397039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:23:00.842216 containerd[1592]: time="2025-09-10T05:23:00.842117420Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 10 05:23:00.843271 containerd[1592]: time="2025-09-10T05:23:00.843221400Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:23:00.845806 containerd[1592]: time="2025-09-10T05:23:00.845769629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:23:00.846745 containerd[1592]: time="2025-09-10T05:23:00.846707388Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.261058284s" Sep 10 05:23:00.846745 containerd[1592]: time="2025-09-10T05:23:00.846741582Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 05:23:00.847314 containerd[1592]: time="2025-09-10T05:23:00.847282236Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 05:23:01.343299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1140951109.mount: Deactivated successfully. 
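For scale, the "bytes read" and pull-duration fields above give a rough transfer rate; the coredns pull moved about 18.5 MB in roughly 1.26 s. A trivial sketch of that arithmetic:

    package main

    import "fmt"

    func main() {
        // Figures taken from the coredns pull entries above.
        const bytesRead = 18565241.0 // "bytes read=18565241"
        const seconds = 1.261058284  // "in 1.261058284s"
        fmt.Printf("~%.1f MiB/s\n", bytesRead/seconds/(1024*1024))
    }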
Sep 10 05:23:01.348857 containerd[1592]: time="2025-09-10T05:23:01.348814879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 05:23:01.349532 containerd[1592]: time="2025-09-10T05:23:01.349481459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 10 05:23:01.350688 containerd[1592]: time="2025-09-10T05:23:01.350650822Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 05:23:01.352585 containerd[1592]: time="2025-09-10T05:23:01.352550905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 05:23:01.353196 containerd[1592]: time="2025-09-10T05:23:01.353152984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 505.842886ms" Sep 10 05:23:01.353196 containerd[1592]: time="2025-09-10T05:23:01.353189843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 05:23:01.353708 containerd[1592]: time="2025-09-10T05:23:01.353685443Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 10 05:23:01.896386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524500703.mount: Deactivated successfully. 
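Unlike the other images, pause:3.10 is created with the label io.cri-containerd.pinned=pinned (visible in the ImageCreate events above), which marks the sandbox image as pinned so image garbage collection leaves it alone. A sketch that lists pinned images through containerd's image store, with the same client and import-path caveats as the earlier pull sketch:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        imgs, err := client.ImageService().List(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, im := range imgs {
            if im.Labels["io.cri-containerd.pinned"] == "pinned" {
                fmt.Println("pinned:", im.Name)
            }
        }
    }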
Sep 10 05:23:04.539480 containerd[1592]: time="2025-09-10T05:23:04.539389204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:23:04.540468 containerd[1592]: time="2025-09-10T05:23:04.540044113Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 10 05:23:04.541257 containerd[1592]: time="2025-09-10T05:23:04.541202976Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:23:04.543882 containerd[1592]: time="2025-09-10T05:23:04.543843798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:23:04.544808 containerd[1592]: time="2025-09-10T05:23:04.544751881Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.191037053s" Sep 10 05:23:04.544808 containerd[1592]: time="2025-09-10T05:23:04.544804570Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 10 05:23:06.660229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:23:06.660428 systemd[1]: kubelet.service: Consumed 206ms CPU time, 110.3M memory peak. Sep 10 05:23:06.662622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:23:06.685682 systemd[1]: Reload requested from client PID 2308 ('systemctl') (unit session-9.scope)... Sep 10 05:23:06.685699 systemd[1]: Reloading... Sep 10 05:23:06.764604 zram_generator::config[2348]: No configuration found. Sep 10 05:23:07.196637 systemd[1]: Reloading finished in 510 ms. Sep 10 05:23:07.268305 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 05:23:07.268403 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 05:23:07.268733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:23:07.268775 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.3M memory peak. Sep 10 05:23:07.270276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:23:07.439243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:23:07.444141 (kubelet)[2399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 05:23:07.488305 kubelet[2399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 05:23:07.488305 kubelet[2399]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
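The deprecation warnings here (and in the next entry) say these flags should move into the kubelet config file. A rough sketch of the corresponding KubeletConfiguration fields using the public Go types; the concrete values are assumptions for illustration, not read from this node:

    package main

    import (
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := v1beta1.KubeletConfiguration{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "kubelet.config.k8s.io/v1beta1",
                Kind:       "KubeletConfiguration",
            },
            // Config-file replacement for --container-runtime-endpoint (socket path assumed).
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
            // Matches the "Adding static pod path" entry further down.
            StaticPodPath: "/etc/kubernetes/manifests",
        }
        out, err := yaml.Marshal(cfg)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }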
Sep 10 05:23:07.488305 kubelet[2399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 05:23:07.488663 kubelet[2399]: I0910 05:23:07.488493 2399 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 05:23:07.668606 kubelet[2399]: I0910 05:23:07.668490 2399 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 05:23:07.668606 kubelet[2399]: I0910 05:23:07.668522 2399 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 05:23:07.669088 kubelet[2399]: I0910 05:23:07.669065 2399 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 05:23:07.688855 kubelet[2399]: E0910 05:23:07.688797 2399 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:23:07.689653 kubelet[2399]: I0910 05:23:07.689614 2399 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 05:23:07.696285 kubelet[2399]: I0910 05:23:07.696259 2399 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 05:23:07.702108 kubelet[2399]: I0910 05:23:07.702089 2399 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 05:23:07.702672 kubelet[2399]: I0910 05:23:07.702644 2399 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 05:23:07.702836 kubelet[2399]: I0910 05:23:07.702796 2399 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 05:23:07.703017 kubelet[2399]: I0910 05:23:07.702827 2399 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 05:23:07.703119 kubelet[2399]: I0910 05:23:07.703035 2399 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 05:23:07.703119 kubelet[2399]: I0910 05:23:07.703044 2399 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 05:23:07.703189 kubelet[2399]: I0910 05:23:07.703175 2399 state_mem.go:36] "Initialized new in-memory state store" Sep 10 05:23:07.706215 kubelet[2399]: I0910 05:23:07.706187 2399 kubelet.go:408] "Attempting to sync node with API server" Sep 10 05:23:07.706292 kubelet[2399]: I0910 05:23:07.706251 2399 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 05:23:07.706335 kubelet[2399]: I0910 05:23:07.706318 2399 kubelet.go:314] "Adding apiserver pod source" Sep 10 05:23:07.706374 kubelet[2399]: I0910 05:23:07.706350 2399 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 05:23:07.710714 kubelet[2399]: I0910 05:23:07.710693 2399 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 05:23:07.711612 kubelet[2399]: W0910 05:23:07.711019 2399 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 10 05:23:07.711612 kubelet[2399]: E0910 05:23:07.711107 2399 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:23:07.711612 kubelet[2399]: W0910 05:23:07.711189 2399 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 10 05:23:07.711612 kubelet[2399]: E0910 05:23:07.711242 2399 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:23:07.711612 kubelet[2399]: I0910 05:23:07.711268 2399 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 05:23:07.711612 kubelet[2399]: W0910 05:23:07.711353 2399 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 05:23:07.714596 kubelet[2399]: I0910 05:23:07.713699 2399 server.go:1274] "Started kubelet" Sep 10 05:23:07.714596 kubelet[2399]: I0910 05:23:07.714436 2399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 05:23:07.715014 kubelet[2399]: I0910 05:23:07.714998 2399 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 05:23:07.715129 kubelet[2399]: I0910 05:23:07.715109 2399 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 05:23:07.715954 kubelet[2399]: I0910 05:23:07.715928 2399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 05:23:07.716320 kubelet[2399]: I0910 05:23:07.716306 2399 server.go:449] "Adding debug handlers to kubelet server" Sep 10 05:23:07.719106 kubelet[2399]: I0910 05:23:07.719090 2399 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 05:23:07.720902 kubelet[2399]: I0910 05:23:07.720873 2399 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 05:23:07.721015 kubelet[2399]: I0910 05:23:07.720996 2399 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 05:23:07.721101 kubelet[2399]: I0910 05:23:07.721082 2399 reconciler.go:26] "Reconciler: start to sync state" Sep 10 05:23:07.721473 kubelet[2399]: W0910 05:23:07.721432 2399 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 10 05:23:07.721517 kubelet[2399]: E0910 05:23:07.721478 2399 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:23:07.721607 kubelet[2399]: E0910 05:23:07.720677 2399 event.go:368] "Unable 
to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863d45fd7e42e38 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 05:23:07.71366252 +0000 UTC m=+0.265440406,LastTimestamp:2025-09-10 05:23:07.71366252 +0000 UTC m=+0.265440406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 05:23:07.721677 kubelet[2399]: E0910 05:23:07.721660 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:07.722053 kubelet[2399]: E0910 05:23:07.722021 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="200ms" Sep 10 05:23:07.722466 kubelet[2399]: I0910 05:23:07.722442 2399 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 05:23:07.722844 kubelet[2399]: E0910 05:23:07.722820 2399 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 05:23:07.723499 kubelet[2399]: I0910 05:23:07.723479 2399 factory.go:221] Registration of the containerd container factory successfully Sep 10 05:23:07.723499 kubelet[2399]: I0910 05:23:07.723493 2399 factory.go:221] Registration of the systemd container factory successfully Sep 10 05:23:07.734095 kubelet[2399]: I0910 05:23:07.733241 2399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 05:23:07.735197 kubelet[2399]: I0910 05:23:07.735167 2399 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 05:23:07.735244 kubelet[2399]: I0910 05:23:07.735211 2399 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 05:23:07.735244 kubelet[2399]: I0910 05:23:07.735238 2399 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 05:23:07.735315 kubelet[2399]: E0910 05:23:07.735276 2399 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 05:23:07.739655 kubelet[2399]: W0910 05:23:07.738401 2399 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 10 05:23:07.739655 kubelet[2399]: E0910 05:23:07.738464 2399 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:23:07.744541 kubelet[2399]: I0910 05:23:07.744513 2399 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 05:23:07.744541 kubelet[2399]: I0910 05:23:07.744530 2399 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 05:23:07.744622 kubelet[2399]: I0910 05:23:07.744549 2399 state_mem.go:36] "Initialized new in-memory state store" Sep 10 05:23:07.821837 kubelet[2399]: E0910 05:23:07.821780 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:07.836103 kubelet[2399]: E0910 05:23:07.836055 2399 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 05:23:07.922470 kubelet[2399]: E0910 05:23:07.922411 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:07.922875 kubelet[2399]: E0910 05:23:07.922838 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="400ms" Sep 10 05:23:08.022876 kubelet[2399]: E0910 05:23:08.022723 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:08.036932 kubelet[2399]: E0910 05:23:08.036882 2399 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 05:23:08.123243 kubelet[2399]: E0910 05:23:08.123204 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:08.150439 kubelet[2399]: I0910 05:23:08.150383 2399 policy_none.go:49] "None policy: Start" Sep 10 05:23:08.151199 kubelet[2399]: I0910 05:23:08.151167 2399 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 05:23:08.151199 kubelet[2399]: I0910 05:23:08.151200 2399 state_mem.go:35] "Initializing new in-memory state store" Sep 10 05:23:08.160650 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 05:23:08.173285 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 10 05:23:08.176864 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 05:23:08.199653 kubelet[2399]: I0910 05:23:08.199608 2399 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 05:23:08.199911 kubelet[2399]: I0910 05:23:08.199850 2399 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 05:23:08.199911 kubelet[2399]: I0910 05:23:08.199874 2399 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 05:23:08.200254 kubelet[2399]: I0910 05:23:08.200128 2399 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 05:23:08.201541 kubelet[2399]: E0910 05:23:08.201512 2399 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 05:23:08.301656 kubelet[2399]: I0910 05:23:08.301527 2399 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 05:23:08.301953 kubelet[2399]: E0910 05:23:08.301929 2399 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Sep 10 05:23:08.323689 kubelet[2399]: E0910 05:23:08.323634 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="800ms" Sep 10 05:23:08.446493 systemd[1]: Created slice kubepods-burstable-podde9a0d47e25f9ebf42400ef095b4277c.slice - libcontainer container kubepods-burstable-podde9a0d47e25f9ebf42400ef095b4277c.slice. Sep 10 05:23:08.481941 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 10 05:23:08.498915 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. 
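The "Failed to ensure lease exists, will retry" entries above back off from 200ms to 400ms to 800ms (and, further on, 1.6s) while the API server at 10.0.0.44:6443 is still refusing connections. A sketch of that capped doubling pattern; the cap value is an assumption for illustration, not taken from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        maxInterval := 7 * time.Second // assumed cap

        for i := 0; i < 6; i++ {
            fmt.Println("retry interval:", interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }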
Sep 10 05:23:08.504017 kubelet[2399]: I0910 05:23:08.503962 2399 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 05:23:08.504610 kubelet[2399]: E0910 05:23:08.504404 2399 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Sep 10 05:23:08.525936 kubelet[2399]: I0910 05:23:08.525879 2399 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:23:08.525936 kubelet[2399]: I0910 05:23:08.525923 2399 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:23:08.526099 kubelet[2399]: I0910 05:23:08.525953 2399 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:23:08.526099 kubelet[2399]: I0910 05:23:08.525987 2399 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:23:08.526099 kubelet[2399]: I0910 05:23:08.526019 2399 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 10 05:23:08.526099 kubelet[2399]: I0910 05:23:08.526056 2399 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de9a0d47e25f9ebf42400ef095b4277c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"de9a0d47e25f9ebf42400ef095b4277c\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:23:08.526099 kubelet[2399]: I0910 05:23:08.526086 2399 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de9a0d47e25f9ebf42400ef095b4277c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"de9a0d47e25f9ebf42400ef095b4277c\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:23:08.526312 kubelet[2399]: I0910 05:23:08.526113 2399 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de9a0d47e25f9ebf42400ef095b4277c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"de9a0d47e25f9ebf42400ef095b4277c\") " 
pod="kube-system/kube-apiserver-localhost" Sep 10 05:23:08.526312 kubelet[2399]: I0910 05:23:08.526143 2399 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:23:08.648126 kubelet[2399]: W0910 05:23:08.647985 2399 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 10 05:23:08.648126 kubelet[2399]: E0910 05:23:08.648034 2399 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:23:08.688007 kubelet[2399]: W0910 05:23:08.687922 2399 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 10 05:23:08.688148 kubelet[2399]: E0910 05:23:08.688011 2399 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:23:08.780066 kubelet[2399]: E0910 05:23:08.780034 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:08.780834 containerd[1592]: time="2025-09-10T05:23:08.780779198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:de9a0d47e25f9ebf42400ef095b4277c,Namespace:kube-system,Attempt:0,}" Sep 10 05:23:08.794954 kubelet[2399]: E0910 05:23:08.794909 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:08.795363 containerd[1592]: time="2025-09-10T05:23:08.795312390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 10 05:23:08.802636 kubelet[2399]: E0910 05:23:08.802606 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:08.803081 containerd[1592]: time="2025-09-10T05:23:08.803026968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 10 05:23:08.905662 kubelet[2399]: I0910 05:23:08.905593 2399 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 05:23:08.906045 kubelet[2399]: E0910 05:23:08.905968 2399 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Sep 10 05:23:08.977066 kubelet[2399]: W0910 05:23:08.976986 2399 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 10 05:23:08.977066 kubelet[2399]: E0910 05:23:08.977059 2399 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:23:09.007824 containerd[1592]: time="2025-09-10T05:23:09.007758146Z" level=info msg="connecting to shim 88f4f278acce2764638a5c72ad0fa371561239ba8c8b890dd42c21b25de228f5" address="unix:///run/containerd/s/a064d264290a9ec581ced8622c1a7adce7557fcdc4343c241a8bf530dcb4178d" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:23:09.018420 containerd[1592]: time="2025-09-10T05:23:09.018342630Z" level=info msg="connecting to shim 024cd946050f258bcc8ef7e408b08c73e33cd9e59c0c6aed5d15e295bb05e690" address="unix:///run/containerd/s/d69c6a69f700a69aefddf647b9544c873a22a6076422055e63b329902048db6a" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:23:09.040009 containerd[1592]: time="2025-09-10T05:23:09.039112810Z" level=info msg="connecting to shim e3c87dc431f60d730baba9a23a174db9fa6dfe499c8e7e4a9b6f8ffb327ad486" address="unix:///run/containerd/s/0a82a313a0c70e71f9b5e3183d0f8ce20b1d5faccae4bcd1db7e303c0e8a4d12" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:23:09.057884 systemd[1]: Started cri-containerd-88f4f278acce2764638a5c72ad0fa371561239ba8c8b890dd42c21b25de228f5.scope - libcontainer container 88f4f278acce2764638a5c72ad0fa371561239ba8c8b890dd42c21b25de228f5. Sep 10 05:23:09.062457 systemd[1]: Started cri-containerd-e3c87dc431f60d730baba9a23a174db9fa6dfe499c8e7e4a9b6f8ffb327ad486.scope - libcontainer container e3c87dc431f60d730baba9a23a174db9fa6dfe499c8e7e4a9b6f8ffb327ad486. Sep 10 05:23:09.067219 systemd[1]: Started cri-containerd-024cd946050f258bcc8ef7e408b08c73e33cd9e59c0c6aed5d15e295bb05e690.scope - libcontainer container 024cd946050f258bcc8ef7e408b08c73e33cd9e59c0c6aed5d15e295bb05e690. 
Sep 10 05:23:09.120212 containerd[1592]: time="2025-09-10T05:23:09.120154515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:de9a0d47e25f9ebf42400ef095b4277c,Namespace:kube-system,Attempt:0,} returns sandbox id \"88f4f278acce2764638a5c72ad0fa371561239ba8c8b890dd42c21b25de228f5\"" Sep 10 05:23:09.121419 kubelet[2399]: E0910 05:23:09.121399 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:09.123470 containerd[1592]: time="2025-09-10T05:23:09.123438078Z" level=info msg="CreateContainer within sandbox \"88f4f278acce2764638a5c72ad0fa371561239ba8c8b890dd42c21b25de228f5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 05:23:09.124008 kubelet[2399]: E0910 05:23:09.123982 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="1.6s" Sep 10 05:23:09.125032 containerd[1592]: time="2025-09-10T05:23:09.124931951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"024cd946050f258bcc8ef7e408b08c73e33cd9e59c0c6aed5d15e295bb05e690\"" Sep 10 05:23:09.126751 kubelet[2399]: E0910 05:23:09.126712 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:09.128762 containerd[1592]: time="2025-09-10T05:23:09.128707631Z" level=info msg="CreateContainer within sandbox \"024cd946050f258bcc8ef7e408b08c73e33cd9e59c0c6aed5d15e295bb05e690\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 05:23:09.132223 containerd[1592]: time="2025-09-10T05:23:09.132190358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3c87dc431f60d730baba9a23a174db9fa6dfe499c8e7e4a9b6f8ffb327ad486\"" Sep 10 05:23:09.132763 kubelet[2399]: E0910 05:23:09.132738 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:09.134745 containerd[1592]: time="2025-09-10T05:23:09.134700905Z" level=info msg="CreateContainer within sandbox \"e3c87dc431f60d730baba9a23a174db9fa6dfe499c8e7e4a9b6f8ffb327ad486\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 05:23:09.142762 containerd[1592]: time="2025-09-10T05:23:09.142727589Z" level=info msg="Container a76be2e0d717c38a55ea152ccc829d85145e8f4365a37c479a2d40220b9f06ea: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:09.147600 containerd[1592]: time="2025-09-10T05:23:09.147562998Z" level=info msg="Container 1f80bf26a00e6d83d447cc23a53ff0a088d1d50d9b7a1d0124c26c9f41bcfc74: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:09.148647 containerd[1592]: time="2025-09-10T05:23:09.148616974Z" level=info msg="Container b5f519dcd01ddafff3e6d75650625066b705ef659697673516cbdc22e1a19a5a: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:09.156937 containerd[1592]: time="2025-09-10T05:23:09.156850166Z" level=info 
msg="CreateContainer within sandbox \"024cd946050f258bcc8ef7e408b08c73e33cd9e59c0c6aed5d15e295bb05e690\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1f80bf26a00e6d83d447cc23a53ff0a088d1d50d9b7a1d0124c26c9f41bcfc74\"" Sep 10 05:23:09.157343 containerd[1592]: time="2025-09-10T05:23:09.157306833Z" level=info msg="StartContainer for \"1f80bf26a00e6d83d447cc23a53ff0a088d1d50d9b7a1d0124c26c9f41bcfc74\"" Sep 10 05:23:09.158526 containerd[1592]: time="2025-09-10T05:23:09.158494537Z" level=info msg="connecting to shim 1f80bf26a00e6d83d447cc23a53ff0a088d1d50d9b7a1d0124c26c9f41bcfc74" address="unix:///run/containerd/s/d69c6a69f700a69aefddf647b9544c873a22a6076422055e63b329902048db6a" protocol=ttrpc version=3 Sep 10 05:23:09.160895 containerd[1592]: time="2025-09-10T05:23:09.160860446Z" level=info msg="CreateContainer within sandbox \"88f4f278acce2764638a5c72ad0fa371561239ba8c8b890dd42c21b25de228f5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a76be2e0d717c38a55ea152ccc829d85145e8f4365a37c479a2d40220b9f06ea\"" Sep 10 05:23:09.161224 containerd[1592]: time="2025-09-10T05:23:09.161206763Z" level=info msg="StartContainer for \"a76be2e0d717c38a55ea152ccc829d85145e8f4365a37c479a2d40220b9f06ea\"" Sep 10 05:23:09.162150 containerd[1592]: time="2025-09-10T05:23:09.162124858Z" level=info msg="connecting to shim a76be2e0d717c38a55ea152ccc829d85145e8f4365a37c479a2d40220b9f06ea" address="unix:///run/containerd/s/a064d264290a9ec581ced8622c1a7adce7557fcdc4343c241a8bf530dcb4178d" protocol=ttrpc version=3 Sep 10 05:23:09.164051 containerd[1592]: time="2025-09-10T05:23:09.164021365Z" level=info msg="CreateContainer within sandbox \"e3c87dc431f60d730baba9a23a174db9fa6dfe499c8e7e4a9b6f8ffb327ad486\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b5f519dcd01ddafff3e6d75650625066b705ef659697673516cbdc22e1a19a5a\"" Sep 10 05:23:09.164863 containerd[1592]: time="2025-09-10T05:23:09.164843766Z" level=info msg="StartContainer for \"b5f519dcd01ddafff3e6d75650625066b705ef659697673516cbdc22e1a19a5a\"" Sep 10 05:23:09.165724 containerd[1592]: time="2025-09-10T05:23:09.165704952Z" level=info msg="connecting to shim b5f519dcd01ddafff3e6d75650625066b705ef659697673516cbdc22e1a19a5a" address="unix:///run/containerd/s/0a82a313a0c70e71f9b5e3183d0f8ce20b1d5faccae4bcd1db7e303c0e8a4d12" protocol=ttrpc version=3 Sep 10 05:23:09.182721 systemd[1]: Started cri-containerd-a76be2e0d717c38a55ea152ccc829d85145e8f4365a37c479a2d40220b9f06ea.scope - libcontainer container a76be2e0d717c38a55ea152ccc829d85145e8f4365a37c479a2d40220b9f06ea. Sep 10 05:23:09.187112 systemd[1]: Started cri-containerd-1f80bf26a00e6d83d447cc23a53ff0a088d1d50d9b7a1d0124c26c9f41bcfc74.scope - libcontainer container 1f80bf26a00e6d83d447cc23a53ff0a088d1d50d9b7a1d0124c26c9f41bcfc74. Sep 10 05:23:09.188938 systemd[1]: Started cri-containerd-b5f519dcd01ddafff3e6d75650625066b705ef659697673516cbdc22e1a19a5a.scope - libcontainer container b5f519dcd01ddafff3e6d75650625066b705ef659697673516cbdc22e1a19a5a. 
Sep 10 05:23:09.219301 kubelet[2399]: W0910 05:23:09.219213 2399 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 10 05:23:09.219301 kubelet[2399]: E0910 05:23:09.219274 2399 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:23:09.249624 containerd[1592]: time="2025-09-10T05:23:09.249538563Z" level=info msg="StartContainer for \"a76be2e0d717c38a55ea152ccc829d85145e8f4365a37c479a2d40220b9f06ea\" returns successfully" Sep 10 05:23:09.255507 containerd[1592]: time="2025-09-10T05:23:09.255432377Z" level=info msg="StartContainer for \"1f80bf26a00e6d83d447cc23a53ff0a088d1d50d9b7a1d0124c26c9f41bcfc74\" returns successfully" Sep 10 05:23:09.264303 containerd[1592]: time="2025-09-10T05:23:09.264152925Z" level=info msg="StartContainer for \"b5f519dcd01ddafff3e6d75650625066b705ef659697673516cbdc22e1a19a5a\" returns successfully" Sep 10 05:23:09.726605 kubelet[2399]: I0910 05:23:09.726482 2399 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 05:23:09.751201 kubelet[2399]: E0910 05:23:09.751159 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:09.752517 kubelet[2399]: E0910 05:23:09.752495 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:09.753306 kubelet[2399]: E0910 05:23:09.753279 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:10.728893 kubelet[2399]: E0910 05:23:10.728842 2399 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 05:23:10.755060 kubelet[2399]: E0910 05:23:10.755033 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:10.819828 kubelet[2399]: I0910 05:23:10.819785 2399 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 05:23:10.819828 kubelet[2399]: E0910 05:23:10.819826 2399 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 05:23:10.828621 kubelet[2399]: E0910 05:23:10.828568 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:10.928997 kubelet[2399]: E0910 05:23:10.928953 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:11.029632 kubelet[2399]: E0910 05:23:11.029536 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:11.030992 kubelet[2399]: E0910 05:23:11.030973 2399 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:11.129895 kubelet[2399]: E0910 05:23:11.129832 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:11.230225 kubelet[2399]: E0910 05:23:11.230181 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:11.330917 kubelet[2399]: E0910 05:23:11.330781 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:11.431816 kubelet[2399]: E0910 05:23:11.431761 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:11.532303 kubelet[2399]: E0910 05:23:11.532262 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:11.632871 kubelet[2399]: E0910 05:23:11.632758 2399 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:11.724923 kubelet[2399]: I0910 05:23:11.724880 2399 apiserver.go:52] "Watching apiserver" Sep 10 05:23:11.821986 kubelet[2399]: I0910 05:23:11.821935 2399 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 05:23:12.927468 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-9.scope)... Sep 10 05:23:12.927482 systemd[1]: Reloading... Sep 10 05:23:13.000609 zram_generator::config[2718]: No configuration found. Sep 10 05:23:13.246048 systemd[1]: Reloading finished in 318 ms. Sep 10 05:23:13.269467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:23:13.285692 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 05:23:13.286016 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:23:13.286073 systemd[1]: kubelet.service: Consumed 762ms CPU time, 130.6M memory peak. Sep 10 05:23:13.287890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:23:13.515778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:23:13.520011 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 05:23:13.571462 kubelet[2763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 05:23:13.571462 kubelet[2763]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 05:23:13.571462 kubelet[2763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 05:23:13.571871 kubelet[2763]: I0910 05:23:13.571543 2763 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 05:23:13.579161 kubelet[2763]: I0910 05:23:13.579083 2763 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 05:23:13.579161 kubelet[2763]: I0910 05:23:13.579109 2763 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 05:23:13.579640 kubelet[2763]: I0910 05:23:13.579621 2763 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 05:23:13.580873 kubelet[2763]: I0910 05:23:13.580849 2763 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 05:23:13.582809 kubelet[2763]: I0910 05:23:13.582771 2763 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 05:23:13.586502 kubelet[2763]: I0910 05:23:13.586457 2763 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 05:23:13.590622 kubelet[2763]: I0910 05:23:13.590574 2763 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 05:23:13.590721 kubelet[2763]: I0910 05:23:13.590694 2763 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 05:23:13.590876 kubelet[2763]: I0910 05:23:13.590837 2763 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 05:23:13.591041 kubelet[2763]: I0910 05:23:13.590864 2763 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 05:23:13.591116 kubelet[2763]: I0910 05:23:13.591054 2763 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 05:23:13.591116 kubelet[2763]: I0910 05:23:13.591063 2763 container_manager_linux.go:300] "Creating device plugin manager" 
Sep 10 05:23:13.591116 kubelet[2763]: I0910 05:23:13.591107 2763 state_mem.go:36] "Initialized new in-memory state store" Sep 10 05:23:13.591235 kubelet[2763]: I0910 05:23:13.591218 2763 kubelet.go:408] "Attempting to sync node with API server" Sep 10 05:23:13.591235 kubelet[2763]: I0910 05:23:13.591233 2763 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 05:23:13.591281 kubelet[2763]: I0910 05:23:13.591270 2763 kubelet.go:314] "Adding apiserver pod source" Sep 10 05:23:13.591307 kubelet[2763]: I0910 05:23:13.591282 2763 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 05:23:13.595476 kubelet[2763]: I0910 05:23:13.595445 2763 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 05:23:13.597609 kubelet[2763]: I0910 05:23:13.596207 2763 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 05:23:13.597609 kubelet[2763]: I0910 05:23:13.596859 2763 server.go:1274] "Started kubelet" Sep 10 05:23:13.599467 kubelet[2763]: I0910 05:23:13.599413 2763 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 05:23:13.599818 kubelet[2763]: I0910 05:23:13.599782 2763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 05:23:13.600415 kubelet[2763]: I0910 05:23:13.600394 2763 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 05:23:13.601620 kubelet[2763]: I0910 05:23:13.601552 2763 server.go:449] "Adding debug handlers to kubelet server" Sep 10 05:23:13.601704 kubelet[2763]: I0910 05:23:13.601659 2763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 05:23:13.602619 kubelet[2763]: I0910 05:23:13.602452 2763 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 05:23:13.604110 kubelet[2763]: I0910 05:23:13.604087 2763 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 05:23:13.604204 kubelet[2763]: I0910 05:23:13.604187 2763 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 05:23:13.604326 kubelet[2763]: I0910 05:23:13.604306 2763 reconciler.go:26] "Reconciler: start to sync state" Sep 10 05:23:13.604893 kubelet[2763]: I0910 05:23:13.604865 2763 factory.go:221] Registration of the systemd container factory successfully Sep 10 05:23:13.605050 kubelet[2763]: I0910 05:23:13.604960 2763 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 05:23:13.608370 kubelet[2763]: E0910 05:23:13.608343 2763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:23:13.613731 kubelet[2763]: E0910 05:23:13.613701 2763 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 05:23:13.615805 kubelet[2763]: I0910 05:23:13.615785 2763 factory.go:221] Registration of the containerd container factory successfully Sep 10 05:23:13.618031 kubelet[2763]: I0910 05:23:13.617906 2763 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 10 05:23:13.620195 kubelet[2763]: I0910 05:23:13.620159 2763 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 05:23:13.620465 kubelet[2763]: I0910 05:23:13.620435 2763 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 05:23:13.620694 kubelet[2763]: I0910 05:23:13.620628 2763 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 05:23:13.620803 kubelet[2763]: E0910 05:23:13.620752 2763 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 05:23:13.652483 kubelet[2763]: I0910 05:23:13.652451 2763 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 05:23:13.652483 kubelet[2763]: I0910 05:23:13.652468 2763 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 05:23:13.652483 kubelet[2763]: I0910 05:23:13.652490 2763 state_mem.go:36] "Initialized new in-memory state store" Sep 10 05:23:13.652667 kubelet[2763]: I0910 05:23:13.652648 2763 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 05:23:13.652692 kubelet[2763]: I0910 05:23:13.652658 2763 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 05:23:13.652692 kubelet[2763]: I0910 05:23:13.652680 2763 policy_none.go:49] "None policy: Start" Sep 10 05:23:13.653206 kubelet[2763]: I0910 05:23:13.653188 2763 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 05:23:13.653252 kubelet[2763]: I0910 05:23:13.653219 2763 state_mem.go:35] "Initializing new in-memory state store" Sep 10 05:23:13.653370 kubelet[2763]: I0910 05:23:13.653359 2763 state_mem.go:75] "Updated machine memory state" Sep 10 05:23:13.657594 kubelet[2763]: I0910 05:23:13.657475 2763 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 05:23:13.657702 kubelet[2763]: I0910 05:23:13.657687 2763 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 05:23:13.657747 kubelet[2763]: I0910 05:23:13.657703 2763 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 05:23:13.657928 kubelet[2763]: I0910 05:23:13.657909 2763 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 05:23:13.762951 kubelet[2763]: I0910 05:23:13.762902 2763 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 05:23:13.769594 kubelet[2763]: I0910 05:23:13.768928 2763 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 05:23:13.769594 kubelet[2763]: I0910 05:23:13.768986 2763 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 05:23:13.880417 sudo[2797]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 10 05:23:13.880762 sudo[2797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 10 05:23:13.905917 kubelet[2763]: I0910 05:23:13.905880 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de9a0d47e25f9ebf42400ef095b4277c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"de9a0d47e25f9ebf42400ef095b4277c\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:23:13.905917 kubelet[2763]: I0910 05:23:13.905916 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:23:13.906041 kubelet[2763]: I0910 05:23:13.905940 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 10 05:23:13.906041 kubelet[2763]: I0910 05:23:13.905958 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de9a0d47e25f9ebf42400ef095b4277c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"de9a0d47e25f9ebf42400ef095b4277c\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:23:13.906041 kubelet[2763]: I0910 05:23:13.905973 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de9a0d47e25f9ebf42400ef095b4277c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"de9a0d47e25f9ebf42400ef095b4277c\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:23:13.906041 kubelet[2763]: I0910 05:23:13.905988 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:23:13.906041 kubelet[2763]: I0910 05:23:13.906004 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:23:13.906186 kubelet[2763]: I0910 05:23:13.906021 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:23:13.906186 kubelet[2763]: I0910 05:23:13.906038 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:23:14.029114 kubelet[2763]: E0910 05:23:14.028979 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:14.029114 kubelet[2763]: E0910 05:23:14.028995 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:14.029233 kubelet[2763]: E0910 
05:23:14.029141 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:14.282152 sudo[2797]: pam_unix(sudo:session): session closed for user root Sep 10 05:23:14.591855 kubelet[2763]: I0910 05:23:14.591741 2763 apiserver.go:52] "Watching apiserver" Sep 10 05:23:14.605291 kubelet[2763]: I0910 05:23:14.605269 2763 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 05:23:14.634602 kubelet[2763]: E0910 05:23:14.634245 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:14.634602 kubelet[2763]: E0910 05:23:14.634342 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:14.640366 kubelet[2763]: E0910 05:23:14.640316 2763 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 05:23:14.640522 kubelet[2763]: E0910 05:23:14.640484 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:14.662062 kubelet[2763]: I0910 05:23:14.661997 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.661972015 podStartE2EDuration="1.661972015s" podCreationTimestamp="2025-09-10 05:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:23:14.655756093 +0000 UTC m=+1.130853542" watchObservedRunningTime="2025-09-10 05:23:14.661972015 +0000 UTC m=+1.137069464" Sep 10 05:23:14.669135 kubelet[2763]: I0910 05:23:14.669030 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.669016337 podStartE2EDuration="1.669016337s" podCreationTimestamp="2025-09-10 05:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:23:14.662242571 +0000 UTC m=+1.137340020" watchObservedRunningTime="2025-09-10 05:23:14.669016337 +0000 UTC m=+1.144113786" Sep 10 05:23:14.669210 kubelet[2763]: I0910 05:23:14.669155 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.669150403 podStartE2EDuration="1.669150403s" podCreationTimestamp="2025-09-10 05:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:23:14.668905025 +0000 UTC m=+1.144002484" watchObservedRunningTime="2025-09-10 05:23:14.669150403 +0000 UTC m=+1.144247852" Sep 10 05:23:15.636492 kubelet[2763]: E0910 05:23:15.635617 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:15.636492 kubelet[2763]: E0910 05:23:15.635829 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:15.636492 kubelet[2763]: E0910 05:23:15.635926 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:15.844310 sudo[1819]: pam_unix(sudo:session): session closed for user root Sep 10 05:23:15.845872 sshd[1818]: Connection closed by 10.0.0.1 port 41438 Sep 10 05:23:15.846342 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Sep 10 05:23:15.850948 systemd[1]: sshd@8-10.0.0.44:22-10.0.0.1:41438.service: Deactivated successfully. Sep 10 05:23:15.853212 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 05:23:15.853416 systemd[1]: session-9.scope: Consumed 4.596s CPU time, 265.8M memory peak. Sep 10 05:23:15.854948 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit. Sep 10 05:23:15.855994 systemd-logind[1569]: Removed session 9. Sep 10 05:23:17.870328 kubelet[2763]: I0910 05:23:17.870300 2763 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 05:23:17.871022 containerd[1592]: time="2025-09-10T05:23:17.870851279Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 05:23:17.871716 kubelet[2763]: I0910 05:23:17.871434 2763 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 05:23:18.846933 systemd[1]: Created slice kubepods-besteffort-podc86b65ba_f577_43eb_b2eb_5d52aeeeb089.slice - libcontainer container kubepods-besteffort-podc86b65ba_f577_43eb_b2eb_5d52aeeeb089.slice. Sep 10 05:23:18.860184 systemd[1]: Created slice kubepods-burstable-pod6b29174e_7e3e_438f_8c0a_fab5f153bb41.slice - libcontainer container kubepods-burstable-pod6b29174e_7e3e_438f_8c0a_fab5f153bb41.slice. 
Sep 10 05:23:18.936482 kubelet[2763]: I0910 05:23:18.936423 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c86b65ba-f577-43eb-b2eb-5d52aeeeb089-kube-proxy\") pod \"kube-proxy-fbczk\" (UID: \"c86b65ba-f577-43eb-b2eb-5d52aeeeb089\") " pod="kube-system/kube-proxy-fbczk" Sep 10 05:23:18.936983 kubelet[2763]: I0910 05:23:18.936510 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-run\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.936983 kubelet[2763]: I0910 05:23:18.936534 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-config-path\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.936983 kubelet[2763]: I0910 05:23:18.936606 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-host-proc-sys-kernel\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.936983 kubelet[2763]: I0910 05:23:18.936624 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q2tm\" (UniqueName: \"kubernetes.io/projected/6b29174e-7e3e-438f-8c0a-fab5f153bb41-kube-api-access-2q2tm\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.936983 kubelet[2763]: I0910 05:23:18.936681 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-cgroup\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.937188 kubelet[2763]: I0910 05:23:18.936699 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c86b65ba-f577-43eb-b2eb-5d52aeeeb089-lib-modules\") pod \"kube-proxy-fbczk\" (UID: \"c86b65ba-f577-43eb-b2eb-5d52aeeeb089\") " pod="kube-system/kube-proxy-fbczk" Sep 10 05:23:18.937188 kubelet[2763]: I0910 05:23:18.936752 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xp7k\" (UniqueName: \"kubernetes.io/projected/c86b65ba-f577-43eb-b2eb-5d52aeeeb089-kube-api-access-8xp7k\") pod \"kube-proxy-fbczk\" (UID: \"c86b65ba-f577-43eb-b2eb-5d52aeeeb089\") " pod="kube-system/kube-proxy-fbczk" Sep 10 05:23:18.937188 kubelet[2763]: I0910 05:23:18.936774 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-lib-modules\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.937188 kubelet[2763]: I0910 05:23:18.936792 2763 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-xtables-lock\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.937188 kubelet[2763]: I0910 05:23:18.936850 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-host-proc-sys-net\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.937188 kubelet[2763]: I0910 05:23:18.936873 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cni-path\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.937376 kubelet[2763]: I0910 05:23:18.936921 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-etc-cni-netd\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.937376 kubelet[2763]: I0910 05:23:18.936941 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b29174e-7e3e-438f-8c0a-fab5f153bb41-clustermesh-secrets\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.937376 kubelet[2763]: I0910 05:23:18.936955 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c86b65ba-f577-43eb-b2eb-5d52aeeeb089-xtables-lock\") pod \"kube-proxy-fbczk\" (UID: \"c86b65ba-f577-43eb-b2eb-5d52aeeeb089\") " pod="kube-system/kube-proxy-fbczk" Sep 10 05:23:18.937376 kubelet[2763]: I0910 05:23:18.937010 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-bpf-maps\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.937376 kubelet[2763]: I0910 05:23:18.937102 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b29174e-7e3e-438f-8c0a-fab5f153bb41-hubble-tls\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.937376 kubelet[2763]: I0910 05:23:18.937158 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-hostproc\") pod \"cilium-zmxnb\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " pod="kube-system/cilium-zmxnb" Sep 10 05:23:18.953119 systemd[1]: Created slice kubepods-besteffort-podadf6ea14_3192_4e33_8562_1c912a463a9c.slice - libcontainer container kubepods-besteffort-podadf6ea14_3192_4e33_8562_1c912a463a9c.slice. 
Sep 10 05:23:19.039609 kubelet[2763]: I0910 05:23:19.038359 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adf6ea14-3192-4e33-8562-1c912a463a9c-cilium-config-path\") pod \"cilium-operator-5d85765b45-49jkn\" (UID: \"adf6ea14-3192-4e33-8562-1c912a463a9c\") " pod="kube-system/cilium-operator-5d85765b45-49jkn" Sep 10 05:23:19.039609 kubelet[2763]: I0910 05:23:19.038437 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnx92\" (UniqueName: \"kubernetes.io/projected/adf6ea14-3192-4e33-8562-1c912a463a9c-kube-api-access-qnx92\") pod \"cilium-operator-5d85765b45-49jkn\" (UID: \"adf6ea14-3192-4e33-8562-1c912a463a9c\") " pod="kube-system/cilium-operator-5d85765b45-49jkn" Sep 10 05:23:19.157337 kubelet[2763]: E0910 05:23:19.157234 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:19.157831 containerd[1592]: time="2025-09-10T05:23:19.157784209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fbczk,Uid:c86b65ba-f577-43eb-b2eb-5d52aeeeb089,Namespace:kube-system,Attempt:0,}" Sep 10 05:23:19.163900 kubelet[2763]: E0910 05:23:19.163871 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:19.164271 containerd[1592]: time="2025-09-10T05:23:19.164221823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmxnb,Uid:6b29174e-7e3e-438f-8c0a-fab5f153bb41,Namespace:kube-system,Attempt:0,}" Sep 10 05:23:19.257206 kubelet[2763]: E0910 05:23:19.257133 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:19.257711 containerd[1592]: time="2025-09-10T05:23:19.257601239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-49jkn,Uid:adf6ea14-3192-4e33-8562-1c912a463a9c,Namespace:kube-system,Attempt:0,}" Sep 10 05:23:19.309848 kubelet[2763]: E0910 05:23:19.309813 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:19.509705 update_engine[1570]: I20250910 05:23:19.509634 1570 update_attempter.cc:509] Updating boot flags... 
Sep 10 05:23:19.536009 containerd[1592]: time="2025-09-10T05:23:19.535887600Z" level=info msg="connecting to shim 8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759" address="unix:///run/containerd/s/6c19558172cc39fb0c201597115937429bc63a51e5b66fb478acb3553802a2b0" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:23:19.538215 containerd[1592]: time="2025-09-10T05:23:19.538176700Z" level=info msg="connecting to shim 4f6fea2a185a977101d83f5d559e84ab3cc5d32470c10e103aefa894d14c9697" address="unix:///run/containerd/s/15d4b953d0cb359ffcd31949554cb388c0563adc2e76fd9369dd536560eeecdf" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:23:19.545598 containerd[1592]: time="2025-09-10T05:23:19.545312139Z" level=info msg="connecting to shim 8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d" address="unix:///run/containerd/s/11cf0e8854f473063ca142dae2592c6d476c89076043a3a7beaee641104af18f" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:23:19.642935 kubelet[2763]: E0910 05:23:19.642361 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:19.697725 systemd[1]: Started cri-containerd-4f6fea2a185a977101d83f5d559e84ab3cc5d32470c10e103aefa894d14c9697.scope - libcontainer container 4f6fea2a185a977101d83f5d559e84ab3cc5d32470c10e103aefa894d14c9697. Sep 10 05:23:19.699318 systemd[1]: Started cri-containerd-8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d.scope - libcontainer container 8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d. Sep 10 05:23:19.701257 systemd[1]: Started cri-containerd-8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759.scope - libcontainer container 8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759. 
Sep 10 05:23:19.780065 containerd[1592]: time="2025-09-10T05:23:19.779940168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fbczk,Uid:c86b65ba-f577-43eb-b2eb-5d52aeeeb089,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f6fea2a185a977101d83f5d559e84ab3cc5d32470c10e103aefa894d14c9697\"" Sep 10 05:23:19.781766 kubelet[2763]: E0910 05:23:19.781728 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:19.786950 containerd[1592]: time="2025-09-10T05:23:19.786855950Z" level=info msg="CreateContainer within sandbox \"4f6fea2a185a977101d83f5d559e84ab3cc5d32470c10e103aefa894d14c9697\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 05:23:19.788207 containerd[1592]: time="2025-09-10T05:23:19.788113520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmxnb,Uid:6b29174e-7e3e-438f-8c0a-fab5f153bb41,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\"" Sep 10 05:23:19.789596 kubelet[2763]: E0910 05:23:19.789559 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:19.799818 containerd[1592]: time="2025-09-10T05:23:19.799770802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-49jkn,Uid:adf6ea14-3192-4e33-8562-1c912a463a9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d\"" Sep 10 05:23:19.800633 containerd[1592]: time="2025-09-10T05:23:19.799930095Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 05:23:19.800695 kubelet[2763]: E0910 05:23:19.800304 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:19.804770 containerd[1592]: time="2025-09-10T05:23:19.804713534Z" level=info msg="Container 256df507d028b8838c1112dc5c02fb068350563e60ce514e7e93ad3988339d2d: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:19.817195 containerd[1592]: time="2025-09-10T05:23:19.817136551Z" level=info msg="CreateContainer within sandbox \"4f6fea2a185a977101d83f5d559e84ab3cc5d32470c10e103aefa894d14c9697\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"256df507d028b8838c1112dc5c02fb068350563e60ce514e7e93ad3988339d2d\"" Sep 10 05:23:19.817628 containerd[1592]: time="2025-09-10T05:23:19.817568833Z" level=info msg="StartContainer for \"256df507d028b8838c1112dc5c02fb068350563e60ce514e7e93ad3988339d2d\"" Sep 10 05:23:19.819100 containerd[1592]: time="2025-09-10T05:23:19.819027735Z" level=info msg="connecting to shim 256df507d028b8838c1112dc5c02fb068350563e60ce514e7e93ad3988339d2d" address="unix:///run/containerd/s/15d4b953d0cb359ffcd31949554cb388c0563adc2e76fd9369dd536560eeecdf" protocol=ttrpc version=3 Sep 10 05:23:19.840711 systemd[1]: Started cri-containerd-256df507d028b8838c1112dc5c02fb068350563e60ce514e7e93ad3988339d2d.scope - libcontainer container 256df507d028b8838c1112dc5c02fb068350563e60ce514e7e93ad3988339d2d. 
Sep 10 05:23:19.883611 containerd[1592]: time="2025-09-10T05:23:19.882643132Z" level=info msg="StartContainer for \"256df507d028b8838c1112dc5c02fb068350563e60ce514e7e93ad3988339d2d\" returns successfully" Sep 10 05:23:20.648102 kubelet[2763]: E0910 05:23:20.647197 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:25.247782 kubelet[2763]: E0910 05:23:25.247723 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:25.256964 kubelet[2763]: I0910 05:23:25.256889 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fbczk" podStartSLOduration=7.25686092 podStartE2EDuration="7.25686092s" podCreationTimestamp="2025-09-10 05:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:23:20.657381428 +0000 UTC m=+7.132478877" watchObservedRunningTime="2025-09-10 05:23:25.25686092 +0000 UTC m=+11.731958369" Sep 10 05:23:25.282621 kubelet[2763]: E0910 05:23:25.282557 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:29.469491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount345115367.mount: Deactivated successfully. Sep 10 05:23:33.648650 containerd[1592]: time="2025-09-10T05:23:33.648570841Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:23:33.649361 containerd[1592]: time="2025-09-10T05:23:33.649314974Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 10 05:23:33.650424 containerd[1592]: time="2025-09-10T05:23:33.650393858Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:23:33.651823 containerd[1592]: time="2025-09-10T05:23:33.651794028Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.85184126s" Sep 10 05:23:33.651866 containerd[1592]: time="2025-09-10T05:23:33.651832110Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 10 05:23:33.655757 containerd[1592]: time="2025-09-10T05:23:33.655727083Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 05:23:33.664273 containerd[1592]: time="2025-09-10T05:23:33.664222782Z" level=info msg="CreateContainer within sandbox 
\"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 05:23:33.673351 containerd[1592]: time="2025-09-10T05:23:33.673312430Z" level=info msg="Container 463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:33.677185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322542155.mount: Deactivated successfully. Sep 10 05:23:33.681299 containerd[1592]: time="2025-09-10T05:23:33.681265245Z" level=info msg="CreateContainer within sandbox \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\"" Sep 10 05:23:33.681737 containerd[1592]: time="2025-09-10T05:23:33.681658727Z" level=info msg="StartContainer for \"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\"" Sep 10 05:23:33.682472 containerd[1592]: time="2025-09-10T05:23:33.682448005Z" level=info msg="connecting to shim 463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25" address="unix:///run/containerd/s/6c19558172cc39fb0c201597115937429bc63a51e5b66fb478acb3553802a2b0" protocol=ttrpc version=3 Sep 10 05:23:33.736711 systemd[1]: Started cri-containerd-463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25.scope - libcontainer container 463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25. Sep 10 05:23:33.771562 containerd[1592]: time="2025-09-10T05:23:33.771506821Z" level=info msg="StartContainer for \"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\" returns successfully" Sep 10 05:23:33.782173 systemd[1]: cri-containerd-463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25.scope: Deactivated successfully. Sep 10 05:23:33.785286 containerd[1592]: time="2025-09-10T05:23:33.785019589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\" id:\"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\" pid:3201 exited_at:{seconds:1757481813 nanos:784494739}" Sep 10 05:23:33.785286 containerd[1592]: time="2025-09-10T05:23:33.785130498Z" level=info msg="received exit event container_id:\"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\" id:\"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\" pid:3201 exited_at:{seconds:1757481813 nanos:784494739}" Sep 10 05:23:33.803913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25-rootfs.mount: Deactivated successfully. 
Sep 10 05:23:34.675206 kubelet[2763]: E0910 05:23:34.675164 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:34.677003 containerd[1592]: time="2025-09-10T05:23:34.676957669Z" level=info msg="CreateContainer within sandbox \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 05:23:34.691609 containerd[1592]: time="2025-09-10T05:23:34.691548280Z" level=info msg="Container bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:34.701430 containerd[1592]: time="2025-09-10T05:23:34.701376224Z" level=info msg="CreateContainer within sandbox \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\"" Sep 10 05:23:34.702039 containerd[1592]: time="2025-09-10T05:23:34.701989109Z" level=info msg="StartContainer for \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\"" Sep 10 05:23:34.703115 containerd[1592]: time="2025-09-10T05:23:34.703079254Z" level=info msg="connecting to shim bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79" address="unix:///run/containerd/s/6c19558172cc39fb0c201597115937429bc63a51e5b66fb478acb3553802a2b0" protocol=ttrpc version=3 Sep 10 05:23:34.727737 systemd[1]: Started cri-containerd-bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79.scope - libcontainer container bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79. Sep 10 05:23:34.759377 containerd[1592]: time="2025-09-10T05:23:34.759339625Z" level=info msg="StartContainer for \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\" returns successfully" Sep 10 05:23:34.772886 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 05:23:34.773133 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 05:23:34.773304 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 10 05:23:34.775068 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 05:23:34.777017 containerd[1592]: time="2025-09-10T05:23:34.776981076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\" id:\"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\" pid:3246 exited_at:{seconds:1757481814 nanos:776617359}" Sep 10 05:23:34.777050 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 05:23:34.777493 systemd[1]: cri-containerd-bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79.scope: Deactivated successfully. Sep 10 05:23:34.777557 containerd[1592]: time="2025-09-10T05:23:34.777519741Z" level=info msg="received exit event container_id:\"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\" id:\"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\" pid:3246 exited_at:{seconds:1757481814 nanos:776617359}" Sep 10 05:23:34.806313 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 10 05:23:35.678118 kubelet[2763]: E0910 05:23:35.677983 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:35.681391 containerd[1592]: time="2025-09-10T05:23:35.681342634Z" level=info msg="CreateContainer within sandbox \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 05:23:35.690933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79-rootfs.mount: Deactivated successfully. Sep 10 05:23:35.695568 containerd[1592]: time="2025-09-10T05:23:35.694606285Z" level=info msg="Container 4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:35.703158 containerd[1592]: time="2025-09-10T05:23:35.703104929Z" level=info msg="CreateContainer within sandbox \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\"" Sep 10 05:23:35.703814 containerd[1592]: time="2025-09-10T05:23:35.703782044Z" level=info msg="StartContainer for \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\"" Sep 10 05:23:35.705061 containerd[1592]: time="2025-09-10T05:23:35.705036959Z" level=info msg="connecting to shim 4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d" address="unix:///run/containerd/s/6c19558172cc39fb0c201597115937429bc63a51e5b66fb478acb3553802a2b0" protocol=ttrpc version=3 Sep 10 05:23:35.726713 systemd[1]: Started cri-containerd-4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d.scope - libcontainer container 4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d. Sep 10 05:23:35.767415 systemd[1]: cri-containerd-4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d.scope: Deactivated successfully. Sep 10 05:23:35.769838 containerd[1592]: time="2025-09-10T05:23:35.769800378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\" id:\"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\" pid:3300 exited_at:{seconds:1757481815 nanos:769602305}" Sep 10 05:23:35.769838 containerd[1592]: time="2025-09-10T05:23:35.769801480Z" level=info msg="received exit event container_id:\"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\" id:\"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\" pid:3300 exited_at:{seconds:1757481815 nanos:769602305}" Sep 10 05:23:35.770536 containerd[1592]: time="2025-09-10T05:23:35.770508943Z" level=info msg="StartContainer for \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\" returns successfully" Sep 10 05:23:35.789618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d-rootfs.mount: Deactivated successfully. 
Sep 10 05:23:36.217954 containerd[1592]: time="2025-09-10T05:23:36.217897266Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:23:36.218657 containerd[1592]: time="2025-09-10T05:23:36.218604809Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 10 05:23:36.219750 containerd[1592]: time="2025-09-10T05:23:36.219712345Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:23:36.220808 containerd[1592]: time="2025-09-10T05:23:36.220769286Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.565000604s" Sep 10 05:23:36.220808 containerd[1592]: time="2025-09-10T05:23:36.220799472Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 10 05:23:36.222738 containerd[1592]: time="2025-09-10T05:23:36.222707476Z" level=info msg="CreateContainer within sandbox \"8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 05:23:36.230605 containerd[1592]: time="2025-09-10T05:23:36.230554688Z" level=info msg="Container 9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:36.237655 containerd[1592]: time="2025-09-10T05:23:36.237623613Z" level=info msg="CreateContainer within sandbox \"8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\"" Sep 10 05:23:36.238109 containerd[1592]: time="2025-09-10T05:23:36.238070906Z" level=info msg="StartContainer for \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\"" Sep 10 05:23:36.238939 containerd[1592]: time="2025-09-10T05:23:36.238915347Z" level=info msg="connecting to shim 9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432" address="unix:///run/containerd/s/11cf0e8854f473063ca142dae2592c6d476c89076043a3a7beaee641104af18f" protocol=ttrpc version=3 Sep 10 05:23:36.261719 systemd[1]: Started cri-containerd-9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432.scope - libcontainer container 9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432. 
Sep 10 05:23:36.289529 containerd[1592]: time="2025-09-10T05:23:36.289493300Z" level=info msg="StartContainer for \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" returns successfully" Sep 10 05:23:36.680862 kubelet[2763]: E0910 05:23:36.680825 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:36.685238 kubelet[2763]: E0910 05:23:36.685203 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:36.687215 containerd[1592]: time="2025-09-10T05:23:36.687162659Z" level=info msg="CreateContainer within sandbox \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 05:23:36.811623 kubelet[2763]: I0910 05:23:36.811341 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-49jkn" podStartSLOduration=2.390793144 podStartE2EDuration="18.811317406s" podCreationTimestamp="2025-09-10 05:23:18 +0000 UTC" firstStartedPulling="2025-09-10 05:23:19.800874489 +0000 UTC m=+6.275971928" lastFinishedPulling="2025-09-10 05:23:36.221398741 +0000 UTC m=+22.696496190" observedRunningTime="2025-09-10 05:23:36.691763656 +0000 UTC m=+23.166861095" watchObservedRunningTime="2025-09-10 05:23:36.811317406 +0000 UTC m=+23.286414855" Sep 10 05:23:36.821861 containerd[1592]: time="2025-09-10T05:23:36.821813047Z" level=info msg="Container f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:36.826352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2048037379.mount: Deactivated successfully. Sep 10 05:23:36.836222 containerd[1592]: time="2025-09-10T05:23:36.836173968Z" level=info msg="CreateContainer within sandbox \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\"" Sep 10 05:23:36.838199 containerd[1592]: time="2025-09-10T05:23:36.838164067Z" level=info msg="StartContainer for \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\"" Sep 10 05:23:36.839792 containerd[1592]: time="2025-09-10T05:23:36.839711772Z" level=info msg="connecting to shim f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7" address="unix:///run/containerd/s/6c19558172cc39fb0c201597115937429bc63a51e5b66fb478acb3553802a2b0" protocol=ttrpc version=3 Sep 10 05:23:36.888715 systemd[1]: Started cri-containerd-f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7.scope - libcontainer container f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7. Sep 10 05:23:36.929180 systemd[1]: cri-containerd-f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7.scope: Deactivated successfully. 
Sep 10 05:23:36.930270 containerd[1592]: time="2025-09-10T05:23:36.930237043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\" id:\"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\" pid:3383 exited_at:{seconds:1757481816 nanos:929359270}" Sep 10 05:23:36.931718 containerd[1592]: time="2025-09-10T05:23:36.931640377Z" level=info msg="received exit event container_id:\"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\" id:\"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\" pid:3383 exited_at:{seconds:1757481816 nanos:929359270}" Sep 10 05:23:36.933405 containerd[1592]: time="2025-09-10T05:23:36.933380113Z" level=info msg="StartContainer for \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\" returns successfully" Sep 10 05:23:36.959280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7-rootfs.mount: Deactivated successfully. Sep 10 05:23:37.690510 kubelet[2763]: E0910 05:23:37.690477 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:37.691222 kubelet[2763]: E0910 05:23:37.690622 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:37.692174 containerd[1592]: time="2025-09-10T05:23:37.692127152Z" level=info msg="CreateContainer within sandbox \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 05:23:37.707388 containerd[1592]: time="2025-09-10T05:23:37.707341683Z" level=info msg="Container 297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:37.711707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2504727032.mount: Deactivated successfully. Sep 10 05:23:37.717443 containerd[1592]: time="2025-09-10T05:23:37.717395986Z" level=info msg="CreateContainer within sandbox \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\"" Sep 10 05:23:37.718018 containerd[1592]: time="2025-09-10T05:23:37.717978083Z" level=info msg="StartContainer for \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\"" Sep 10 05:23:37.718932 containerd[1592]: time="2025-09-10T05:23:37.718905699Z" level=info msg="connecting to shim 297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3" address="unix:///run/containerd/s/6c19558172cc39fb0c201597115937429bc63a51e5b66fb478acb3553802a2b0" protocol=ttrpc version=3 Sep 10 05:23:37.736730 systemd[1]: Started cri-containerd-297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3.scope - libcontainer container 297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3. 
Sep 10 05:23:37.775364 containerd[1592]: time="2025-09-10T05:23:37.775326829Z" level=info msg="StartContainer for \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" returns successfully" Sep 10 05:23:37.845252 containerd[1592]: time="2025-09-10T05:23:37.845199711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" id:\"1534d90d6a472bcd622e77d7559df89adba175c1ee8f1165a94be154dfe3ef31\" pid:3451 exited_at:{seconds:1757481817 nanos:844791913}" Sep 10 05:23:37.857518 kubelet[2763]: I0910 05:23:37.857466 2763 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 05:23:37.889648 systemd[1]: Created slice kubepods-burstable-pod5f021ad6_19e4_4620_a173_d73b17c18ec6.slice - libcontainer container kubepods-burstable-pod5f021ad6_19e4_4620_a173_d73b17c18ec6.slice. Sep 10 05:23:37.895203 systemd[1]: Created slice kubepods-burstable-pod70f0563c_5cdf_4c42_a0a1_0a671a4feb6f.slice - libcontainer container kubepods-burstable-pod70f0563c_5cdf_4c42_a0a1_0a671a4feb6f.slice. Sep 10 05:23:38.060765 kubelet[2763]: I0910 05:23:38.060714 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70f0563c-5cdf-4c42-a0a1-0a671a4feb6f-config-volume\") pod \"coredns-7c65d6cfc9-ph2qk\" (UID: \"70f0563c-5cdf-4c42-a0a1-0a671a4feb6f\") " pod="kube-system/coredns-7c65d6cfc9-ph2qk" Sep 10 05:23:38.061053 kubelet[2763]: I0910 05:23:38.060848 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqc99\" (UniqueName: \"kubernetes.io/projected/5f021ad6-19e4-4620-a173-d73b17c18ec6-kube-api-access-nqc99\") pod \"coredns-7c65d6cfc9-rwhx5\" (UID: \"5f021ad6-19e4-4620-a173-d73b17c18ec6\") " pod="kube-system/coredns-7c65d6cfc9-rwhx5" Sep 10 05:23:38.061053 kubelet[2763]: I0910 05:23:38.060889 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hwnr\" (UniqueName: \"kubernetes.io/projected/70f0563c-5cdf-4c42-a0a1-0a671a4feb6f-kube-api-access-5hwnr\") pod \"coredns-7c65d6cfc9-ph2qk\" (UID: \"70f0563c-5cdf-4c42-a0a1-0a671a4feb6f\") " pod="kube-system/coredns-7c65d6cfc9-ph2qk" Sep 10 05:23:38.061053 kubelet[2763]: I0910 05:23:38.060917 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f021ad6-19e4-4620-a173-d73b17c18ec6-config-volume\") pod \"coredns-7c65d6cfc9-rwhx5\" (UID: \"5f021ad6-19e4-4620-a173-d73b17c18ec6\") " pod="kube-system/coredns-7c65d6cfc9-rwhx5" Sep 10 05:23:38.494802 kubelet[2763]: E0910 05:23:38.494751 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:38.498789 kubelet[2763]: E0910 05:23:38.498478 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:38.499106 containerd[1592]: time="2025-09-10T05:23:38.499060720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ph2qk,Uid:70f0563c-5cdf-4c42-a0a1-0a671a4feb6f,Namespace:kube-system,Attempt:0,}" Sep 10 05:23:38.507328 containerd[1592]: time="2025-09-10T05:23:38.507272541Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rwhx5,Uid:5f021ad6-19e4-4620-a173-d73b17c18ec6,Namespace:kube-system,Attempt:0,}" Sep 10 05:23:38.723380 kubelet[2763]: E0910 05:23:38.723332 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:38.743122 kubelet[2763]: I0910 05:23:38.743047 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zmxnb" podStartSLOduration=6.886947357 podStartE2EDuration="20.743023522s" podCreationTimestamp="2025-09-10 05:23:18 +0000 UTC" firstStartedPulling="2025-09-10 05:23:19.799399816 +0000 UTC m=+6.274497265" lastFinishedPulling="2025-09-10 05:23:33.65547598 +0000 UTC m=+20.130573430" observedRunningTime="2025-09-10 05:23:38.740619886 +0000 UTC m=+25.215717335" watchObservedRunningTime="2025-09-10 05:23:38.743023522 +0000 UTC m=+25.218120971" Sep 10 05:23:39.725309 kubelet[2763]: E0910 05:23:39.725262 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:39.952012 systemd-networkd[1496]: cilium_host: Link UP Sep 10 05:23:39.952314 systemd-networkd[1496]: cilium_net: Link UP Sep 10 05:23:39.952601 systemd-networkd[1496]: cilium_net: Gained carrier Sep 10 05:23:39.952793 systemd-networkd[1496]: cilium_host: Gained carrier Sep 10 05:23:40.052240 systemd-networkd[1496]: cilium_vxlan: Link UP Sep 10 05:23:40.052253 systemd-networkd[1496]: cilium_vxlan: Gained carrier Sep 10 05:23:40.256622 kernel: NET: Registered PF_ALG protocol family Sep 10 05:23:40.528721 systemd-networkd[1496]: cilium_host: Gained IPv6LL Sep 10 05:23:40.592820 systemd-networkd[1496]: cilium_net: Gained IPv6LL Sep 10 05:23:40.727912 kubelet[2763]: E0910 05:23:40.727852 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:40.893400 systemd-networkd[1496]: lxc_health: Link UP Sep 10 05:23:40.894114 systemd-networkd[1496]: lxc_health: Gained carrier Sep 10 05:23:41.139471 systemd-networkd[1496]: lxc7199d520d06d: Link UP Sep 10 05:23:41.156617 kernel: eth0: renamed from tmp80e75 Sep 10 05:23:41.168615 kernel: eth0: renamed from tmpc1a10 Sep 10 05:23:41.180303 systemd-networkd[1496]: lxc6587ac895850: Link UP Sep 10 05:23:41.181120 systemd-networkd[1496]: lxc7199d520d06d: Gained carrier Sep 10 05:23:41.181572 systemd-networkd[1496]: lxc6587ac895850: Gained carrier Sep 10 05:23:41.552767 systemd-networkd[1496]: cilium_vxlan: Gained IPv6LL Sep 10 05:23:41.683794 systemd[1]: Started sshd@9-10.0.0.44:22-10.0.0.1:41834.service - OpenSSH per-connection server daemon (10.0.0.1:41834). Sep 10 05:23:41.730856 kubelet[2763]: E0910 05:23:41.730821 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:41.739031 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 41834 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:23:41.741572 sshd-session[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:23:41.748798 systemd-logind[1569]: New session 10 of user core. Sep 10 05:23:41.757118 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 10 05:23:42.049134 sshd[3915]: Connection closed by 10.0.0.1 port 41834 Sep 10 05:23:42.049491 sshd-session[3911]: pam_unix(sshd:session): session closed for user core Sep 10 05:23:42.054273 systemd[1]: sshd@9-10.0.0.44:22-10.0.0.1:41834.service: Deactivated successfully. Sep 10 05:23:42.056334 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 05:23:42.057261 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit. Sep 10 05:23:42.058654 systemd-logind[1569]: Removed session 10. Sep 10 05:23:42.576764 systemd-networkd[1496]: lxc6587ac895850: Gained IPv6LL Sep 10 05:23:42.731014 kubelet[2763]: E0910 05:23:42.730979 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:42.768773 systemd-networkd[1496]: lxc_health: Gained IPv6LL Sep 10 05:23:42.960794 systemd-networkd[1496]: lxc7199d520d06d: Gained IPv6LL Sep 10 05:23:44.506027 containerd[1592]: time="2025-09-10T05:23:44.505964273Z" level=info msg="connecting to shim c1a10b95041571a2dfc2a3fad309f83bebd87f9429387e3470893ee328f7cfbe" address="unix:///run/containerd/s/9176d5cbb1c93c7767da6f190505e2e19538f76344cab6e4eaad8ff18d76ada0" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:23:44.508112 containerd[1592]: time="2025-09-10T05:23:44.508062468Z" level=info msg="connecting to shim 80e75f7a442aeff7608e4d558100b1b7825cb1b28d008261689ce9db8ebdf69e" address="unix:///run/containerd/s/f6ca6bccf58b24cc12968b6e94802147d5653b91217ed13b7dc93f60a61cd3ed" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:23:44.532953 systemd[1]: Started cri-containerd-80e75f7a442aeff7608e4d558100b1b7825cb1b28d008261689ce9db8ebdf69e.scope - libcontainer container 80e75f7a442aeff7608e4d558100b1b7825cb1b28d008261689ce9db8ebdf69e. Sep 10 05:23:44.538299 systemd[1]: Started cri-containerd-c1a10b95041571a2dfc2a3fad309f83bebd87f9429387e3470893ee328f7cfbe.scope - libcontainer container c1a10b95041571a2dfc2a3fad309f83bebd87f9429387e3470893ee328f7cfbe. 
Sep 10 05:23:44.550624 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 05:23:44.553172 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 05:23:44.588568 containerd[1592]: time="2025-09-10T05:23:44.588532818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rwhx5,Uid:5f021ad6-19e4-4620-a173-d73b17c18ec6,Namespace:kube-system,Attempt:0,} returns sandbox id \"80e75f7a442aeff7608e4d558100b1b7825cb1b28d008261689ce9db8ebdf69e\"" Sep 10 05:23:44.588698 containerd[1592]: time="2025-09-10T05:23:44.588627125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ph2qk,Uid:70f0563c-5cdf-4c42-a0a1-0a671a4feb6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1a10b95041571a2dfc2a3fad309f83bebd87f9429387e3470893ee328f7cfbe\"" Sep 10 05:23:44.589367 kubelet[2763]: E0910 05:23:44.589333 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:44.589757 kubelet[2763]: E0910 05:23:44.589333 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:44.590833 containerd[1592]: time="2025-09-10T05:23:44.590809218Z" level=info msg="CreateContainer within sandbox \"c1a10b95041571a2dfc2a3fad309f83bebd87f9429387e3470893ee328f7cfbe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 05:23:44.592457 containerd[1592]: time="2025-09-10T05:23:44.592412894Z" level=info msg="CreateContainer within sandbox \"80e75f7a442aeff7608e4d558100b1b7825cb1b28d008261689ce9db8ebdf69e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 05:23:44.614494 containerd[1592]: time="2025-09-10T05:23:44.614445232Z" level=info msg="Container b35cd0532d14d2c3c6ac30cb2e86097ff444b761b3f1f91a04952e0bbe9be1a3: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:44.654602 containerd[1592]: time="2025-09-10T05:23:44.654560982Z" level=info msg="Container 0fbf161347549638648c508bb1d80971dc4c45fad86ac00f7e19fea293ec19bc: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:23:44.662285 containerd[1592]: time="2025-09-10T05:23:44.662248087Z" level=info msg="CreateContainer within sandbox \"c1a10b95041571a2dfc2a3fad309f83bebd87f9429387e3470893ee328f7cfbe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b35cd0532d14d2c3c6ac30cb2e86097ff444b761b3f1f91a04952e0bbe9be1a3\"" Sep 10 05:23:44.662888 containerd[1592]: time="2025-09-10T05:23:44.662844118Z" level=info msg="StartContainer for \"b35cd0532d14d2c3c6ac30cb2e86097ff444b761b3f1f91a04952e0bbe9be1a3\"" Sep 10 05:23:44.663834 containerd[1592]: time="2025-09-10T05:23:44.663808751Z" level=info msg="connecting to shim b35cd0532d14d2c3c6ac30cb2e86097ff444b761b3f1f91a04952e0bbe9be1a3" address="unix:///run/containerd/s/9176d5cbb1c93c7767da6f190505e2e19538f76344cab6e4eaad8ff18d76ada0" protocol=ttrpc version=3 Sep 10 05:23:44.667886 containerd[1592]: time="2025-09-10T05:23:44.667855239Z" level=info msg="CreateContainer within sandbox \"80e75f7a442aeff7608e4d558100b1b7825cb1b28d008261689ce9db8ebdf69e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fbf161347549638648c508bb1d80971dc4c45fad86ac00f7e19fea293ec19bc\"" Sep 10 05:23:44.669107 containerd[1592]: 
time="2025-09-10T05:23:44.668234342Z" level=info msg="StartContainer for \"0fbf161347549638648c508bb1d80971dc4c45fad86ac00f7e19fea293ec19bc\"" Sep 10 05:23:44.669283 containerd[1592]: time="2025-09-10T05:23:44.669258699Z" level=info msg="connecting to shim 0fbf161347549638648c508bb1d80971dc4c45fad86ac00f7e19fea293ec19bc" address="unix:///run/containerd/s/f6ca6bccf58b24cc12968b6e94802147d5653b91217ed13b7dc93f60a61cd3ed" protocol=ttrpc version=3 Sep 10 05:23:44.689766 systemd[1]: Started cri-containerd-b35cd0532d14d2c3c6ac30cb2e86097ff444b761b3f1f91a04952e0bbe9be1a3.scope - libcontainer container b35cd0532d14d2c3c6ac30cb2e86097ff444b761b3f1f91a04952e0bbe9be1a3. Sep 10 05:23:44.693157 systemd[1]: Started cri-containerd-0fbf161347549638648c508bb1d80971dc4c45fad86ac00f7e19fea293ec19bc.scope - libcontainer container 0fbf161347549638648c508bb1d80971dc4c45fad86ac00f7e19fea293ec19bc. Sep 10 05:23:44.740630 containerd[1592]: time="2025-09-10T05:23:44.740097910Z" level=info msg="StartContainer for \"b35cd0532d14d2c3c6ac30cb2e86097ff444b761b3f1f91a04952e0bbe9be1a3\" returns successfully" Sep 10 05:23:44.745606 containerd[1592]: time="2025-09-10T05:23:44.745445244Z" level=info msg="StartContainer for \"0fbf161347549638648c508bb1d80971dc4c45fad86ac00f7e19fea293ec19bc\" returns successfully" Sep 10 05:23:44.748507 kubelet[2763]: E0910 05:23:44.748334 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:44.758529 kubelet[2763]: I0910 05:23:44.758384 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ph2qk" podStartSLOduration=26.758366711 podStartE2EDuration="26.758366711s" podCreationTimestamp="2025-09-10 05:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:23:44.757862633 +0000 UTC m=+31.232960092" watchObservedRunningTime="2025-09-10 05:23:44.758366711 +0000 UTC m=+31.233464190" Sep 10 05:23:45.466481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499634235.mount: Deactivated successfully. 
Sep 10 05:23:45.750687 kubelet[2763]: E0910 05:23:45.750432 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:45.750687 kubelet[2763]: E0910 05:23:45.750453 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:45.773191 kubelet[2763]: I0910 05:23:45.773129 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rwhx5" podStartSLOduration=27.773098383 podStartE2EDuration="27.773098383s" podCreationTimestamp="2025-09-10 05:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:23:45.761237474 +0000 UTC m=+32.236334943" watchObservedRunningTime="2025-09-10 05:23:45.773098383 +0000 UTC m=+32.248195832" Sep 10 05:23:46.778393 kubelet[2763]: E0910 05:23:46.778327 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:47.072136 systemd[1]: Started sshd@10-10.0.0.44:22-10.0.0.1:41846.service - OpenSSH per-connection server daemon (10.0.0.1:41846). Sep 10 05:23:47.124753 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 41846 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:23:47.126399 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:23:47.130685 systemd-logind[1569]: New session 11 of user core. Sep 10 05:23:47.143713 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 05:23:47.296082 sshd[4116]: Connection closed by 10.0.0.1 port 41846 Sep 10 05:23:47.296459 sshd-session[4113]: pam_unix(sshd:session): session closed for user core Sep 10 05:23:47.299796 systemd[1]: sshd@10-10.0.0.44:22-10.0.0.1:41846.service: Deactivated successfully. Sep 10 05:23:47.301884 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 05:23:47.303969 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Sep 10 05:23:47.305093 systemd-logind[1569]: Removed session 11. Sep 10 05:23:48.495965 kubelet[2763]: E0910 05:23:48.495900 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:48.781754 kubelet[2763]: E0910 05:23:48.781628 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:23:52.308887 systemd[1]: Started sshd@11-10.0.0.44:22-10.0.0.1:51928.service - OpenSSH per-connection server daemon (10.0.0.1:51928). Sep 10 05:23:52.362880 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 51928 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:23:52.364680 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:23:52.369325 systemd-logind[1569]: New session 12 of user core. Sep 10 05:23:52.376825 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 10 05:23:52.489022 sshd[4140]: Connection closed by 10.0.0.1 port 51928 Sep 10 05:23:52.489404 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Sep 10 05:23:52.493134 systemd[1]: sshd@11-10.0.0.44:22-10.0.0.1:51928.service: Deactivated successfully. Sep 10 05:23:52.495516 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 05:23:52.497572 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit. Sep 10 05:23:52.499192 systemd-logind[1569]: Removed session 12. Sep 10 05:23:57.507456 systemd[1]: Started sshd@12-10.0.0.44:22-10.0.0.1:51940.service - OpenSSH per-connection server daemon (10.0.0.1:51940). Sep 10 05:23:57.575371 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 51940 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:23:57.576959 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:23:57.581524 systemd-logind[1569]: New session 13 of user core. Sep 10 05:23:57.588717 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 05:23:57.702658 sshd[4162]: Connection closed by 10.0.0.1 port 51940 Sep 10 05:23:57.703024 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Sep 10 05:23:57.712702 systemd[1]: sshd@12-10.0.0.44:22-10.0.0.1:51940.service: Deactivated successfully. Sep 10 05:23:57.714785 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 05:23:57.715852 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. Sep 10 05:23:57.720231 systemd[1]: Started sshd@13-10.0.0.44:22-10.0.0.1:51944.service - OpenSSH per-connection server daemon (10.0.0.1:51944). Sep 10 05:23:57.721238 systemd-logind[1569]: Removed session 13. Sep 10 05:23:57.785517 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 51944 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:23:57.787479 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:23:57.792856 systemd-logind[1569]: New session 14 of user core. Sep 10 05:23:57.803744 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 05:23:57.957183 sshd[4180]: Connection closed by 10.0.0.1 port 51944 Sep 10 05:23:57.957863 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Sep 10 05:23:57.967488 systemd[1]: sshd@13-10.0.0.44:22-10.0.0.1:51944.service: Deactivated successfully. Sep 10 05:23:57.969558 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 05:23:57.970761 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit. Sep 10 05:23:57.975430 systemd-logind[1569]: Removed session 14. Sep 10 05:23:57.978867 systemd[1]: Started sshd@14-10.0.0.44:22-10.0.0.1:51956.service - OpenSSH per-connection server daemon (10.0.0.1:51956). Sep 10 05:23:58.041784 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 51956 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:23:58.043683 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:23:58.048362 systemd-logind[1569]: New session 15 of user core. Sep 10 05:23:58.056731 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 10 05:23:58.170990 sshd[4196]: Connection closed by 10.0.0.1 port 51956 Sep 10 05:23:58.171455 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Sep 10 05:23:58.176496 systemd[1]: sshd@14-10.0.0.44:22-10.0.0.1:51956.service: Deactivated successfully. Sep 10 05:23:58.178424 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 05:23:58.179343 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit. Sep 10 05:23:58.180717 systemd-logind[1569]: Removed session 15. Sep 10 05:24:03.194101 systemd[1]: Started sshd@15-10.0.0.44:22-10.0.0.1:44228.service - OpenSSH per-connection server daemon (10.0.0.1:44228). Sep 10 05:24:03.255421 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 44228 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:03.256721 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:03.260678 systemd-logind[1569]: New session 16 of user core. Sep 10 05:24:03.270713 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 05:24:03.377752 sshd[4214]: Connection closed by 10.0.0.1 port 44228 Sep 10 05:24:03.378225 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:03.382159 systemd[1]: sshd@15-10.0.0.44:22-10.0.0.1:44228.service: Deactivated successfully. Sep 10 05:24:03.383928 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 05:24:03.384727 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit. Sep 10 05:24:03.385716 systemd-logind[1569]: Removed session 16. Sep 10 05:24:08.394428 systemd[1]: Started sshd@16-10.0.0.44:22-10.0.0.1:44242.service - OpenSSH per-connection server daemon (10.0.0.1:44242). Sep 10 05:24:08.459524 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 44242 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:08.461305 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:08.466380 systemd-logind[1569]: New session 17 of user core. Sep 10 05:24:08.479849 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 05:24:08.599711 sshd[4231]: Connection closed by 10.0.0.1 port 44242 Sep 10 05:24:08.600267 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:08.609447 systemd[1]: sshd@16-10.0.0.44:22-10.0.0.1:44242.service: Deactivated successfully. Sep 10 05:24:08.611383 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 05:24:08.612224 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit. Sep 10 05:24:08.615119 systemd[1]: Started sshd@17-10.0.0.44:22-10.0.0.1:44246.service - OpenSSH per-connection server daemon (10.0.0.1:44246). Sep 10 05:24:08.615806 systemd-logind[1569]: Removed session 17. Sep 10 05:24:08.665771 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 44246 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:08.667317 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:08.672065 systemd-logind[1569]: New session 18 of user core. Sep 10 05:24:08.688835 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 10 05:24:08.886414 sshd[4247]: Connection closed by 10.0.0.1 port 44246 Sep 10 05:24:08.886931 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:08.899848 systemd[1]: sshd@17-10.0.0.44:22-10.0.0.1:44246.service: Deactivated successfully. Sep 10 05:24:08.901829 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 05:24:08.902651 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. Sep 10 05:24:08.905636 systemd[1]: Started sshd@18-10.0.0.44:22-10.0.0.1:44258.service - OpenSSH per-connection server daemon (10.0.0.1:44258). Sep 10 05:24:08.906287 systemd-logind[1569]: Removed session 18. Sep 10 05:24:08.976256 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 44258 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:08.978115 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:08.983049 systemd-logind[1569]: New session 19 of user core. Sep 10 05:24:08.994740 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 05:24:10.495123 sshd[4262]: Connection closed by 10.0.0.1 port 44258 Sep 10 05:24:10.495444 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:10.506327 systemd[1]: sshd@18-10.0.0.44:22-10.0.0.1:44258.service: Deactivated successfully. Sep 10 05:24:10.508809 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 05:24:10.509659 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Sep 10 05:24:10.513071 systemd[1]: Started sshd@19-10.0.0.44:22-10.0.0.1:46588.service - OpenSSH per-connection server daemon (10.0.0.1:46588). Sep 10 05:24:10.514209 systemd-logind[1569]: Removed session 19. Sep 10 05:24:10.569622 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 46588 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:10.571116 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:10.575607 systemd-logind[1569]: New session 20 of user core. Sep 10 05:24:10.586707 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 05:24:10.805243 sshd[4285]: Connection closed by 10.0.0.1 port 46588 Sep 10 05:24:10.805847 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:10.815312 systemd[1]: sshd@19-10.0.0.44:22-10.0.0.1:46588.service: Deactivated successfully. Sep 10 05:24:10.817355 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 05:24:10.818437 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit. Sep 10 05:24:10.821279 systemd[1]: Started sshd@20-10.0.0.44:22-10.0.0.1:46598.service - OpenSSH per-connection server daemon (10.0.0.1:46598). Sep 10 05:24:10.821966 systemd-logind[1569]: Removed session 20. Sep 10 05:24:10.883658 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 46598 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:10.885415 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:10.890877 systemd-logind[1569]: New session 21 of user core. Sep 10 05:24:10.900720 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 10 05:24:11.013172 sshd[4299]: Connection closed by 10.0.0.1 port 46598 Sep 10 05:24:11.013508 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:11.017280 systemd[1]: sshd@20-10.0.0.44:22-10.0.0.1:46598.service: Deactivated successfully. Sep 10 05:24:11.019162 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 05:24:11.020032 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit. Sep 10 05:24:11.021143 systemd-logind[1569]: Removed session 21. Sep 10 05:24:16.025960 systemd[1]: Started sshd@21-10.0.0.44:22-10.0.0.1:46606.service - OpenSSH per-connection server daemon (10.0.0.1:46606). Sep 10 05:24:16.082596 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 46606 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:16.084464 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:16.089667 systemd-logind[1569]: New session 22 of user core. Sep 10 05:24:16.096745 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 10 05:24:16.203965 sshd[4317]: Connection closed by 10.0.0.1 port 46606 Sep 10 05:24:16.204302 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:16.208042 systemd[1]: sshd@21-10.0.0.44:22-10.0.0.1:46606.service: Deactivated successfully. Sep 10 05:24:16.210065 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 05:24:16.210868 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit. Sep 10 05:24:16.212011 systemd-logind[1569]: Removed session 22. Sep 10 05:24:21.228510 systemd[1]: Started sshd@22-10.0.0.44:22-10.0.0.1:37416.service - OpenSSH per-connection server daemon (10.0.0.1:37416). Sep 10 05:24:21.279242 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 37416 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:21.280883 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:21.285405 systemd-logind[1569]: New session 23 of user core. Sep 10 05:24:21.295786 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 05:24:21.413224 sshd[4340]: Connection closed by 10.0.0.1 port 37416 Sep 10 05:24:21.413674 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:21.418978 systemd[1]: sshd@22-10.0.0.44:22-10.0.0.1:37416.service: Deactivated successfully. Sep 10 05:24:21.421324 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 05:24:21.422272 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit. Sep 10 05:24:21.424075 systemd-logind[1569]: Removed session 23. Sep 10 05:24:23.623799 kubelet[2763]: E0910 05:24:23.623759 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:26.425383 systemd[1]: Started sshd@23-10.0.0.44:22-10.0.0.1:37424.service - OpenSSH per-connection server daemon (10.0.0.1:37424). Sep 10 05:24:26.485869 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 37424 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:26.487712 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:26.492258 systemd-logind[1569]: New session 24 of user core. Sep 10 05:24:26.504759 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 10 05:24:26.609700 sshd[4356]: Connection closed by 10.0.0.1 port 37424 Sep 10 05:24:26.610060 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:26.614244 systemd[1]: sshd@23-10.0.0.44:22-10.0.0.1:37424.service: Deactivated successfully. Sep 10 05:24:26.616045 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 05:24:26.616754 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit. Sep 10 05:24:26.617673 systemd-logind[1569]: Removed session 24. Sep 10 05:24:31.621780 kubelet[2763]: E0910 05:24:31.621693 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:31.621780 kubelet[2763]: E0910 05:24:31.621750 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:31.627914 systemd[1]: Started sshd@24-10.0.0.44:22-10.0.0.1:45042.service - OpenSSH per-connection server daemon (10.0.0.1:45042). Sep 10 05:24:31.680599 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 45042 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:31.681872 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:31.685817 systemd-logind[1569]: New session 25 of user core. Sep 10 05:24:31.692709 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 10 05:24:31.796871 sshd[4372]: Connection closed by 10.0.0.1 port 45042 Sep 10 05:24:31.797303 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:31.809964 systemd[1]: sshd@24-10.0.0.44:22-10.0.0.1:45042.service: Deactivated successfully. Sep 10 05:24:31.811827 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 05:24:31.812694 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit. Sep 10 05:24:31.815302 systemd[1]: Started sshd@25-10.0.0.44:22-10.0.0.1:45048.service - OpenSSH per-connection server daemon (10.0.0.1:45048). Sep 10 05:24:31.816388 systemd-logind[1569]: Removed session 25. Sep 10 05:24:31.865628 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 45048 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:31.866934 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:31.870940 systemd-logind[1569]: New session 26 of user core. Sep 10 05:24:31.886717 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 10 05:24:33.250535 containerd[1592]: time="2025-09-10T05:24:33.250490342Z" level=info msg="StopContainer for \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" with timeout 30 (s)" Sep 10 05:24:33.258198 containerd[1592]: time="2025-09-10T05:24:33.258162736Z" level=info msg="Stop container \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" with signal terminated" Sep 10 05:24:33.269813 systemd[1]: cri-containerd-9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432.scope: Deactivated successfully. 
Sep 10 05:24:33.271488 containerd[1592]: time="2025-09-10T05:24:33.271445444Z" level=info msg="received exit event container_id:\"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" id:\"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" pid:3348 exited_at:{seconds:1757481873 nanos:270490840}" Sep 10 05:24:33.271884 containerd[1592]: time="2025-09-10T05:24:33.271856059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" id:\"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" pid:3348 exited_at:{seconds:1757481873 nanos:270490840}" Sep 10 05:24:33.280106 containerd[1592]: time="2025-09-10T05:24:33.280064608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" id:\"3d616a955e14757a4e1b021e29b0987d202b09f83fc5cd8ea504bb781fdd0bf1\" pid:4411 exited_at:{seconds:1757481873 nanos:279794913}" Sep 10 05:24:33.280783 containerd[1592]: time="2025-09-10T05:24:33.280750949Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 05:24:33.284607 containerd[1592]: time="2025-09-10T05:24:33.284568957Z" level=info msg="StopContainer for \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" with timeout 2 (s)" Sep 10 05:24:33.284922 containerd[1592]: time="2025-09-10T05:24:33.284887646Z" level=info msg="Stop container \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" with signal terminated" Sep 10 05:24:33.291317 systemd-networkd[1496]: lxc_health: Link DOWN Sep 10 05:24:33.291329 systemd-networkd[1496]: lxc_health: Lost carrier Sep 10 05:24:33.296959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432-rootfs.mount: Deactivated successfully. Sep 10 05:24:33.313228 systemd[1]: cri-containerd-297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3.scope: Deactivated successfully. Sep 10 05:24:33.313733 systemd[1]: cri-containerd-297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3.scope: Consumed 6.339s CPU time, 124M memory peak, 236K read from disk, 13.3M written to disk. 
Sep 10 05:24:33.315003 containerd[1592]: time="2025-09-10T05:24:33.314013055Z" level=info msg="received exit event container_id:\"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" id:\"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" pid:3419 exited_at:{seconds:1757481873 nanos:313702461}" Sep 10 05:24:33.315003 containerd[1592]: time="2025-09-10T05:24:33.314030729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" id:\"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" pid:3419 exited_at:{seconds:1757481873 nanos:313702461}" Sep 10 05:24:33.321507 containerd[1592]: time="2025-09-10T05:24:33.321476650Z" level=info msg="StopContainer for \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" returns successfully" Sep 10 05:24:33.324110 containerd[1592]: time="2025-09-10T05:24:33.324067232Z" level=info msg="StopPodSandbox for \"8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d\"" Sep 10 05:24:33.324246 containerd[1592]: time="2025-09-10T05:24:33.324143357Z" level=info msg="Container to stop \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 05:24:33.334276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3-rootfs.mount: Deactivated successfully. Sep 10 05:24:33.335198 systemd[1]: cri-containerd-8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d.scope: Deactivated successfully. Sep 10 05:24:33.335705 containerd[1592]: time="2025-09-10T05:24:33.335571019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d\" id:\"8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d\" pid:2959 exit_status:137 exited_at:{seconds:1757481873 nanos:335023492}" Sep 10 05:24:33.347893 containerd[1592]: time="2025-09-10T05:24:33.347847704Z" level=info msg="StopContainer for \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" returns successfully" Sep 10 05:24:33.350084 containerd[1592]: time="2025-09-10T05:24:33.349987995Z" level=info msg="StopPodSandbox for \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\"" Sep 10 05:24:33.350084 containerd[1592]: time="2025-09-10T05:24:33.350077836Z" level=info msg="Container to stop \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 05:24:33.350156 containerd[1592]: time="2025-09-10T05:24:33.350090370Z" level=info msg="Container to stop \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 05:24:33.350156 containerd[1592]: time="2025-09-10T05:24:33.350099308Z" level=info msg="Container to stop \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 05:24:33.350156 containerd[1592]: time="2025-09-10T05:24:33.350111420Z" level=info msg="Container to stop \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 05:24:33.350156 containerd[1592]: time="2025-09-10T05:24:33.350120048Z" level=info msg="Container to stop 
\"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 05:24:33.356943 systemd[1]: cri-containerd-8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759.scope: Deactivated successfully. Sep 10 05:24:33.366734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d-rootfs.mount: Deactivated successfully. Sep 10 05:24:33.371461 containerd[1592]: time="2025-09-10T05:24:33.371381936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" id:\"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" pid:2973 exit_status:137 exited_at:{seconds:1757481873 nanos:357267508}" Sep 10 05:24:33.371710 containerd[1592]: time="2025-09-10T05:24:33.371683903Z" level=info msg="shim disconnected" id=8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d namespace=k8s.io Sep 10 05:24:33.371797 containerd[1592]: time="2025-09-10T05:24:33.371779315Z" level=warning msg="cleaning up after shim disconnected" id=8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d namespace=k8s.io Sep 10 05:24:33.373254 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d-shm.mount: Deactivated successfully. Sep 10 05:24:33.386591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759-rootfs.mount: Deactivated successfully. Sep 10 05:24:33.395925 containerd[1592]: time="2025-09-10T05:24:33.371838508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 05:24:33.396113 containerd[1592]: time="2025-09-10T05:24:33.377931985Z" level=info msg="TearDown network for sandbox \"8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d\" successfully" Sep 10 05:24:33.396113 containerd[1592]: time="2025-09-10T05:24:33.395992054Z" level=info msg="StopPodSandbox for \"8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d\" returns successfully" Sep 10 05:24:33.396113 containerd[1592]: time="2025-09-10T05:24:33.387925566Z" level=info msg="received exit event sandbox_id:\"8d45a3160d19d0e3c567746f8695f042647d00554de72383c373de6f1adea54d\" exit_status:137 exited_at:{seconds:1757481873 nanos:335023492}" Sep 10 05:24:33.473011 containerd[1592]: time="2025-09-10T05:24:33.472957440Z" level=info msg="received exit event sandbox_id:\"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" exit_status:137 exited_at:{seconds:1757481873 nanos:357267508}" Sep 10 05:24:33.473321 containerd[1592]: time="2025-09-10T05:24:33.473232386Z" level=info msg="shim disconnected" id=8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759 namespace=k8s.io Sep 10 05:24:33.473321 containerd[1592]: time="2025-09-10T05:24:33.473266721Z" level=warning msg="cleaning up after shim disconnected" id=8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759 namespace=k8s.io Sep 10 05:24:33.473321 containerd[1592]: time="2025-09-10T05:24:33.473276780Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 05:24:33.473819 containerd[1592]: time="2025-09-10T05:24:33.473785423Z" level=info msg="TearDown network for sandbox \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" successfully" Sep 10 05:24:33.473905 containerd[1592]: time="2025-09-10T05:24:33.473820259Z" 
level=info msg="StopPodSandbox for \"8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759\" returns successfully" Sep 10 05:24:33.571165 kubelet[2763]: I0910 05:24:33.570889 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cni-path\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.571165 kubelet[2763]: I0910 05:24:33.570930 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-bpf-maps\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.571165 kubelet[2763]: I0910 05:24:33.570964 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adf6ea14-3192-4e33-8562-1c912a463a9c-cilium-config-path\") pod \"adf6ea14-3192-4e33-8562-1c912a463a9c\" (UID: \"adf6ea14-3192-4e33-8562-1c912a463a9c\") " Sep 10 05:24:33.571165 kubelet[2763]: I0910 05:24:33.570984 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-host-proc-sys-net\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.571165 kubelet[2763]: I0910 05:24:33.571007 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q2tm\" (UniqueName: \"kubernetes.io/projected/6b29174e-7e3e-438f-8c0a-fab5f153bb41-kube-api-access-2q2tm\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.571165 kubelet[2763]: I0910 05:24:33.571022 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-lib-modules\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.571964 kubelet[2763]: I0910 05:24:33.571038 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b29174e-7e3e-438f-8c0a-fab5f153bb41-clustermesh-secrets\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.571964 kubelet[2763]: I0910 05:24:33.571054 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-host-proc-sys-kernel\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.571964 kubelet[2763]: I0910 05:24:33.571069 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-cgroup\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.571964 kubelet[2763]: I0910 05:24:33.571083 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b29174e-7e3e-438f-8c0a-fab5f153bb41-hubble-tls\") pod 
\"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.571964 kubelet[2763]: I0910 05:24:33.571096 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnx92\" (UniqueName: \"kubernetes.io/projected/adf6ea14-3192-4e33-8562-1c912a463a9c-kube-api-access-qnx92\") pod \"adf6ea14-3192-4e33-8562-1c912a463a9c\" (UID: \"adf6ea14-3192-4e33-8562-1c912a463a9c\") " Sep 10 05:24:33.571964 kubelet[2763]: I0910 05:24:33.571111 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-config-path\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.572100 kubelet[2763]: I0910 05:24:33.571126 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-hostproc\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.572100 kubelet[2763]: I0910 05:24:33.571140 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-etc-cni-netd\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.572100 kubelet[2763]: I0910 05:24:33.571154 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-xtables-lock\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.572100 kubelet[2763]: I0910 05:24:33.571168 2763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-run\") pod \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\" (UID: \"6b29174e-7e3e-438f-8c0a-fab5f153bb41\") " Sep 10 05:24:33.572100 kubelet[2763]: I0910 05:24:33.571036 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cni-path" (OuterVolumeSpecName: "cni-path") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 05:24:33.572100 kubelet[2763]: I0910 05:24:33.571057 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 05:24:33.572774 kubelet[2763]: I0910 05:24:33.571212 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 05:24:33.572774 kubelet[2763]: I0910 05:24:33.571234 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 05:24:33.572774 kubelet[2763]: I0910 05:24:33.571290 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 05:24:33.572774 kubelet[2763]: I0910 05:24:33.571342 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 05:24:33.572774 kubelet[2763]: I0910 05:24:33.572209 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 05:24:33.572895 kubelet[2763]: I0910 05:24:33.572305 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-hostproc" (OuterVolumeSpecName: "hostproc") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 05:24:33.572895 kubelet[2763]: I0910 05:24:33.572548 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 05:24:33.572895 kubelet[2763]: I0910 05:24:33.572596 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 05:24:33.575624 kubelet[2763]: I0910 05:24:33.575312 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b29174e-7e3e-438f-8c0a-fab5f153bb41-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 05:24:33.575957 kubelet[2763]: I0910 05:24:33.575923 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b29174e-7e3e-438f-8c0a-fab5f153bb41-kube-api-access-2q2tm" (OuterVolumeSpecName: "kube-api-access-2q2tm") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "kube-api-access-2q2tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 05:24:33.576848 kubelet[2763]: I0910 05:24:33.576817 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf6ea14-3192-4e33-8562-1c912a463a9c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "adf6ea14-3192-4e33-8562-1c912a463a9c" (UID: "adf6ea14-3192-4e33-8562-1c912a463a9c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 05:24:33.577284 kubelet[2763]: I0910 05:24:33.577255 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b29174e-7e3e-438f-8c0a-fab5f153bb41-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 05:24:33.577684 kubelet[2763]: I0910 05:24:33.577649 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf6ea14-3192-4e33-8562-1c912a463a9c-kube-api-access-qnx92" (OuterVolumeSpecName: "kube-api-access-qnx92") pod "adf6ea14-3192-4e33-8562-1c912a463a9c" (UID: "adf6ea14-3192-4e33-8562-1c912a463a9c"). InnerVolumeSpecName "kube-api-access-qnx92". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 05:24:33.578350 kubelet[2763]: I0910 05:24:33.578319 2763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6b29174e-7e3e-438f-8c0a-fab5f153bb41" (UID: "6b29174e-7e3e-438f-8c0a-fab5f153bb41"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 05:24:33.630970 systemd[1]: Removed slice kubepods-burstable-pod6b29174e_7e3e_438f_8c0a_fab5f153bb41.slice - libcontainer container kubepods-burstable-pod6b29174e_7e3e_438f_8c0a_fab5f153bb41.slice. Sep 10 05:24:33.631059 systemd[1]: kubepods-burstable-pod6b29174e_7e3e_438f_8c0a_fab5f153bb41.slice: Consumed 6.445s CPU time, 124.3M memory peak, 244K read from disk, 13.3M written to disk. Sep 10 05:24:33.632721 systemd[1]: Removed slice kubepods-besteffort-podadf6ea14_3192_4e33_8562_1c912a463a9c.slice - libcontainer container kubepods-besteffort-podadf6ea14_3192_4e33_8562_1c912a463a9c.slice. 
Sep 10 05:24:33.672222 kubelet[2763]: I0910 05:24:33.672169 2763 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672222 kubelet[2763]: I0910 05:24:33.672200 2763 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672222 kubelet[2763]: I0910 05:24:33.672212 2763 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672399 kubelet[2763]: I0910 05:24:33.672237 2763 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672399 kubelet[2763]: I0910 05:24:33.672248 2763 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672399 kubelet[2763]: I0910 05:24:33.672260 2763 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adf6ea14-3192-4e33-8562-1c912a463a9c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672399 kubelet[2763]: I0910 05:24:33.672271 2763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q2tm\" (UniqueName: \"kubernetes.io/projected/6b29174e-7e3e-438f-8c0a-fab5f153bb41-kube-api-access-2q2tm\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672399 kubelet[2763]: I0910 05:24:33.672283 2763 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672399 kubelet[2763]: I0910 05:24:33.672293 2763 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672399 kubelet[2763]: I0910 05:24:33.672305 2763 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672399 kubelet[2763]: I0910 05:24:33.672315 2763 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b29174e-7e3e-438f-8c0a-fab5f153bb41-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672604 kubelet[2763]: I0910 05:24:33.672326 2763 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b29174e-7e3e-438f-8c0a-fab5f153bb41-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672604 kubelet[2763]: I0910 05:24:33.672336 2763 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b29174e-7e3e-438f-8c0a-fab5f153bb41-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672604 
kubelet[2763]: I0910 05:24:33.672346 2763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnx92\" (UniqueName: \"kubernetes.io/projected/adf6ea14-3192-4e33-8562-1c912a463a9c-kube-api-access-qnx92\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672604 kubelet[2763]: I0910 05:24:33.672357 2763 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.672604 kubelet[2763]: I0910 05:24:33.672367 2763 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b29174e-7e3e-438f-8c0a-fab5f153bb41-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 05:24:33.679758 kubelet[2763]: E0910 05:24:33.679727 2763 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 05:24:33.868806 kubelet[2763]: I0910 05:24:33.868694 2763 scope.go:117] "RemoveContainer" containerID="9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432" Sep 10 05:24:33.871613 containerd[1592]: time="2025-09-10T05:24:33.871549982Z" level=info msg="RemoveContainer for \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\"" Sep 10 05:24:33.878929 containerd[1592]: time="2025-09-10T05:24:33.878879492Z" level=info msg="RemoveContainer for \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" returns successfully" Sep 10 05:24:33.879517 kubelet[2763]: I0910 05:24:33.879489 2763 scope.go:117] "RemoveContainer" containerID="9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432" Sep 10 05:24:33.879790 containerd[1592]: time="2025-09-10T05:24:33.879727863Z" level=error msg="ContainerStatus for \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\": not found" Sep 10 05:24:33.880659 kubelet[2763]: E0910 05:24:33.880384 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\": not found" containerID="9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432" Sep 10 05:24:33.880659 kubelet[2763]: I0910 05:24:33.880430 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432"} err="failed to get container status \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\": rpc error: code = NotFound desc = an error occurred when try to find container \"9617a5dd38c2dd0c7b233f6b97d6bba67b54e8b8417accb34db511919243a432\": not found" Sep 10 05:24:33.880659 kubelet[2763]: I0910 05:24:33.880518 2763 scope.go:117] "RemoveContainer" containerID="297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3" Sep 10 05:24:33.882938 containerd[1592]: time="2025-09-10T05:24:33.882897661Z" level=info msg="RemoveContainer for \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\"" Sep 10 05:24:33.888083 containerd[1592]: time="2025-09-10T05:24:33.888044729Z" level=info msg="RemoveContainer for \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" 
returns successfully" Sep 10 05:24:33.888272 kubelet[2763]: I0910 05:24:33.888244 2763 scope.go:117] "RemoveContainer" containerID="f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7" Sep 10 05:24:33.890651 containerd[1592]: time="2025-09-10T05:24:33.890611064Z" level=info msg="RemoveContainer for \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\"" Sep 10 05:24:33.895197 containerd[1592]: time="2025-09-10T05:24:33.895156081Z" level=info msg="RemoveContainer for \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\" returns successfully" Sep 10 05:24:33.895406 kubelet[2763]: I0910 05:24:33.895376 2763 scope.go:117] "RemoveContainer" containerID="4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d" Sep 10 05:24:33.897433 containerd[1592]: time="2025-09-10T05:24:33.897397906Z" level=info msg="RemoveContainer for \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\"" Sep 10 05:24:33.908271 containerd[1592]: time="2025-09-10T05:24:33.908240260Z" level=info msg="RemoveContainer for \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\" returns successfully" Sep 10 05:24:33.908464 kubelet[2763]: I0910 05:24:33.908433 2763 scope.go:117] "RemoveContainer" containerID="bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79" Sep 10 05:24:33.909904 containerd[1592]: time="2025-09-10T05:24:33.909877399Z" level=info msg="RemoveContainer for \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\"" Sep 10 05:24:33.913302 containerd[1592]: time="2025-09-10T05:24:33.913260054Z" level=info msg="RemoveContainer for \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\" returns successfully" Sep 10 05:24:33.913466 kubelet[2763]: I0910 05:24:33.913376 2763 scope.go:117] "RemoveContainer" containerID="463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25" Sep 10 05:24:33.914517 containerd[1592]: time="2025-09-10T05:24:33.914494172Z" level=info msg="RemoveContainer for \"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\"" Sep 10 05:24:33.917688 containerd[1592]: time="2025-09-10T05:24:33.917663730Z" level=info msg="RemoveContainer for \"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\" returns successfully" Sep 10 05:24:33.917891 kubelet[2763]: I0910 05:24:33.917859 2763 scope.go:117] "RemoveContainer" containerID="297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3" Sep 10 05:24:33.918103 containerd[1592]: time="2025-09-10T05:24:33.918067132Z" level=error msg="ContainerStatus for \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\": not found" Sep 10 05:24:33.918270 kubelet[2763]: E0910 05:24:33.918230 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\": not found" containerID="297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3" Sep 10 05:24:33.918316 kubelet[2763]: I0910 05:24:33.918270 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3"} err="failed to get container status \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\": rpc error: code = 
NotFound desc = an error occurred when try to find container \"297fadc0ae5f21aa956817d720e9967ace0a05af6de7d0be7b72801b5750c7d3\": not found" Sep 10 05:24:33.918316 kubelet[2763]: I0910 05:24:33.918297 2763 scope.go:117] "RemoveContainer" containerID="f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7" Sep 10 05:24:33.918489 containerd[1592]: time="2025-09-10T05:24:33.918462417Z" level=error msg="ContainerStatus for \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\": not found" Sep 10 05:24:33.918569 kubelet[2763]: E0910 05:24:33.918542 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\": not found" containerID="f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7" Sep 10 05:24:33.918569 kubelet[2763]: I0910 05:24:33.918559 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7"} err="failed to get container status \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f11ad0def8d339d119bffd163fc4784a1ad93c95c4f6b0f9e577527e49b5b3c7\": not found" Sep 10 05:24:33.918569 kubelet[2763]: I0910 05:24:33.918569 2763 scope.go:117] "RemoveContainer" containerID="4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d" Sep 10 05:24:33.918845 containerd[1592]: time="2025-09-10T05:24:33.918802096Z" level=error msg="ContainerStatus for \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\": not found" Sep 10 05:24:33.918950 kubelet[2763]: E0910 05:24:33.918927 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\": not found" containerID="4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d" Sep 10 05:24:33.918950 kubelet[2763]: I0910 05:24:33.918945 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d"} err="failed to get container status \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4092499c6c45b61f8913068238c541b7bab61bf36265e879499bf82545989c2d\": not found" Sep 10 05:24:33.919072 kubelet[2763]: I0910 05:24:33.918957 2763 scope.go:117] "RemoveContainer" containerID="bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79" Sep 10 05:24:33.919114 containerd[1592]: time="2025-09-10T05:24:33.919085168Z" level=error msg="ContainerStatus for \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\": not found" Sep 10 05:24:33.919236 kubelet[2763]: E0910 05:24:33.919204 2763 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\": not found" containerID="bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79" Sep 10 05:24:33.919236 kubelet[2763]: I0910 05:24:33.919237 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79"} err="failed to get container status \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc20b93e3e24041d1b3f7359216fa9fc334823ac70752a446d92de64fbf92e79\": not found" Sep 10 05:24:33.919305 kubelet[2763]: I0910 05:24:33.919253 2763 scope.go:117] "RemoveContainer" containerID="463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25" Sep 10 05:24:33.919434 containerd[1592]: time="2025-09-10T05:24:33.919404097Z" level=error msg="ContainerStatus for \"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\": not found" Sep 10 05:24:33.919528 kubelet[2763]: E0910 05:24:33.919506 2763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\": not found" containerID="463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25" Sep 10 05:24:33.919561 kubelet[2763]: I0910 05:24:33.919526 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25"} err="failed to get container status \"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\": rpc error: code = NotFound desc = an error occurred when try to find container \"463353539d7a199a2de88cdcc075d98ac6fd77bba4610d6b74718cba09e3fa25\": not found" Sep 10 05:24:34.297169 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e037d03f4473158ff40a57fb9ab198f2c2e0ca62de84379f443698a66c16759-shm.mount: Deactivated successfully. Sep 10 05:24:34.297303 systemd[1]: var-lib-kubelet-pods-adf6ea14\x2d3192\x2d4e33\x2d8562\x2d1c912a463a9c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqnx92.mount: Deactivated successfully. Sep 10 05:24:34.297402 systemd[1]: var-lib-kubelet-pods-6b29174e\x2d7e3e\x2d438f\x2d8c0a\x2dfab5f153bb41-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2q2tm.mount: Deactivated successfully. Sep 10 05:24:34.297494 systemd[1]: var-lib-kubelet-pods-6b29174e\x2d7e3e\x2d438f\x2d8c0a\x2dfab5f153bb41-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 05:24:34.297611 systemd[1]: var-lib-kubelet-pods-6b29174e\x2d7e3e\x2d438f\x2d8c0a\x2dfab5f153bb41-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 10 05:24:34.627960 kubelet[2763]: I0910 05:24:34.627832 2763 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-10T05:24:34Z","lastTransitionTime":"2025-09-10T05:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 10 05:24:35.218275 sshd[4388]: Connection closed by 10.0.0.1 port 45048 Sep 10 05:24:35.218796 sshd-session[4385]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:35.231755 systemd[1]: sshd@25-10.0.0.44:22-10.0.0.1:45048.service: Deactivated successfully. Sep 10 05:24:35.233486 systemd[1]: session-26.scope: Deactivated successfully. Sep 10 05:24:35.234166 systemd-logind[1569]: Session 26 logged out. Waiting for processes to exit. Sep 10 05:24:35.237000 systemd[1]: Started sshd@26-10.0.0.44:22-10.0.0.1:45052.service - OpenSSH per-connection server daemon (10.0.0.1:45052). Sep 10 05:24:35.237892 systemd-logind[1569]: Removed session 26. Sep 10 05:24:35.290422 sshd[4542]: Accepted publickey for core from 10.0.0.1 port 45052 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:35.292069 sshd-session[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:35.296368 systemd-logind[1569]: New session 27 of user core. Sep 10 05:24:35.302694 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 10 05:24:35.622625 kubelet[2763]: E0910 05:24:35.621827 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:35.624899 kubelet[2763]: I0910 05:24:35.624866 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b29174e-7e3e-438f-8c0a-fab5f153bb41" path="/var/lib/kubelet/pods/6b29174e-7e3e-438f-8c0a-fab5f153bb41/volumes" Sep 10 05:24:35.626023 kubelet[2763]: I0910 05:24:35.625994 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf6ea14-3192-4e33-8562-1c912a463a9c" path="/var/lib/kubelet/pods/adf6ea14-3192-4e33-8562-1c912a463a9c/volumes" Sep 10 05:24:35.693264 sshd[4545]: Connection closed by 10.0.0.1 port 45052 Sep 10 05:24:35.693793 sshd-session[4542]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:35.704415 systemd[1]: sshd@26-10.0.0.44:22-10.0.0.1:45052.service: Deactivated successfully. 
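The "Node became not ready" entry above carries the full condition the kubelet wrote into the node object as inline JSON: Ready flips to False with reason KubeletNotReady because the CNI plugin is not initialized while the cilium pod is being replaced. A small Go sketch of decoding that payload follows; the struct here is defined ad hoc for the sketch rather than taken from Kubernetes source.

// Sketch only: decode the node condition JSON embedded in the log entry above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from the kubelet entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-10T05:24:34Z","lastTransitionTime":"2025-09-10T05:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("condition %s=%s, reason=%s\n", c.Type, c.Status, c.Reason)
	fmt.Println(c.Message)
}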
Sep 10 05:24:35.707503 kubelet[2763]: E0910 05:24:35.707452 2763 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b29174e-7e3e-438f-8c0a-fab5f153bb41" containerName="mount-cgroup" Sep 10 05:24:35.707503 kubelet[2763]: E0910 05:24:35.707478 2763 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b29174e-7e3e-438f-8c0a-fab5f153bb41" containerName="apply-sysctl-overwrites" Sep 10 05:24:35.707503 kubelet[2763]: E0910 05:24:35.707486 2763 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b29174e-7e3e-438f-8c0a-fab5f153bb41" containerName="clean-cilium-state" Sep 10 05:24:35.707503 kubelet[2763]: E0910 05:24:35.707493 2763 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b29174e-7e3e-438f-8c0a-fab5f153bb41" containerName="cilium-agent" Sep 10 05:24:35.707503 kubelet[2763]: E0910 05:24:35.707500 2763 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b29174e-7e3e-438f-8c0a-fab5f153bb41" containerName="mount-bpf-fs" Sep 10 05:24:35.707503 kubelet[2763]: E0910 05:24:35.707507 2763 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="adf6ea14-3192-4e33-8562-1c912a463a9c" containerName="cilium-operator" Sep 10 05:24:35.708355 kubelet[2763]: I0910 05:24:35.707533 2763 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf6ea14-3192-4e33-8562-1c912a463a9c" containerName="cilium-operator" Sep 10 05:24:35.708355 kubelet[2763]: I0910 05:24:35.707544 2763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b29174e-7e3e-438f-8c0a-fab5f153bb41" containerName="cilium-agent" Sep 10 05:24:35.707778 systemd[1]: session-27.scope: Deactivated successfully. Sep 10 05:24:35.709475 systemd-logind[1569]: Session 27 logged out. Waiting for processes to exit. Sep 10 05:24:35.714881 systemd[1]: Started sshd@27-10.0.0.44:22-10.0.0.1:45064.service - OpenSSH per-connection server daemon (10.0.0.1:45064). Sep 10 05:24:35.716818 systemd-logind[1569]: Removed session 27. Sep 10 05:24:35.723101 systemd[1]: Created slice kubepods-burstable-pod2f8ad9c6_7428_43f3_bab9_22b0e28daf01.slice - libcontainer container kubepods-burstable-pod2f8ad9c6_7428_43f3_bab9_22b0e28daf01.slice. Sep 10 05:24:35.776281 sshd[4557]: Accepted publickey for core from 10.0.0.1 port 45064 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:35.778216 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:35.782504 systemd-logind[1569]: New session 28 of user core. Sep 10 05:24:35.797724 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 10 05:24:35.848821 sshd[4560]: Connection closed by 10.0.0.1 port 45064 Sep 10 05:24:35.849121 sshd-session[4557]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:35.861250 systemd[1]: sshd@27-10.0.0.44:22-10.0.0.1:45064.service: Deactivated successfully. Sep 10 05:24:35.864223 systemd[1]: session-28.scope: Deactivated successfully. Sep 10 05:24:35.865308 systemd-logind[1569]: Session 28 logged out. Waiting for processes to exit. Sep 10 05:24:35.868535 systemd[1]: Started sshd@28-10.0.0.44:22-10.0.0.1:45072.service - OpenSSH per-connection server daemon (10.0.0.1:45072). Sep 10 05:24:35.869103 systemd-logind[1569]: Removed session 28. 
Sep 10 05:24:35.882144 kubelet[2763]: I0910 05:24:35.882074 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-lib-modules\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882144 kubelet[2763]: I0910 05:24:35.882101 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-cilium-run\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882144 kubelet[2763]: I0910 05:24:35.882120 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-host-proc-sys-net\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882144 kubelet[2763]: I0910 05:24:35.882137 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-xtables-lock\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882293 kubelet[2763]: I0910 05:24:35.882151 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-cilium-config-path\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882293 kubelet[2763]: I0910 05:24:35.882165 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-bpf-maps\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882293 kubelet[2763]: I0910 05:24:35.882178 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-hostproc\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882293 kubelet[2763]: I0910 05:24:35.882193 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-cni-path\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882293 kubelet[2763]: I0910 05:24:35.882205 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-clustermesh-secrets\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882293 kubelet[2763]: I0910 05:24:35.882221 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-hubble-tls\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882417 kubelet[2763]: I0910 05:24:35.882263 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-cilium-cgroup\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882417 kubelet[2763]: I0910 05:24:35.882287 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn4nz\" (UniqueName: \"kubernetes.io/projected/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-kube-api-access-qn4nz\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882417 kubelet[2763]: I0910 05:24:35.882327 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-cilium-ipsec-secrets\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882417 kubelet[2763]: I0910 05:24:35.882389 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-etc-cni-netd\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.882417 kubelet[2763]: I0910 05:24:35.882407 2763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f8ad9c6-7428-43f3-bab9-22b0e28daf01-host-proc-sys-kernel\") pod \"cilium-gx9qn\" (UID: \"2f8ad9c6-7428-43f3-bab9-22b0e28daf01\") " pod="kube-system/cilium-gx9qn" Sep 10 05:24:35.931101 sshd[4570]: Accepted publickey for core from 10.0.0.1 port 45072 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:24:35.932652 sshd-session[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:24:35.937507 systemd-logind[1569]: New session 29 of user core. Sep 10 05:24:35.948860 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 10 05:24:36.032864 kubelet[2763]: E0910 05:24:36.032828 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:36.033454 containerd[1592]: time="2025-09-10T05:24:36.033402239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gx9qn,Uid:2f8ad9c6-7428-43f3-bab9-22b0e28daf01,Namespace:kube-system,Attempt:0,}" Sep 10 05:24:36.058775 containerd[1592]: time="2025-09-10T05:24:36.058708803Z" level=info msg="connecting to shim c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909" address="unix:///run/containerd/s/239cffb4648dfbcbb8b9dbdcfae271eeb607470a650ddbbac399950a711c3efa" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:24:36.087827 systemd[1]: Started cri-containerd-c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909.scope - libcontainer container c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909. 
Sep 10 05:24:36.113826 containerd[1592]: time="2025-09-10T05:24:36.113760491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gx9qn,Uid:2f8ad9c6-7428-43f3-bab9-22b0e28daf01,Namespace:kube-system,Attempt:0,} returns sandbox id \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\"" Sep 10 05:24:36.114847 kubelet[2763]: E0910 05:24:36.114820 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:36.117276 containerd[1592]: time="2025-09-10T05:24:36.117222960Z" level=info msg="CreateContainer within sandbox \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 05:24:36.141479 containerd[1592]: time="2025-09-10T05:24:36.141381281Z" level=info msg="Container 81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:24:36.153730 containerd[1592]: time="2025-09-10T05:24:36.153673053Z" level=info msg="CreateContainer within sandbox \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f\"" Sep 10 05:24:36.154331 containerd[1592]: time="2025-09-10T05:24:36.154238712Z" level=info msg="StartContainer for \"81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f\"" Sep 10 05:24:36.155171 containerd[1592]: time="2025-09-10T05:24:36.155147416Z" level=info msg="connecting to shim 81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f" address="unix:///run/containerd/s/239cffb4648dfbcbb8b9dbdcfae271eeb607470a650ddbbac399950a711c3efa" protocol=ttrpc version=3 Sep 10 05:24:36.179731 systemd[1]: Started cri-containerd-81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f.scope - libcontainer container 81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f. Sep 10 05:24:36.210982 containerd[1592]: time="2025-09-10T05:24:36.210936522Z" level=info msg="StartContainer for \"81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f\" returns successfully" Sep 10 05:24:36.221251 systemd[1]: cri-containerd-81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f.scope: Deactivated successfully. 
Sep 10 05:24:36.223119 containerd[1592]: time="2025-09-10T05:24:36.223074511Z" level=info msg="received exit event container_id:\"81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f\" id:\"81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f\" pid:4642 exited_at:{seconds:1757481876 nanos:222183790}" Sep 10 05:24:36.223377 containerd[1592]: time="2025-09-10T05:24:36.223335278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f\" id:\"81f48f14de9a510758c8d2dc2efc641b8bad3e7d84cbfe0624860931e639b26f\" pid:4642 exited_at:{seconds:1757481876 nanos:222183790}" Sep 10 05:24:36.886741 kubelet[2763]: E0910 05:24:36.886704 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:36.888824 containerd[1592]: time="2025-09-10T05:24:36.888734537Z" level=info msg="CreateContainer within sandbox \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 05:24:36.897247 containerd[1592]: time="2025-09-10T05:24:36.897195136Z" level=info msg="Container 0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:24:36.905082 containerd[1592]: time="2025-09-10T05:24:36.905034770Z" level=info msg="CreateContainer within sandbox \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19\"" Sep 10 05:24:36.905953 containerd[1592]: time="2025-09-10T05:24:36.905925330Z" level=info msg="StartContainer for \"0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19\"" Sep 10 05:24:36.907099 containerd[1592]: time="2025-09-10T05:24:36.906967810Z" level=info msg="connecting to shim 0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19" address="unix:///run/containerd/s/239cffb4648dfbcbb8b9dbdcfae271eeb607470a650ddbbac399950a711c3efa" protocol=ttrpc version=3 Sep 10 05:24:36.944860 systemd[1]: Started cri-containerd-0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19.scope - libcontainer container 0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19. Sep 10 05:24:36.983405 containerd[1592]: time="2025-09-10T05:24:36.983358117Z" level=info msg="StartContainer for \"0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19\" returns successfully" Sep 10 05:24:36.991961 systemd[1]: cri-containerd-0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19.scope: Deactivated successfully. 
Sep 10 05:24:36.993031 containerd[1592]: time="2025-09-10T05:24:36.992988900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19\" id:\"0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19\" pid:4688 exited_at:{seconds:1757481876 nanos:992664671}" Sep 10 05:24:36.993182 containerd[1592]: time="2025-09-10T05:24:36.993104621Z" level=info msg="received exit event container_id:\"0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19\" id:\"0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19\" pid:4688 exited_at:{seconds:1757481876 nanos:992664671}" Sep 10 05:24:37.014933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f9f083f2628cf38c70e578eb22f4ba89483542eee2eaea734fb84bb552bde19-rootfs.mount: Deactivated successfully. Sep 10 05:24:37.891017 kubelet[2763]: E0910 05:24:37.890977 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:37.893664 containerd[1592]: time="2025-09-10T05:24:37.893614822Z" level=info msg="CreateContainer within sandbox \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 05:24:37.903822 containerd[1592]: time="2025-09-10T05:24:37.903776369Z" level=info msg="Container 35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:24:37.913018 containerd[1592]: time="2025-09-10T05:24:37.912959429Z" level=info msg="CreateContainer within sandbox \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457\"" Sep 10 05:24:37.913626 containerd[1592]: time="2025-09-10T05:24:37.913564433Z" level=info msg="StartContainer for \"35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457\"" Sep 10 05:24:37.915249 containerd[1592]: time="2025-09-10T05:24:37.915211806Z" level=info msg="connecting to shim 35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457" address="unix:///run/containerd/s/239cffb4648dfbcbb8b9dbdcfae271eeb607470a650ddbbac399950a711c3efa" protocol=ttrpc version=3 Sep 10 05:24:37.949780 systemd[1]: Started cri-containerd-35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457.scope - libcontainer container 35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457. Sep 10 05:24:37.995076 systemd[1]: cri-containerd-35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457.scope: Deactivated successfully. 
Sep 10 05:24:37.996147 containerd[1592]: time="2025-09-10T05:24:37.996120773Z" level=info msg="StartContainer for \"35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457\" returns successfully" Sep 10 05:24:37.996723 containerd[1592]: time="2025-09-10T05:24:37.996665242Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457\" id:\"35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457\" pid:4732 exited_at:{seconds:1757481877 nanos:996378614}" Sep 10 05:24:37.997005 containerd[1592]: time="2025-09-10T05:24:37.996973229Z" level=info msg="received exit event container_id:\"35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457\" id:\"35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457\" pid:4732 exited_at:{seconds:1757481877 nanos:996378614}" Sep 10 05:24:38.021045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35795543010e14b0872ce322b7e296882818981f3b651b73f3de59b84c524457-rootfs.mount: Deactivated successfully. Sep 10 05:24:38.681447 kubelet[2763]: E0910 05:24:38.681403 2763 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 05:24:38.895661 kubelet[2763]: E0910 05:24:38.895624 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:38.897423 containerd[1592]: time="2025-09-10T05:24:38.897347486Z" level=info msg="CreateContainer within sandbox \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 05:24:38.909015 containerd[1592]: time="2025-09-10T05:24:38.908967527Z" level=info msg="Container a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:24:38.918344 containerd[1592]: time="2025-09-10T05:24:38.918295166Z" level=info msg="CreateContainer within sandbox \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa\"" Sep 10 05:24:38.918866 containerd[1592]: time="2025-09-10T05:24:38.918818894Z" level=info msg="StartContainer for \"a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa\"" Sep 10 05:24:38.919697 containerd[1592]: time="2025-09-10T05:24:38.919669967Z" level=info msg="connecting to shim a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa" address="unix:///run/containerd/s/239cffb4648dfbcbb8b9dbdcfae271eeb607470a650ddbbac399950a711c3efa" protocol=ttrpc version=3 Sep 10 05:24:38.943722 systemd[1]: Started cri-containerd-a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa.scope - libcontainer container a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa. Sep 10 05:24:38.969209 systemd[1]: cri-containerd-a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa.scope: Deactivated successfully. 
Sep 10 05:24:38.971486 containerd[1592]: time="2025-09-10T05:24:38.971447954Z" level=info msg="received exit event container_id:\"a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa\" id:\"a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa\" pid:4771 exited_at:{seconds:1757481878 nanos:969400809}" Sep 10 05:24:38.972816 containerd[1592]: time="2025-09-10T05:24:38.972772520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa\" id:\"a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa\" pid:4771 exited_at:{seconds:1757481878 nanos:969400809}" Sep 10 05:24:38.979533 containerd[1592]: time="2025-09-10T05:24:38.979484921Z" level=info msg="StartContainer for \"a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa\" returns successfully" Sep 10 05:24:39.021310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0d99501c5bea152f0764fade31cafaf7b8358f98c611dd6dfd8cc966bc83aaa-rootfs.mount: Deactivated successfully. Sep 10 05:24:39.900166 kubelet[2763]: E0910 05:24:39.900130 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:39.901839 containerd[1592]: time="2025-09-10T05:24:39.901787043Z" level=info msg="CreateContainer within sandbox \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 05:24:39.920617 containerd[1592]: time="2025-09-10T05:24:39.918720534Z" level=info msg="Container d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:24:39.927208 containerd[1592]: time="2025-09-10T05:24:39.927169210Z" level=info msg="CreateContainer within sandbox \"c14021d7d082cbb07a4bd786cc0cb06b8da296b3fa1bd82fa10d084b1a3b5909\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa\"" Sep 10 05:24:39.927782 containerd[1592]: time="2025-09-10T05:24:39.927751690Z" level=info msg="StartContainer for \"d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa\"" Sep 10 05:24:39.928742 containerd[1592]: time="2025-09-10T05:24:39.928715107Z" level=info msg="connecting to shim d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa" address="unix:///run/containerd/s/239cffb4648dfbcbb8b9dbdcfae271eeb607470a650ddbbac399950a711c3efa" protocol=ttrpc version=3 Sep 10 05:24:39.949737 systemd[1]: Started cri-containerd-d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa.scope - libcontainer container d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa. 
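The entries from the RunPodSandbox call onward record the new cilium-gx9qn pod working through its init sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each get a CreateContainer, StartContainer and TaskExit in turn before the long-running cilium-agent container is started. As an illustration of how that sequence would look from the API side, here is a hedged client-go sketch; the pod name and namespace come from the log, while the kubeconfig path and the rest of the setup are assumptions.

// Sketch only: list init-container results for the cilium pod whose creation
// the entries above record.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; in-cluster config would also work.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-gx9qn", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Per the log, the init containers should have terminated cleanly and
	// cilium-agent should be the remaining long-running container.
	for _, s := range pod.Status.InitContainerStatuses {
		if t := s.State.Terminated; t != nil {
			fmt.Printf("init %-25s exit=%d\n", s.Name, t.ExitCode)
		}
	}
	for _, s := range pod.Status.ContainerStatuses {
		fmt.Printf("container %-20s ready=%v\n", s.Name, s.Ready)
	}
}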
Sep 10 05:24:39.985631 containerd[1592]: time="2025-09-10T05:24:39.985566475Z" level=info msg="StartContainer for \"d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa\" returns successfully" Sep 10 05:24:40.049517 containerd[1592]: time="2025-09-10T05:24:40.049472917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa\" id:\"7c4322fd9295e386503976b63439cf5ac2c4ee1fcfd36d5ea24df04db4a900a5\" pid:4840 exited_at:{seconds:1757481880 nanos:49197342}" Sep 10 05:24:40.407609 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 10 05:24:40.906369 kubelet[2763]: E0910 05:24:40.906338 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:40.921880 kubelet[2763]: I0910 05:24:40.921809 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gx9qn" podStartSLOduration=5.921788353 podStartE2EDuration="5.921788353s" podCreationTimestamp="2025-09-10 05:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:24:40.921727076 +0000 UTC m=+87.396824525" watchObservedRunningTime="2025-09-10 05:24:40.921788353 +0000 UTC m=+87.396885802" Sep 10 05:24:42.033453 kubelet[2763]: E0910 05:24:42.033412 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:42.242156 containerd[1592]: time="2025-09-10T05:24:42.242114884Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa\" id:\"2e1925fa6f9408da30455981374caa9db43942fa65b1780e21b42e9d4680f004\" pid:5000 exit_status:1 exited_at:{seconds:1757481882 nanos:241830552}" Sep 10 05:24:43.469755 systemd-networkd[1496]: lxc_health: Link UP Sep 10 05:24:43.471301 systemd-networkd[1496]: lxc_health: Gained carrier Sep 10 05:24:44.034451 kubelet[2763]: E0910 05:24:44.034381 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:44.361438 containerd[1592]: time="2025-09-10T05:24:44.360819417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa\" id:\"260920a5c6eb3394b3f25f4103d67a68568fa55bb95cd652ba09aaf997846e71\" pid:5373 exited_at:{seconds:1757481884 nanos:359893366}" Sep 10 05:24:44.914147 kubelet[2763]: E0910 05:24:44.914120 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:44.976898 systemd-networkd[1496]: lxc_health: Gained IPv6LL Sep 10 05:24:45.916412 kubelet[2763]: E0910 05:24:45.916363 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:24:46.461673 containerd[1592]: time="2025-09-10T05:24:46.461626284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa\" 
id:\"d3870b04e3be2e1ad8caba4fb8fb0f2210c276153562a01b8dbe52f14a3dd78f\" pid:5407 exited_at:{seconds:1757481886 nanos:461273903}" Sep 10 05:24:48.560155 containerd[1592]: time="2025-09-10T05:24:48.559915882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2b8ff1ee65877f58c7de0299ae82977653770bcd5d18cd4226671ce79c5aa\" id:\"eb81192f5e05b6a24a7f7a9275ca3352c2efd0801126007656609fc1e12c041c\" pid:5438 exited_at:{seconds:1757481888 nanos:559543354}" Sep 10 05:24:48.565271 sshd[4573]: Connection closed by 10.0.0.1 port 45072 Sep 10 05:24:48.565710 sshd-session[4570]: pam_unix(sshd:session): session closed for user core Sep 10 05:24:48.570074 systemd[1]: sshd@28-10.0.0.44:22-10.0.0.1:45072.service: Deactivated successfully. Sep 10 05:24:48.571948 systemd[1]: session-29.scope: Deactivated successfully. Sep 10 05:24:48.572718 systemd-logind[1569]: Session 29 logged out. Waiting for processes to exit. Sep 10 05:24:48.573776 systemd-logind[1569]: Removed session 29.