Sep 9 00:18:52.846679 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:16:40 -00 2025 Sep 9 00:18:52.846711 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 00:18:52.846722 kernel: BIOS-provided physical RAM map: Sep 9 00:18:52.846729 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 00:18:52.846735 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 9 00:18:52.846742 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 9 00:18:52.846750 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 9 00:18:52.846756 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 9 00:18:52.846767 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 9 00:18:52.846774 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 9 00:18:52.846780 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 9 00:18:52.846787 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 9 00:18:52.846793 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 9 00:18:52.846800 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 9 00:18:52.846810 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 9 00:18:52.846818 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 9 00:18:52.846827 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 9 00:18:52.846834 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 9 00:18:52.846841 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 9 00:18:52.846848 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 9 00:18:52.846855 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 9 00:18:52.846862 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 9 00:18:52.846869 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 9 00:18:52.846876 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 00:18:52.846883 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 9 00:18:52.846892 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 9 00:18:52.846899 kernel: NX (Execute Disable) protection: active Sep 9 00:18:52.846906 kernel: APIC: Static calls initialized Sep 9 00:18:52.846913 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Sep 9 00:18:52.846920 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Sep 9 00:18:52.846927 kernel: extended physical RAM map: Sep 9 00:18:52.846935 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 00:18:52.846942 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 9 00:18:52.846949 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 9 00:18:52.846956 kernel: reserve setup_data: [mem 
0x0000000000808000-0x000000000080afff] usable Sep 9 00:18:52.846963 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 9 00:18:52.846972 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 9 00:18:52.846979 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 9 00:18:52.846986 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Sep 9 00:18:52.846994 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Sep 9 00:18:52.847004 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Sep 9 00:18:52.847011 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Sep 9 00:18:52.847021 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Sep 9 00:18:52.847029 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 9 00:18:52.847036 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 9 00:18:52.847043 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 9 00:18:52.847051 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 9 00:18:52.847070 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 9 00:18:52.847084 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 9 00:18:52.847103 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 9 00:18:52.847122 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 9 00:18:52.847136 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 9 00:18:52.847145 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 9 00:18:52.847174 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 9 00:18:52.847183 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 9 00:18:52.847194 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 00:18:52.847203 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 9 00:18:52.847212 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 9 00:18:52.847226 kernel: efi: EFI v2.7 by EDK II Sep 9 00:18:52.847235 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Sep 9 00:18:52.847245 kernel: random: crng init done Sep 9 00:18:52.847258 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 9 00:18:52.847267 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 9 00:18:52.847287 kernel: secureboot: Secure boot disabled Sep 9 00:18:52.847299 kernel: SMBIOS 2.8 present. 
Sep 9 00:18:52.847311 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 9 00:18:52.847323 kernel: DMI: Memory slots populated: 1/1 Sep 9 00:18:52.847335 kernel: Hypervisor detected: KVM Sep 9 00:18:52.847347 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 00:18:52.847371 kernel: kvm-clock: using sched offset of 6629712845 cycles Sep 9 00:18:52.847383 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 00:18:52.847396 kernel: tsc: Detected 2794.748 MHz processor Sep 9 00:18:52.847409 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 00:18:52.847421 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 00:18:52.847438 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 9 00:18:52.847450 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 9 00:18:52.847463 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 00:18:52.847475 kernel: Using GB pages for direct mapping Sep 9 00:18:52.847488 kernel: ACPI: Early table checksum verification disabled Sep 9 00:18:52.847500 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 9 00:18:52.847524 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 9 00:18:52.847545 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:18:52.847556 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:18:52.847571 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 9 00:18:52.847580 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:18:52.847591 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:18:52.847601 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:18:52.847611 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:18:52.847621 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 9 00:18:52.847632 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 9 00:18:52.847642 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 9 00:18:52.847655 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 9 00:18:52.847666 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 9 00:18:52.847676 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 9 00:18:52.847686 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 9 00:18:52.847696 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 9 00:18:52.847706 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 9 00:18:52.847716 kernel: No NUMA configuration found Sep 9 00:18:52.847727 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 9 00:18:52.847737 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Sep 9 00:18:52.847747 kernel: Zone ranges: Sep 9 00:18:52.847761 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 00:18:52.847771 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 9 00:18:52.847781 kernel: Normal empty Sep 9 00:18:52.847791 kernel: Device empty Sep 9 00:18:52.847801 kernel: Movable zone start for each node Sep 9 00:18:52.847810 kernel: Early memory node ranges Sep 9 
00:18:52.847820 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 9 00:18:52.847829 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 9 00:18:52.847843 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 9 00:18:52.847857 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 9 00:18:52.847867 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 9 00:18:52.847877 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 9 00:18:52.847886 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Sep 9 00:18:52.847896 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Sep 9 00:18:52.847905 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 9 00:18:52.847918 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:18:52.847928 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 9 00:18:52.847950 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 9 00:18:52.847960 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:18:52.847969 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 9 00:18:52.847980 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 9 00:18:52.847992 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 9 00:18:52.848003 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 9 00:18:52.848013 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 9 00:18:52.848023 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 9 00:18:52.848033 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 00:18:52.848047 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 00:18:52.848057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 9 00:18:52.848068 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 00:18:52.848079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 00:18:52.848089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 00:18:52.848100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 00:18:52.848110 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 00:18:52.848121 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 9 00:18:52.848131 kernel: TSC deadline timer available Sep 9 00:18:52.848145 kernel: CPU topo: Max. logical packages: 1 Sep 9 00:18:52.848175 kernel: CPU topo: Max. logical dies: 1 Sep 9 00:18:52.848186 kernel: CPU topo: Max. dies per package: 1 Sep 9 00:18:52.848196 kernel: CPU topo: Max. threads per core: 1 Sep 9 00:18:52.848206 kernel: CPU topo: Num. cores per package: 4 Sep 9 00:18:52.848216 kernel: CPU topo: Num. 
threads per package: 4 Sep 9 00:18:52.848227 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 9 00:18:52.848238 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 00:18:52.848248 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 9 00:18:52.848259 kernel: kvm-guest: setup PV sched yield Sep 9 00:18:52.848273 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 9 00:18:52.848283 kernel: Booting paravirtualized kernel on KVM Sep 9 00:18:52.848294 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 00:18:52.848305 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 9 00:18:52.848316 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 9 00:18:52.848326 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 9 00:18:52.848336 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 9 00:18:52.848346 kernel: kvm-guest: PV spinlocks enabled Sep 9 00:18:52.848365 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 00:18:52.848380 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 00:18:52.848394 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:18:52.848405 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 00:18:52.848416 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:18:52.848426 kernel: Fallback order for Node 0: 0 Sep 9 00:18:52.848437 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Sep 9 00:18:52.848447 kernel: Policy zone: DMA32 Sep 9 00:18:52.848458 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:18:52.848472 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 00:18:52.848483 kernel: ftrace: allocating 40099 entries in 157 pages Sep 9 00:18:52.848493 kernel: ftrace: allocated 157 pages with 5 groups Sep 9 00:18:52.848504 kernel: Dynamic Preempt: voluntary Sep 9 00:18:52.848514 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:18:52.848526 kernel: rcu: RCU event tracing is enabled. Sep 9 00:18:52.848536 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 00:18:52.848546 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:18:52.848557 kernel: Rude variant of Tasks RCU enabled. Sep 9 00:18:52.848571 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:18:52.848581 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 00:18:52.848596 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 00:18:52.848607 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:18:52.848617 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:18:52.848628 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 9 00:18:52.848638 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 9 00:18:52.848649 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 00:18:52.848660 kernel: Console: colour dummy device 80x25 Sep 9 00:18:52.848674 kernel: printk: legacy console [ttyS0] enabled Sep 9 00:18:52.848685 kernel: ACPI: Core revision 20240827 Sep 9 00:18:52.848696 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 9 00:18:52.848707 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 00:18:52.848718 kernel: x2apic enabled Sep 9 00:18:52.848729 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 00:18:52.848740 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 9 00:18:52.848751 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 9 00:18:52.848761 kernel: kvm-guest: setup PV IPIs Sep 9 00:18:52.848776 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 00:18:52.848786 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 9 00:18:52.848797 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 9 00:18:52.848808 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 00:18:52.848819 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 9 00:18:52.848829 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 9 00:18:52.848839 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 00:18:52.848849 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 00:18:52.848857 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 00:18:52.848868 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 9 00:18:52.848876 kernel: active return thunk: retbleed_return_thunk Sep 9 00:18:52.848883 kernel: RETBleed: Mitigation: untrained return thunk Sep 9 00:18:52.848896 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 00:18:52.848904 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 00:18:52.848912 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 9 00:18:52.848920 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 9 00:18:52.848929 kernel: active return thunk: srso_return_thunk Sep 9 00:18:52.848939 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 9 00:18:52.848947 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 00:18:52.848955 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 00:18:52.848963 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 00:18:52.848971 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 00:18:52.848978 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 9 00:18:52.848987 kernel: Freeing SMP alternatives memory: 32K Sep 9 00:18:52.848994 kernel: pid_max: default: 32768 minimum: 301 Sep 9 00:18:52.849002 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 00:18:52.849013 kernel: landlock: Up and running. Sep 9 00:18:52.849020 kernel: SELinux: Initializing. 
Sep 9 00:18:52.849028 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:18:52.849036 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:18:52.849044 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 9 00:18:52.849052 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 9 00:18:52.849060 kernel: ... version: 0 Sep 9 00:18:52.849068 kernel: ... bit width: 48 Sep 9 00:18:52.849076 kernel: ... generic registers: 6 Sep 9 00:18:52.849086 kernel: ... value mask: 0000ffffffffffff Sep 9 00:18:52.849093 kernel: ... max period: 00007fffffffffff Sep 9 00:18:52.849101 kernel: ... fixed-purpose events: 0 Sep 9 00:18:52.849109 kernel: ... event mask: 000000000000003f Sep 9 00:18:52.849117 kernel: signal: max sigframe size: 1776 Sep 9 00:18:52.849125 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:18:52.849135 kernel: rcu: Max phase no-delay instances is 400. Sep 9 00:18:52.849143 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 9 00:18:52.849185 kernel: smp: Bringing up secondary CPUs ... Sep 9 00:18:52.849212 kernel: smpboot: x86: Booting SMP configuration: Sep 9 00:18:52.849234 kernel: .... node #0, CPUs: #1 #2 #3 Sep 9 00:18:52.849244 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 00:18:52.849254 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 9 00:18:52.849266 kernel: Memory: 2424720K/2565800K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 135148K reserved, 0K cma-reserved) Sep 9 00:18:52.849277 kernel: devtmpfs: initialized Sep 9 00:18:52.849289 kernel: x86/mm: Memory block size: 128MB Sep 9 00:18:52.849299 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 9 00:18:52.849310 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 9 00:18:52.849331 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 9 00:18:52.849343 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 9 00:18:52.849353 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Sep 9 00:18:52.849374 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 9 00:18:52.849385 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:18:52.849395 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 00:18:52.849406 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:18:52.849416 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:18:52.849426 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:18:52.849442 kernel: audit: type=2000 audit(1757377129.518:1): state=initialized audit_enabled=0 res=1 Sep 9 00:18:52.849452 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:18:52.849462 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 00:18:52.849472 kernel: cpuidle: using governor menu Sep 9 00:18:52.849483 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:18:52.849493 kernel: dca service started, version 1.12.1 Sep 9 00:18:52.849503 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Sep 9 00:18:52.849513 kernel: PCI: Using configuration type 1 for base access Sep 
9 00:18:52.849523 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 9 00:18:52.849537 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:18:52.849547 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:18:52.849557 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:18:52.849567 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:18:52.849578 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:18:52.849588 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:18:52.849597 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:18:52.849607 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:18:52.849617 kernel: ACPI: Interpreter enabled Sep 9 00:18:52.849630 kernel: ACPI: PM: (supports S0 S3 S5) Sep 9 00:18:52.849640 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 00:18:52.849650 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 00:18:52.849660 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 00:18:52.849670 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 9 00:18:52.849681 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 00:18:52.850101 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:18:52.850291 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 9 00:18:52.850690 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 9 00:18:52.850745 kernel: PCI host bridge to bus 0000:00 Sep 9 00:18:52.850921 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 00:18:52.851037 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 9 00:18:52.851276 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 00:18:52.851409 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 9 00:18:52.851520 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 9 00:18:52.851639 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 9 00:18:52.851760 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 00:18:52.851918 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 9 00:18:52.852084 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 9 00:18:52.852245 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 9 00:18:52.852417 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 9 00:18:52.852577 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 9 00:18:52.852745 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 00:18:52.852916 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 9 00:18:52.853062 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 9 00:18:52.853226 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 9 00:18:52.853394 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 9 00:18:52.853590 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 9 00:18:52.853741 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 9 00:18:52.853888 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] 
Sep 9 00:18:52.854040 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 9 00:18:52.854220 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 9 00:18:52.854387 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 9 00:18:52.854539 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 9 00:18:52.854940 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 9 00:18:52.855095 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 9 00:18:52.855278 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 9 00:18:52.855443 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 9 00:18:52.855617 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 9 00:18:52.855765 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 9 00:18:52.855911 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 9 00:18:52.856087 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 9 00:18:52.856283 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 9 00:18:52.856300 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 00:18:52.856311 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 00:18:52.856321 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 00:18:52.856332 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 00:18:52.856343 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 9 00:18:52.856354 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 9 00:18:52.856380 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 9 00:18:52.856390 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 9 00:18:52.856401 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 9 00:18:52.856412 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 9 00:18:52.856422 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 9 00:18:52.856433 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 9 00:18:52.856453 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 9 00:18:52.856467 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 9 00:18:52.856478 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 9 00:18:52.856492 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 9 00:18:52.856503 kernel: iommu: Default domain type: Translated Sep 9 00:18:52.856514 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 00:18:52.856524 kernel: efivars: Registered efivars operations Sep 9 00:18:52.856535 kernel: PCI: Using ACPI for IRQ routing Sep 9 00:18:52.856546 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 00:18:52.856557 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 9 00:18:52.856567 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 9 00:18:52.856577 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Sep 9 00:18:52.856591 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Sep 9 00:18:52.856602 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 9 00:18:52.856612 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 9 00:18:52.856623 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Sep 9 00:18:52.856633 kernel: e820: reserve RAM buffer [mem 
0x9cedc000-0x9fffffff] Sep 9 00:18:52.856789 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 9 00:18:52.856936 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 9 00:18:52.857082 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 00:18:52.857101 kernel: vgaarb: loaded Sep 9 00:18:52.857112 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 9 00:18:52.857123 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 9 00:18:52.857133 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 00:18:52.857144 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:18:52.857177 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:18:52.857188 kernel: pnp: PnP ACPI init Sep 9 00:18:52.857401 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 9 00:18:52.857424 kernel: pnp: PnP ACPI: found 6 devices Sep 9 00:18:52.857435 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 00:18:52.857447 kernel: NET: Registered PF_INET protocol family Sep 9 00:18:52.857457 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 00:18:52.857469 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 00:18:52.857480 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:18:52.857491 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:18:52.857502 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 00:18:52.857513 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 00:18:52.857527 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:18:52.857538 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:18:52.857549 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:18:52.857560 kernel: NET: Registered PF_XDP protocol family Sep 9 00:18:52.857715 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 9 00:18:52.857867 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 9 00:18:52.858006 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 00:18:52.858139 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 9 00:18:52.858314 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 00:18:52.858458 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 9 00:18:52.858590 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 9 00:18:52.858731 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 9 00:18:52.858753 kernel: PCI: CLS 0 bytes, default 64 Sep 9 00:18:52.858764 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 9 00:18:52.858776 kernel: Initialise system trusted keyrings Sep 9 00:18:52.858790 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 00:18:52.858801 kernel: Key type asymmetric registered Sep 9 00:18:52.858812 kernel: Asymmetric key parser 'x509' registered Sep 9 00:18:52.858823 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 00:18:52.858834 kernel: io scheduler mq-deadline registered Sep 9 00:18:52.858845 kernel: io scheduler kyber registered Sep 9 00:18:52.858856 
kernel: io scheduler bfq registered Sep 9 00:18:52.858870 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 00:18:52.858882 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 00:18:52.858893 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 00:18:52.858904 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 00:18:52.858915 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:18:52.858926 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 00:18:52.858938 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 00:18:52.858949 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 00:18:52.858960 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 00:18:52.859138 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 9 00:18:52.859172 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 00:18:52.859302 kernel: rtc_cmos 00:04: registered as rtc0 Sep 9 00:18:52.859435 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:18:52 UTC (1757377132) Sep 9 00:18:52.859552 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 9 00:18:52.859562 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 00:18:52.859571 kernel: efifb: probing for efifb Sep 9 00:18:52.859580 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 9 00:18:52.859592 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 9 00:18:52.859601 kernel: efifb: scrolling: redraw Sep 9 00:18:52.859609 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 9 00:18:52.859617 kernel: Console: switching to colour frame buffer device 160x50 Sep 9 00:18:52.859626 kernel: fb0: EFI VGA frame buffer device Sep 9 00:18:52.859634 kernel: pstore: Using crash dump compression: deflate Sep 9 00:18:52.859642 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 00:18:52.859651 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:18:52.859659 kernel: Segment Routing with IPv6 Sep 9 00:18:52.859669 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:18:52.859678 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:18:52.859686 kernel: Key type dns_resolver registered Sep 9 00:18:52.859694 kernel: IPI shorthand broadcast: enabled Sep 9 00:18:52.859704 kernel: sched_clock: Marking stable (3812003297, 161565235)->(3992539196, -18970664) Sep 9 00:18:52.859715 kernel: registered taskstats version 1 Sep 9 00:18:52.859726 kernel: Loading compiled-in X.509 certificates Sep 9 00:18:52.859738 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 08d0986253b18b7fd74c2cc5404da4ba92260e75' Sep 9 00:18:52.859749 kernel: Demotion targets for Node 0: null Sep 9 00:18:52.859762 kernel: Key type .fscrypt registered Sep 9 00:18:52.859773 kernel: Key type fscrypt-provisioning registered Sep 9 00:18:52.859784 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 00:18:52.859793 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:18:52.859801 kernel: ima: No architecture policies found Sep 9 00:18:52.859809 kernel: clk: Disabling unused clocks Sep 9 00:18:52.859818 kernel: Warning: unable to open an initial console. 
Sep 9 00:18:52.859827 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 9 00:18:52.859835 kernel: Write protecting the kernel read-only data: 24576k Sep 9 00:18:52.859846 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 9 00:18:52.859854 kernel: Run /init as init process Sep 9 00:18:52.859862 kernel: with arguments: Sep 9 00:18:52.859871 kernel: /init Sep 9 00:18:52.859879 kernel: with environment: Sep 9 00:18:52.859887 kernel: HOME=/ Sep 9 00:18:52.859895 kernel: TERM=linux Sep 9 00:18:52.859903 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:18:52.859917 systemd[1]: Successfully made /usr/ read-only. Sep 9 00:18:52.859932 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:18:52.859942 systemd[1]: Detected virtualization kvm. Sep 9 00:18:52.859951 systemd[1]: Detected architecture x86-64. Sep 9 00:18:52.859959 systemd[1]: Running in initrd. Sep 9 00:18:52.859968 systemd[1]: No hostname configured, using default hostname. Sep 9 00:18:52.859977 systemd[1]: Hostname set to <localhost>. Sep 9 00:18:52.859986 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:18:52.859997 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:18:52.860006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:18:52.860015 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:18:52.860025 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:18:52.860034 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:18:52.860043 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:18:52.860052 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:18:52.860065 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:18:52.860074 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:18:52.860083 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:18:52.860092 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:18:52.860101 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:18:52.860110 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:18:52.860118 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:18:52.860127 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:18:52.860138 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:18:52.860163 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:18:52.860172 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:18:52.860181 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 00:18:52.860190 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:18:52.860199 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:18:52.860208 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:18:52.860217 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:18:52.860226 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:18:52.860238 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:18:52.860247 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:18:52.860256 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 00:18:52.860265 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:18:52.860274 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:18:52.860283 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:18:52.860292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:18:52.860301 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:18:52.860312 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:18:52.860321 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:18:52.860331 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:18:52.860386 systemd-journald[220]: Collecting audit messages is disabled. Sep 9 00:18:52.860413 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:18:52.860423 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:18:52.860432 systemd-journald[220]: Journal started Sep 9 00:18:52.860457 systemd-journald[220]: Runtime Journal (/run/log/journal/7183bded438540ca98668e764ff89d24) is 6M, max 48.5M, 42.4M free. Sep 9 00:18:52.847638 systemd-modules-load[222]: Inserted module 'overlay' Sep 9 00:18:52.869653 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:18:52.873199 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:18:52.877376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:18:52.880438 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:18:52.881772 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 9 00:18:52.882919 kernel: Bridge firewalling registered Sep 9 00:18:52.883056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:18:52.891510 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:18:52.893424 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:18:52.896026 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 00:18:52.900260 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:18:52.900968 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:18:52.909337 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Sep 9 00:18:52.914484 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:18:52.918656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:18:52.922038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:18:52.926944 dracut-cmdline[255]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 00:18:52.976782 systemd-resolved[271]: Positive Trust Anchors: Sep 9 00:18:52.976805 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:18:52.976836 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:18:52.979890 systemd-resolved[271]: Defaulting to hostname 'linux'. Sep 9 00:18:52.985417 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:18:52.985720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:18:53.038192 kernel: SCSI subsystem initialized Sep 9 00:18:53.048183 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:18:53.059188 kernel: iscsi: registered transport (tcp) Sep 9 00:18:53.086244 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:18:53.086341 kernel: QLogic iSCSI HBA Driver Sep 9 00:18:53.107525 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:18:53.134082 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:18:53.134797 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:18:53.231258 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:18:53.233434 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:18:53.301203 kernel: raid6: avx2x4 gen() 26211 MB/s Sep 9 00:18:53.318183 kernel: raid6: avx2x2 gen() 20765 MB/s Sep 9 00:18:53.335647 kernel: raid6: avx2x1 gen() 16727 MB/s Sep 9 00:18:53.335667 kernel: raid6: using algorithm avx2x4 gen() 26211 MB/s Sep 9 00:18:53.353427 kernel: raid6: .... xor() 6403 MB/s, rmw enabled Sep 9 00:18:53.353452 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:18:53.376212 kernel: xor: automatically using best checksumming function avx Sep 9 00:18:53.559211 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:18:53.569967 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:18:53.573396 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:18:53.605321 systemd-udevd[472]: Using default interface naming scheme 'v255'. 
Sep 9 00:18:53.611392 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:18:53.612973 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:18:53.649291 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 9 00:18:53.684221 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:18:53.688356 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:18:53.774109 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:18:53.777661 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 00:18:53.815195 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 00:18:53.817000 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:18:53.821353 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:18:53.821379 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:18:53.821390 kernel: GPT:9289727 != 19775487 Sep 9 00:18:53.821402 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:18:53.821413 kernel: GPT:9289727 != 19775487 Sep 9 00:18:53.821423 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:18:53.822320 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:18:53.836173 kernel: AES CTR mode by8 optimization enabled Sep 9 00:18:53.836216 kernel: libata version 3.00 loaded. Sep 9 00:18:53.844205 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 00:18:53.851170 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 00:18:53.853280 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 00:18:53.858443 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 00:18:53.858701 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 00:18:53.858852 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 00:18:53.864064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:18:53.865578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:18:53.869175 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:18:53.872160 kernel: scsi host0: ahci Sep 9 00:18:53.872345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:18:53.874298 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 9 00:18:53.877284 kernel: scsi host1: ahci Sep 9 00:18:53.879173 kernel: scsi host2: ahci Sep 9 00:18:53.879640 kernel: scsi host3: ahci Sep 9 00:18:53.886180 kernel: scsi host4: ahci Sep 9 00:18:53.905195 kernel: scsi host5: ahci Sep 9 00:18:53.905513 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 9 00:18:53.905525 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 9 00:18:53.905536 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 9 00:18:53.905546 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 9 00:18:53.905556 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 9 00:18:53.905567 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 9 00:18:53.915278 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 00:18:53.918558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:18:53.931240 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:18:53.941365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 00:18:53.941897 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 00:18:53.956651 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:18:53.958870 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:18:53.997211 disk-uuid[638]: Primary Header is updated. Sep 9 00:18:53.997211 disk-uuid[638]: Secondary Entries is updated. Sep 9 00:18:53.997211 disk-uuid[638]: Secondary Header is updated. Sep 9 00:18:54.001175 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:18:54.214198 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 00:18:54.214287 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 00:18:54.215181 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 00:18:54.216192 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 00:18:54.216268 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 00:18:54.217179 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:18:54.218579 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 00:18:54.218595 kernel: ata3.00: applying bridge limits Sep 9 00:18:54.219175 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 00:18:54.220196 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:18:54.220241 kernel: ata3.00: configured for UDMA/100 Sep 9 00:18:54.221195 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 00:18:54.276189 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 00:18:54.276436 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:18:54.290239 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:18:54.728650 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 00:18:54.730254 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:18:54.732026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:18:54.733213 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Sep 9 00:18:54.736390 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:18:54.775549 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:18:55.011178 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:18:55.011756 disk-uuid[639]: The operation has completed successfully. Sep 9 00:18:55.048131 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:18:55.048329 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:18:55.084298 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:18:55.110633 sh[669]: Success Sep 9 00:18:55.129477 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:18:55.129513 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:18:55.130617 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 00:18:55.140175 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 00:18:55.175946 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:18:55.179835 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:18:55.206462 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 00:18:55.212173 kernel: BTRFS: device fsid c483a4f4-f0a7-42f4-ac8d-111955dab3a7 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (681) Sep 9 00:18:55.214213 kernel: BTRFS info (device dm-0): first mount of filesystem c483a4f4-f0a7-42f4-ac8d-111955dab3a7 Sep 9 00:18:55.214262 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:18:55.219614 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:18:55.219651 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 00:18:55.221024 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:18:55.222070 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:18:55.223242 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:18:55.225538 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:18:55.226743 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:18:55.273842 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (714) Sep 9 00:18:55.273912 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:18:55.273928 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:18:55.278188 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:18:55.278221 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:18:55.283201 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:18:55.284511 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:18:55.288494 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 9 00:18:55.610852 ignition[761]: Ignition 2.21.0 Sep 9 00:18:55.610864 ignition[761]: Stage: fetch-offline Sep 9 00:18:55.610900 ignition[761]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:18:55.610909 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:18:55.611002 ignition[761]: parsed url from cmdline: "" Sep 9 00:18:55.611005 ignition[761]: no config URL provided Sep 9 00:18:55.611011 ignition[761]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:18:55.611019 ignition[761]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:18:55.611044 ignition[761]: op(1): [started] loading QEMU firmware config module Sep 9 00:18:55.611049 ignition[761]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:18:55.622802 ignition[761]: op(1): [finished] loading QEMU firmware config module Sep 9 00:18:55.629539 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:18:55.635363 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:18:55.665937 ignition[761]: parsing config with SHA512: 3a1a1d704b0acd68a423a3ab8c31ae8b6c71bb1dde280847d0523047e04ffdaa5437bc8d3530a4ee9079b52e67d3d104d805bfb9d58f1217719fcf3878ae4344 Sep 9 00:18:55.675591 unknown[761]: fetched base config from "system" Sep 9 00:18:55.675607 unknown[761]: fetched user config from "qemu" Sep 9 00:18:55.676077 ignition[761]: fetch-offline: fetch-offline passed Sep 9 00:18:55.676184 ignition[761]: Ignition finished successfully Sep 9 00:18:55.683466 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:18:55.691924 systemd-networkd[859]: lo: Link UP Sep 9 00:18:55.691935 systemd-networkd[859]: lo: Gained carrier Sep 9 00:18:55.695308 systemd-networkd[859]: Enumeration completed Sep 9 00:18:55.695552 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:18:55.698601 systemd[1]: Reached target network.target - Network. Sep 9 00:18:55.698996 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:18:55.700012 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:18:55.714206 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:18:55.714212 systemd-networkd[859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:18:55.715880 systemd-networkd[859]: eth0: Link UP Sep 9 00:18:55.716586 systemd-networkd[859]: eth0: Gained carrier Sep 9 00:18:55.716608 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:18:55.740247 systemd-networkd[859]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:18:55.764008 ignition[863]: Ignition 2.21.0 Sep 9 00:18:55.764022 ignition[863]: Stage: kargs Sep 9 00:18:55.764353 ignition[863]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:18:55.764365 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:18:55.765086 ignition[863]: kargs: kargs passed Sep 9 00:18:55.765134 ignition[863]: Ignition finished successfully Sep 9 00:18:55.770343 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:18:55.773744 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
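For context on the networkd lines above: eth0 matched the stock catch-all unit /usr/lib/systemd/network/zz-default.network and then obtained 10.0.0.67/16 from 10.0.0.1 over DHCP. As a minimal sketch only (the unit Flatcar actually ships is not reproduced in this log and may set additional options), a catch-all DHCP .network unit of this kind looks like:

    [Match]
    # match every interface, including kernel-named ones such as eth0
    Name=*

    [Network]
    # acquire addresses via DHCP (v4 and v6)
    DHCP=yes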
Sep 9 00:18:55.812167 ignition[872]: Ignition 2.21.0 Sep 9 00:18:55.813212 ignition[872]: Stage: disks Sep 9 00:18:55.814706 ignition[872]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:18:55.814733 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:18:55.815917 ignition[872]: disks: disks passed Sep 9 00:18:55.815974 ignition[872]: Ignition finished successfully Sep 9 00:18:55.820574 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:18:55.821910 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:18:55.822440 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:18:55.822785 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:18:55.823144 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:18:55.823693 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:18:55.832447 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:18:55.869477 systemd-resolved[271]: Detected conflict on linux IN A 10.0.0.67 Sep 9 00:18:55.869497 systemd-resolved[271]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Sep 9 00:18:55.877038 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 00:18:56.130198 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:18:56.130422 systemd-resolved[271]: Detected conflict on linux3 IN A 10.0.0.67 Sep 9 00:18:56.130435 systemd-resolved[271]: Hostname conflict, changing published hostname from 'linux3' to 'linux12'. Sep 9 00:18:56.132027 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:18:56.258199 kernel: EXT4-fs (vda9): mounted filesystem 4b59fff7-9272-4156-91f8-37989d927dc6 r/w with ordered data mode. Quota mode: none. Sep 9 00:18:56.259464 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:18:56.260801 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:18:56.263763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:18:56.265435 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:18:56.266186 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:18:56.266237 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:18:56.266281 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:18:56.285012 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:18:56.289140 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:18:56.294324 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890) Sep 9 00:18:56.294352 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:18:56.294365 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:18:56.297487 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:18:56.297514 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:18:56.299495 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 00:18:56.333748 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:18:56.339218 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:18:56.344743 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:18:56.351086 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:18:56.474780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:18:56.477101 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:18:56.478656 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:18:56.532666 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:18:56.534016 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:18:56.549340 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 00:18:56.597746 ignition[1003]: INFO : Ignition 2.21.0 Sep 9 00:18:56.597746 ignition[1003]: INFO : Stage: mount Sep 9 00:18:56.599751 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:18:56.599751 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:18:56.599751 ignition[1003]: INFO : mount: mount passed Sep 9 00:18:56.599751 ignition[1003]: INFO : Ignition finished successfully Sep 9 00:18:56.601952 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:18:56.604808 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:18:57.119419 systemd-networkd[859]: eth0: Gained IPv6LL Sep 9 00:18:57.261355 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:18:57.303993 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017) Sep 9 00:18:57.304056 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:18:57.304068 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:18:57.308850 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:18:57.308874 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:18:57.311287 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 00:18:57.357641 ignition[1034]: INFO : Ignition 2.21.0 Sep 9 00:18:57.357641 ignition[1034]: INFO : Stage: files Sep 9 00:18:57.360218 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:18:57.361894 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:18:57.364295 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:18:57.365855 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:18:57.365855 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:18:57.371021 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:18:57.372739 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:18:57.374738 unknown[1034]: wrote ssh authorized keys file for user: core Sep 9 00:18:57.376234 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:18:57.377999 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:18:57.377999 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 9 00:18:57.436980 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:18:57.642751 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:18:57.642751 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:18:57.646949 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 00:18:57.876116 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 00:18:58.215516 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:18:58.215516 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:18:58.219287 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:18:58.220965 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:18:58.222936 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:18:58.224609 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:18:58.226489 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:18:58.228225 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:18:58.229972 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:18:58.236846 ignition[1034]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:18:58.238887 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:18:58.240728 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:18:58.244478 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:18:58.244478 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:18:58.249174 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 9 00:18:58.644630 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 00:18:59.450912 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:18:59.450912 ignition[1034]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 00:18:59.455020 ignition[1034]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:18:59.462485 ignition[1034]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:18:59.462485 ignition[1034]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 00:18:59.462485 ignition[1034]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 9 00:18:59.466989 ignition[1034]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:18:59.466989 ignition[1034]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:18:59.466989 ignition[1034]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 9 00:18:59.466989 ignition[1034]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:18:59.487014 ignition[1034]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:18:59.494170 ignition[1034]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:18:59.495723 ignition[1034]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:18:59.495723 ignition[1034]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:18:59.495723 ignition[1034]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:18:59.495723 ignition[1034]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:18:59.495723 ignition[1034]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Sep 9 00:18:59.495723 ignition[1034]: INFO : files: files passed Sep 9 00:18:59.495723 ignition[1034]: INFO : Ignition finished successfully Sep 9 00:18:59.501379 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:18:59.503955 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:18:59.506400 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:18:59.534483 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:18:59.534635 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:18:59.538464 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:18:59.543466 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:18:59.545271 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:18:59.545271 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:18:59.549912 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:18:59.550596 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:18:59.553662 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:18:59.604884 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:18:59.605072 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:18:59.605969 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:18:59.608549 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:18:59.608915 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:18:59.611868 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:18:59.639464 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:18:59.642976 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:18:59.677578 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:18:59.678127 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:18:59.681287 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:18:59.683466 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:18:59.683597 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:18:59.686769 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:18:59.687593 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:18:59.690266 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:18:59.692056 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:18:59.694012 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:18:59.696087 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:18:59.696412 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Sep 9 00:18:59.700096 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:18:59.701967 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:18:59.702501 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:18:59.706242 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:18:59.708385 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:18:59.708501 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:18:59.709420 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:18:59.712269 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:18:59.714195 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:18:59.715998 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:18:59.716665 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:18:59.716772 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:18:59.717491 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:18:59.717600 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:18:59.722625 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:18:59.724420 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:18:59.729305 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:18:59.732274 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:18:59.732852 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:18:59.733253 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:18:59.733393 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:18:59.733796 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:18:59.733905 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:18:59.739625 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:18:59.739821 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:18:59.740913 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:18:59.741056 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:18:59.744261 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:18:59.747025 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:18:59.747204 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:18:59.762783 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:18:59.764632 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:18:59.764800 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:18:59.767423 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:18:59.767575 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:18:59.773844 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:18:59.776280 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 9 00:18:59.779976 ignition[1090]: INFO : Ignition 2.21.0 Sep 9 00:18:59.779976 ignition[1090]: INFO : Stage: umount Sep 9 00:18:59.779976 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:18:59.779976 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:18:59.783982 ignition[1090]: INFO : umount: umount passed Sep 9 00:18:59.783982 ignition[1090]: INFO : Ignition finished successfully Sep 9 00:18:59.786349 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:18:59.786519 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:18:59.788284 systemd[1]: Stopped target network.target - Network. Sep 9 00:18:59.788607 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:18:59.788659 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:18:59.789000 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:18:59.789044 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:18:59.793509 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:18:59.793571 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:18:59.795550 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:18:59.795602 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:18:59.796040 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:18:59.799726 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:18:59.805093 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:18:59.809935 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:18:59.810098 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:18:59.815930 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 00:18:59.816425 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:18:59.816514 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:18:59.821059 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:18:59.823059 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:18:59.823246 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:18:59.827461 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 00:18:59.827646 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 00:18:59.831379 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:18:59.831452 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:18:59.834906 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:18:59.835537 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:18:59.835594 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:18:59.835979 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:18:59.836036 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:18:59.841879 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:18:59.841946 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 9 00:18:59.842697 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:18:59.844683 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:18:59.864267 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:18:59.864512 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:18:59.866283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:18:59.866367 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:18:59.868210 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:18:59.868260 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:18:59.868606 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:18:59.868667 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:18:59.869398 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:18:59.869453 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:18:59.877193 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:18:59.877254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:18:59.884358 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:18:59.886557 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 00:18:59.886643 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:18:59.889987 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:18:59.890039 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:18:59.893373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:18:59.894448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:18:59.897450 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:18:59.897572 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:18:59.898332 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:18:59.898433 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:19:00.268792 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:19:00.268987 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:19:00.270040 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:19:00.286306 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:19:00.286420 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:19:00.289575 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:19:00.323438 systemd[1]: Switching root. Sep 9 00:19:00.366519 systemd-journald[220]: Journal stopped Sep 9 00:19:01.759465 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
Sep 9 00:19:01.759553 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:19:01.759571 kernel: SELinux: policy capability open_perms=1 Sep 9 00:19:01.759592 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:19:01.759607 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:19:01.759628 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:19:01.759649 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:19:01.759664 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:19:01.759679 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:19:01.759693 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 00:19:01.759715 kernel: audit: type=1403 audit(1757377140.900:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:19:01.759731 systemd[1]: Successfully loaded SELinux policy in 45.169ms. Sep 9 00:19:01.759763 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.141ms. Sep 9 00:19:01.759780 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:19:01.759797 systemd[1]: Detected virtualization kvm. Sep 9 00:19:01.759812 systemd[1]: Detected architecture x86-64. Sep 9 00:19:01.759827 systemd[1]: Detected first boot. Sep 9 00:19:01.759843 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:19:01.759859 zram_generator::config[1135]: No configuration found. Sep 9 00:19:01.759876 kernel: Guest personality initialized and is inactive Sep 9 00:19:01.759894 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 00:19:01.759908 kernel: Initialized host personality Sep 9 00:19:01.759923 kernel: NET: Registered PF_VSOCK protocol family Sep 9 00:19:01.759939 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:19:01.759955 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 00:19:01.759971 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:19:01.759988 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:19:01.760004 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:19:01.760020 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:19:01.760039 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:19:01.760055 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:19:01.760071 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:19:01.760086 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:19:01.760112 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:19:01.760128 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:19:01.760144 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:19:01.760176 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:19:01.760193 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 9 00:19:01.760213 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:19:01.760229 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:19:01.760246 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:19:01.760263 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:19:01.760279 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:19:01.760294 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:19:01.760310 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:19:01.760332 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:19:01.760351 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:19:01.760367 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:19:01.760382 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:19:01.760398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:19:01.760420 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:19:01.760436 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:19:01.760453 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:19:01.760468 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:19:01.760484 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:19:01.760503 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 00:19:01.760530 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:19:01.760546 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:19:01.760562 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:19:01.760578 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:19:01.760593 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:19:01.760609 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:19:01.760624 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:19:01.760640 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:01.760659 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:19:01.760675 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:19:01.760690 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:19:01.760726 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:19:01.760742 systemd[1]: Reached target machines.target - Containers. Sep 9 00:19:01.760757 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:19:01.760773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:19:01.760789 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 9 00:19:01.760808 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:19:01.760823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:19:01.760838 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:19:01.760853 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:19:01.760869 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:19:01.760896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:19:01.760912 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:19:01.760927 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:19:01.760946 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:19:01.760962 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:19:01.760978 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:19:01.760993 kernel: fuse: init (API version 7.41) Sep 9 00:19:01.761009 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:19:01.761025 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:19:01.761040 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:19:01.761056 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:19:01.761075 kernel: ACPI: bus type drm_connector registered Sep 9 00:19:01.761089 kernel: loop: module loaded Sep 9 00:19:01.761113 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:19:01.761131 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 00:19:01.761176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:19:01.761193 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:19:01.761212 systemd[1]: Stopped verity-setup.service. Sep 9 00:19:01.761228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:01.761245 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:19:01.761261 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:19:01.761277 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:19:01.761293 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:19:01.761308 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:19:01.761351 systemd-journald[1213]: Collecting audit messages is disabled. Sep 9 00:19:01.761383 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:19:01.761400 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:19:01.761415 systemd-journald[1213]: Journal started Sep 9 00:19:01.761447 systemd-journald[1213]: Runtime Journal (/run/log/journal/7183bded438540ca98668e764ff89d24) is 6M, max 48.5M, 42.4M free. Sep 9 00:19:01.488085 systemd[1]: Queued start job for default target multi-user.target. 
Sep 9 00:19:01.514456 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:19:01.515023 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:19:01.764318 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:19:01.765716 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:19:01.767409 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:19:01.767644 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:19:01.769104 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:19:01.769352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:19:01.770784 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:19:01.771008 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:19:01.772470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:19:01.772690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:19:01.774255 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:19:01.774475 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:19:01.775846 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:19:01.776057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:19:01.777478 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:19:01.779020 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:19:01.780693 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:19:01.782342 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 00:19:01.799799 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:19:01.802802 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:19:01.805011 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:19:01.806140 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:19:01.806190 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:19:01.808238 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 00:19:01.820332 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:19:01.821885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:19:01.823826 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:19:01.827340 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:19:01.828597 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:19:01.829830 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:19:01.831139 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:19:01.833360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 9 00:19:01.836381 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:19:01.841754 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:19:01.844853 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:19:01.846342 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:19:01.858739 systemd-journald[1213]: Time spent on flushing to /var/log/journal/7183bded438540ca98668e764ff89d24 is 27.296ms for 1074 entries. Sep 9 00:19:01.858739 systemd-journald[1213]: System Journal (/var/log/journal/7183bded438540ca98668e764ff89d24) is 8M, max 195.6M, 187.6M free. Sep 9 00:19:01.980963 systemd-journald[1213]: Received client request to flush runtime journal. Sep 9 00:19:01.981254 kernel: loop0: detected capacity change from 0 to 113872 Sep 9 00:19:01.981295 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:19:01.913912 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:19:01.928559 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:19:01.932579 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:19:01.935753 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 00:19:01.949794 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:19:01.983570 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:19:01.994272 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:19:01.996098 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 00:19:02.000621 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:19:02.002341 kernel: loop1: detected capacity change from 0 to 229808 Sep 9 00:19:02.028199 kernel: loop2: detected capacity change from 0 to 146240 Sep 9 00:19:02.040891 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 9 00:19:02.040909 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 9 00:19:02.053521 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:19:02.090914 kernel: loop3: detected capacity change from 0 to 113872 Sep 9 00:19:02.102183 kernel: loop4: detected capacity change from 0 to 229808 Sep 9 00:19:02.114248 kernel: loop5: detected capacity change from 0 to 146240 Sep 9 00:19:02.142128 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:19:02.142750 (sd-merge)[1277]: Merged extensions into '/usr'. Sep 9 00:19:02.149985 systemd[1]: Reload requested from client PID 1254 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:19:02.150141 systemd[1]: Reloading... Sep 9 00:19:02.220193 zram_generator::config[1300]: No configuration found. Sep 9 00:19:02.363180 ldconfig[1249]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:19:02.364690 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:19:02.458125 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:19:02.458282 systemd[1]: Reloading finished in 307 ms. 
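The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr; the kubernetes image is the one Ignition downloaded and symlinked into /etc/extensions earlier in this log. As a hedged illustration of the same mechanism done by hand (these commands are not taken from this log), enabling such an image amounts to:

    # place (or link) the extension image where systemd-sysext looks for it
    ln -s /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw /etc/extensions/kubernetes.raw
    # re-merge all extension images into the /usr overlay
    systemd-sysext refresh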
Sep 9 00:19:02.487238 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:19:02.488998 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:19:02.508199 systemd[1]: Starting ensure-sysext.service... Sep 9 00:19:02.510350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:19:02.525211 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:19:02.525234 systemd[1]: Reloading... Sep 9 00:19:02.545583 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 00:19:02.545656 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 00:19:02.546020 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:19:02.546478 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:19:02.547744 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:19:02.548119 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Sep 9 00:19:02.548215 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Sep 9 00:19:02.556035 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:19:02.556046 systemd-tmpfiles[1341]: Skipping /boot Sep 9 00:19:02.636007 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:19:02.636219 systemd-tmpfiles[1341]: Skipping /boot Sep 9 00:19:02.654173 zram_generator::config[1364]: No configuration found. Sep 9 00:19:02.876057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:19:02.972860 systemd[1]: Reloading finished in 447 ms. Sep 9 00:19:02.999389 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:19:03.017532 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:19:03.031345 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:19:03.034839 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:19:03.062883 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:19:03.068801 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:19:03.074076 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:19:03.077241 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:19:03.082348 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:03.082525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:19:03.090541 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:19:03.095453 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 9 00:19:03.100649 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:19:03.102495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:19:03.104300 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:19:03.104400 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:03.105829 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:19:03.107948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:19:03.108200 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:19:03.110243 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:19:03.110492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:19:03.112378 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:19:03.112599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:19:03.123866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:03.124129 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:19:03.126391 augenrules[1439]: No rules Sep 9 00:19:03.126918 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:19:03.129541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:19:03.130875 systemd-udevd[1417]: Using default interface naming scheme 'v255'. Sep 9 00:19:03.133430 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:19:03.134732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:19:03.134850 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:19:03.140766 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:19:03.143869 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:19:03.144951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:03.147367 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:19:03.147719 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:19:03.149995 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:19:03.152683 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:19:03.154919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:19:03.155185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:19:03.157176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 9 00:19:03.157470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:19:03.159520 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:19:03.159772 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:19:03.161521 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:19:03.163702 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:19:03.187751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:03.190046 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:19:03.191227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:19:03.193296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:19:03.195441 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:19:03.203583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:19:03.207598 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:19:03.209415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:19:03.209540 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:19:03.220500 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:19:03.221663 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:19:03.221772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:19:03.223682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:19:03.223938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:19:03.227572 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:19:03.227807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:19:03.234265 systemd[1]: Finished ensure-sysext.service. Sep 9 00:19:03.235530 augenrules[1484]: /sbin/augenrules: No change Sep 9 00:19:03.241883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:19:03.245362 augenrules[1516]: No rules Sep 9 00:19:03.252346 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:19:03.254113 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:19:03.256063 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:19:03.256376 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:19:03.267004 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:19:03.273371 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:19:03.293064 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Sep 9 00:19:03.305503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:19:03.305593 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:19:03.308830 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:19:03.322919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:19:03.329932 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:19:03.360722 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:19:03.363198 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:19:03.369184 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 00:19:03.383200 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:19:03.408007 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 00:19:03.408354 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 00:19:03.410178 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 00:19:03.449920 systemd-networkd[1496]: lo: Link UP Sep 9 00:19:03.449937 systemd-networkd[1496]: lo: Gained carrier Sep 9 00:19:03.451799 systemd-networkd[1496]: Enumeration completed Sep 9 00:19:03.451961 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:19:03.453104 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:19:03.453117 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:19:03.453776 systemd-networkd[1496]: eth0: Link UP Sep 9 00:19:03.453963 systemd-networkd[1496]: eth0: Gained carrier Sep 9 00:19:03.453990 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:19:03.458092 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 00:19:03.461903 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:19:03.467267 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:19:03.474899 systemd-resolved[1410]: Positive Trust Anchors: Sep 9 00:19:03.474915 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:19:03.474947 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:19:03.478701 systemd-resolved[1410]: Defaulting to hostname 'linux'. Sep 9 00:19:03.480671 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
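eth0 above is matched by the lowest-priority catch-all /usr/lib/systemd/network/zz-default.network and acquires 10.0.0.67/16 over DHCPv4. A sketch of what such a catch-all DHCP policy typically contains (illustrative shape only, not the shipped file verbatim):

    # catch-all DHCP policy of the zz-default.network kind matched above (illustrative)
    [Match]
    Name=*

    [Network]
    DHCP=yes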
Sep 9 00:19:03.481908 systemd[1]: Reached target network.target - Network. Sep 9 00:19:03.482814 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:19:03.510313 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:19:04.243987 systemd-timesyncd[1533]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:19:04.244073 systemd-timesyncd[1533]: Initial clock synchronization to Tue 2025-09-09 00:19:04.242494 UTC. Sep 9 00:19:04.244120 systemd-resolved[1410]: Clock change detected. Flushing caches. Sep 9 00:19:04.244775 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 00:19:04.247641 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:19:04.251230 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:19:04.252610 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:19:04.254883 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 00:19:04.257426 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:19:04.258779 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:19:04.258831 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:19:04.259872 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:19:04.262139 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:19:04.263412 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:19:04.264702 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:19:04.267879 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:19:04.272524 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:19:04.281165 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 00:19:04.284098 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 00:19:04.285863 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 00:19:04.325951 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:19:04.328034 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 00:19:04.330247 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:19:04.336868 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:19:04.338510 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:19:04.340045 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:19:04.340136 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:19:04.342993 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:19:04.348083 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:19:04.352074 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:19:04.357292 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
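Several of the units above are socket-activated (dbus, docker, sshd, systemd-hostnamed): the .socket unit listens and the matching service starts only on first connection. Illustrative inspection commands for this state:

    systemctl list-sockets                       # listening sockets and the units they trigger
    systemctl status docker.socket sshd.socket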
Sep 9 00:19:04.359723 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:19:04.360904 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:19:04.365885 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 00:19:04.375587 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:19:04.379201 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:19:04.380224 jq[1564]: false Sep 9 00:19:04.384846 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:19:04.391228 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:19:04.394094 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache Sep 9 00:19:04.394122 oslogin_cache_refresh[1566]: Refreshing passwd entry cache Sep 9 00:19:04.398949 extend-filesystems[1565]: Found /dev/vda6 Sep 9 00:19:04.404622 extend-filesystems[1565]: Found /dev/vda9 Sep 9 00:19:04.407148 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting Sep 9 00:19:04.407234 extend-filesystems[1565]: Checking size of /dev/vda9 Sep 9 00:19:04.408139 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:19:04.408139 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache Sep 9 00:19:04.407678 oslogin_cache_refresh[1566]: Failure getting users, quitting Sep 9 00:19:04.407709 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:19:04.407788 oslogin_cache_refresh[1566]: Refreshing group entry cache Sep 9 00:19:04.408430 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:19:04.411845 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:19:04.412544 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:19:04.413456 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:19:04.414954 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting Sep 9 00:19:04.415039 oslogin_cache_refresh[1566]: Failure getting groups, quitting Sep 9 00:19:04.415097 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:19:04.415142 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:19:04.464565 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:19:04.472788 kernel: kvm_amd: TSC scaling supported Sep 9 00:19:04.472844 kernel: kvm_amd: Nested Virtualization enabled Sep 9 00:19:04.472872 kernel: kvm_amd: Nested Paging enabled Sep 9 00:19:04.472888 kernel: kvm_amd: LBR virtualization supported Sep 9 00:19:04.473469 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 00:19:04.474819 kernel: kvm_amd: Virtual GIF supported Sep 9 00:19:04.473388 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Sep 9 00:19:04.475430 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:19:04.475924 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:19:04.478521 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 00:19:04.479077 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 00:19:04.482453 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:19:04.483030 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:19:04.485817 extend-filesystems[1565]: Resized partition /dev/vda9 Sep 9 00:19:04.489377 extend-filesystems[1593]: resize2fs 1.47.2 (1-Jan-2025) Sep 9 00:19:04.489466 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:19:04.489787 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:19:04.496780 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:19:04.525127 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:19:04.526284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:19:04.553681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:19:04.554150 update_engine[1579]: I20250909 00:19:04.536710 1579 main.cc:92] Flatcar Update Engine starting Sep 9 00:19:04.554516 tar[1592]: linux-amd64/LICENSE Sep 9 00:19:04.555019 jq[1588]: true Sep 9 00:19:04.563387 extend-filesystems[1593]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:19:04.563387 extend-filesystems[1593]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:19:04.563387 extend-filesystems[1593]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:19:04.571648 tar[1592]: linux-amd64/helm Sep 9 00:19:04.557153 (ntainerd)[1603]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:19:04.572119 extend-filesystems[1565]: Resized filesystem in /dev/vda9 Sep 9 00:19:04.558036 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:19:04.573373 jq[1604]: true Sep 9 00:19:04.566922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:19:04.570111 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:19:04.570542 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:19:04.575357 systemd-logind[1576]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 00:19:04.576575 systemd-logind[1576]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:19:04.582840 systemd-logind[1576]: New seat seat0. Sep 9 00:19:04.587324 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:19:04.618961 dbus-daemon[1562]: [system] SELinux support is enabled Sep 9 00:19:04.619480 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:19:04.638527 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:19:04.654496 dbus-daemon[1562]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 00:19:04.638548 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
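The extend-filesystems step above grows the root ext4 filesystem in place (553472 to 1864699 4k blocks) while it is mounted. A manual equivalent, assuming /dev/vda9 is the mounted root device as in this log:

    lsblk /dev/vda9          # confirm the enlarged partition
    resize2fs /dev/vda9      # online-grow a mounted ext4 filesystem to fill its partition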
Sep 9 00:19:04.639859 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:19:04.639875 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:19:04.658808 kernel: EDAC MC: Ver: 3.0.0 Sep 9 00:19:04.659930 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:19:04.664926 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:19:04.666380 update_engine[1579]: I20250909 00:19:04.665879 1579 update_check_scheduler.cc:74] Next update check in 10m52s Sep 9 00:19:04.694944 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:19:04.711330 bash[1631]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:19:04.724430 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:19:04.731120 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:19:04.756516 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:19:04.761145 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:19:04.768354 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:19:04.804942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:19:04.806722 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:19:04.807014 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:19:04.811084 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:19:04.888117 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:19:04.893687 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:19:04.897077 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 00:19:04.898973 systemd[1]: Reached target getty.target - Login Prompts. 
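The sshd-keygen step above creates the missing RSA/ECDSA/ED25519 host keys on first boot; by hand this is roughly:

    ssh-keygen -A                      # generate any missing default host keys under /etc/ssh
    ls /etc/ssh/ssh_host_*_key.pub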
Sep 9 00:19:04.972838 containerd[1603]: time="2025-09-09T00:19:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 00:19:04.973860 containerd[1603]: time="2025-09-09T00:19:04.973827173Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 9 00:19:04.995677 containerd[1603]: time="2025-09-09T00:19:04.995593182Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.721µs" Sep 9 00:19:04.995677 containerd[1603]: time="2025-09-09T00:19:04.995669125Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 00:19:04.995832 containerd[1603]: time="2025-09-09T00:19:04.995696967Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 00:19:04.996048 containerd[1603]: time="2025-09-09T00:19:04.996014963Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 00:19:04.996048 containerd[1603]: time="2025-09-09T00:19:04.996044378Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 00:19:04.996112 containerd[1603]: time="2025-09-09T00:19:04.996079534Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 00:19:04.996200 containerd[1603]: time="2025-09-09T00:19:04.996172679Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 00:19:04.996200 containerd[1603]: time="2025-09-09T00:19:04.996193037Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 00:19:04.996678 containerd[1603]: time="2025-09-09T00:19:04.996648191Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 00:19:04.996678 containerd[1603]: time="2025-09-09T00:19:04.996674240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 00:19:04.996739 containerd[1603]: time="2025-09-09T00:19:04.996689789Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 00:19:04.996739 containerd[1603]: time="2025-09-09T00:19:04.996700759Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 00:19:04.997114 containerd[1603]: time="2025-09-09T00:19:04.997078638Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 00:19:04.997473 containerd[1603]: time="2025-09-09T00:19:04.997450516Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 00:19:04.997508 containerd[1603]: time="2025-09-09T00:19:04.997491452Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 9 00:19:04.997536 containerd[1603]: time="2025-09-09T00:19:04.997507913Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 00:19:04.997586 containerd[1603]: time="2025-09-09T00:19:04.997562966Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 00:19:04.997956 containerd[1603]: time="2025-09-09T00:19:04.997925697Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 00:19:04.998042 containerd[1603]: time="2025-09-09T00:19:04.998018591Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:19:05.413088 tar[1592]: linux-amd64/README.md Sep 9 00:19:05.441580 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:19:05.453447 containerd[1603]: time="2025-09-09T00:19:05.453244843Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 00:19:05.453447 containerd[1603]: time="2025-09-09T00:19:05.453350932Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 00:19:05.453447 containerd[1603]: time="2025-09-09T00:19:05.453369627Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 00:19:05.453447 containerd[1603]: time="2025-09-09T00:19:05.453387781Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 00:19:05.453447 containerd[1603]: time="2025-09-09T00:19:05.453402308Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 00:19:05.453447 containerd[1603]: time="2025-09-09T00:19:05.453413238Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 00:19:05.453447 containerd[1603]: time="2025-09-09T00:19:05.453430942Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 00:19:05.453641 containerd[1603]: time="2025-09-09T00:19:05.453457121Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 00:19:05.453641 containerd[1603]: time="2025-09-09T00:19:05.453473712Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 00:19:05.453641 containerd[1603]: time="2025-09-09T00:19:05.453483741Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 00:19:05.453641 containerd[1603]: time="2025-09-09T00:19:05.453493609Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 00:19:05.453641 containerd[1603]: time="2025-09-09T00:19:05.453511663Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 00:19:05.453733 containerd[1603]: time="2025-09-09T00:19:05.453703543Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 00:19:05.453776 containerd[1603]: time="2025-09-09T00:19:05.453736775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 00:19:05.453776 containerd[1603]: time="2025-09-09T00:19:05.453769256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 
00:19:05.453812 containerd[1603]: time="2025-09-09T00:19:05.453782280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 00:19:05.453812 containerd[1603]: time="2025-09-09T00:19:05.453795565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 00:19:05.453812 containerd[1603]: time="2025-09-09T00:19:05.453807438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 00:19:05.453879 containerd[1603]: time="2025-09-09T00:19:05.453822526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 00:19:05.453879 containerd[1603]: time="2025-09-09T00:19:05.453839317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 00:19:05.453879 containerd[1603]: time="2025-09-09T00:19:05.453853795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 00:19:05.453879 containerd[1603]: time="2025-09-09T00:19:05.453866328Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 00:19:05.453879 containerd[1603]: time="2025-09-09T00:19:05.453876918Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 00:19:05.453978 containerd[1603]: time="2025-09-09T00:19:05.453968229Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 00:19:05.453998 containerd[1603]: time="2025-09-09T00:19:05.453985482Z" level=info msg="Start snapshots syncer" Sep 9 00:19:05.454032 containerd[1603]: time="2025-09-09T00:19:05.454016951Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 00:19:05.454335 containerd[1603]: time="2025-09-09T00:19:05.454287167Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 00:19:05.454443 containerd[1603]: time="2025-09-09T00:19:05.454350456Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 00:19:05.454466 containerd[1603]: time="2025-09-09T00:19:05.454443100Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 00:19:05.454574 containerd[1603]: time="2025-09-09T00:19:05.454547025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 00:19:05.454574 containerd[1603]: time="2025-09-09T00:19:05.454570068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 00:19:05.454621 containerd[1603]: time="2025-09-09T00:19:05.454582892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 00:19:05.454621 containerd[1603]: time="2025-09-09T00:19:05.454592450Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 00:19:05.454621 containerd[1603]: time="2025-09-09T00:19:05.454604332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 00:19:05.454683 containerd[1603]: time="2025-09-09T00:19:05.454625362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 00:19:05.454683 containerd[1603]: time="2025-09-09T00:19:05.454635631Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 00:19:05.454683 containerd[1603]: time="2025-09-09T00:19:05.454661950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 00:19:05.454683 containerd[1603]: 
time="2025-09-09T00:19:05.454672039Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 00:19:05.454778 containerd[1603]: time="2025-09-09T00:19:05.454683541Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 00:19:05.454778 containerd[1603]: time="2025-09-09T00:19:05.454721261Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:19:05.454778 containerd[1603]: time="2025-09-09T00:19:05.454738744Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:19:05.454778 containerd[1603]: time="2025-09-09T00:19:05.454763080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:19:05.454778 containerd[1603]: time="2025-09-09T00:19:05.454773429Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:19:05.454865 containerd[1603]: time="2025-09-09T00:19:05.454780883Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 00:19:05.454865 containerd[1603]: time="2025-09-09T00:19:05.454790191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 00:19:05.454865 containerd[1603]: time="2025-09-09T00:19:05.454800951Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 00:19:05.454865 containerd[1603]: time="2025-09-09T00:19:05.454819175Z" level=info msg="runtime interface created" Sep 9 00:19:05.454865 containerd[1603]: time="2025-09-09T00:19:05.454824234Z" level=info msg="created NRI interface" Sep 9 00:19:05.454865 containerd[1603]: time="2025-09-09T00:19:05.454860723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 00:19:05.454980 containerd[1603]: time="2025-09-09T00:19:05.454873006Z" level=info msg="Connect containerd service" Sep 9 00:19:05.454980 containerd[1603]: time="2025-09-09T00:19:05.454901820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:19:05.455825 containerd[1603]: time="2025-09-09T00:19:05.455795857Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:19:05.929633 containerd[1603]: time="2025-09-09T00:19:05.929491796Z" level=info msg="Start subscribing containerd event" Sep 9 00:19:05.929873 containerd[1603]: time="2025-09-09T00:19:05.929677043Z" level=info msg="Start recovering state" Sep 9 00:19:05.930141 containerd[1603]: time="2025-09-09T00:19:05.930090458Z" level=info msg="Start event monitor" Sep 9 00:19:05.930141 containerd[1603]: time="2025-09-09T00:19:05.930116517Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 9 00:19:05.930298 containerd[1603]: time="2025-09-09T00:19:05.930144119Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:19:05.930298 containerd[1603]: time="2025-09-09T00:19:05.930162934Z" level=info msg="Start streaming server" Sep 9 00:19:05.930298 containerd[1603]: time="2025-09-09T00:19:05.930230822Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 00:19:05.930298 containerd[1603]: time="2025-09-09T00:19:05.930256089Z" level=info msg="runtime interface starting up..." Sep 9 00:19:05.930298 containerd[1603]: time="2025-09-09T00:19:05.930267510Z" level=info msg="starting plugins..." Sep 9 00:19:05.930521 containerd[1603]: time="2025-09-09T00:19:05.930293219Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:19:05.930521 containerd[1603]: time="2025-09-09T00:19:05.930303438Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 00:19:05.930878 containerd[1603]: time="2025-09-09T00:19:05.930817943Z" level=info msg="containerd successfully booted in 0.958751s" Sep 9 00:19:05.931008 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:19:05.961120 systemd-networkd[1496]: eth0: Gained IPv6LL Sep 9 00:19:05.966230 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:19:05.968389 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:19:05.972303 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:19:05.976451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:05.980028 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:19:06.021155 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:19:06.023497 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:19:06.023864 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:19:06.026956 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:19:06.910367 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:19:06.913122 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:52680.service - OpenSSH per-connection server daemon (10.0.0.1:52680). Sep 9 00:19:07.012352 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 52680 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:19:07.015841 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:07.024088 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:19:07.026469 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:19:07.035367 systemd-logind[1576]: New session 1 of user core. Sep 9 00:19:07.058961 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:19:07.063481 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:19:07.082164 (systemd)[1706]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:19:07.084925 systemd-logind[1576]: New session c1 of user core. Sep 9 00:19:07.248501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:07.250229 systemd[1]: Reached target multi-user.target - Multi-User System. 
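The containerd error above ("no network config found in /etc/cni/net.d") is expected on a node that has not yet had a CNI plugin or cluster network add-on installed; the CRI plugin retries once a config appears under /etc/cni/net.d/. Purely as an illustration of the conflist format (the file name, network name and subnet below are made-up examples, not this host's eventual configuration):

    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }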
Sep 9 00:19:07.262223 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:19:07.281851 systemd[1706]: Queued start job for default target default.target. Sep 9 00:19:07.294474 systemd[1706]: Created slice app.slice - User Application Slice. Sep 9 00:19:07.294510 systemd[1706]: Reached target paths.target - Paths. Sep 9 00:19:07.294584 systemd[1706]: Reached target timers.target - Timers. Sep 9 00:19:07.296398 systemd[1706]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:19:07.309463 systemd[1706]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:19:07.309633 systemd[1706]: Reached target sockets.target - Sockets. Sep 9 00:19:07.309679 systemd[1706]: Reached target basic.target - Basic System. Sep 9 00:19:07.309771 systemd[1706]: Reached target default.target - Main User Target. Sep 9 00:19:07.309814 systemd[1706]: Startup finished in 218ms. Sep 9 00:19:07.310816 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:19:07.313954 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:19:07.315851 systemd[1]: Startup finished in 3.879s (kernel) + 8.266s (initrd) + 5.745s (userspace) = 17.892s. Sep 9 00:19:07.380585 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:52682.service - OpenSSH per-connection server daemon (10.0.0.1:52682). Sep 9 00:19:07.432274 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 52682 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:19:07.434306 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:07.439271 systemd-logind[1576]: New session 2 of user core. Sep 9 00:19:07.450892 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:19:07.509253 sshd[1734]: Connection closed by 10.0.0.1 port 52682 Sep 9 00:19:07.509771 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:07.522941 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:52682.service: Deactivated successfully. Sep 9 00:19:07.525303 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:19:07.526064 systemd-logind[1576]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:19:07.560522 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:52694.service - OpenSSH per-connection server daemon (10.0.0.1:52694). Sep 9 00:19:07.561262 systemd-logind[1576]: Removed session 2. Sep 9 00:19:07.609545 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 52694 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:19:07.611138 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:07.616173 systemd-logind[1576]: New session 3 of user core. Sep 9 00:19:07.627884 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:19:07.678818 sshd[1743]: Connection closed by 10.0.0.1 port 52694 Sep 9 00:19:07.679997 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:07.695985 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:52694.service: Deactivated successfully. Sep 9 00:19:07.698653 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:19:07.699457 systemd-logind[1576]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:19:07.703804 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:52704.service - OpenSSH per-connection server daemon (10.0.0.1:52704). 
Sep 9 00:19:07.705845 systemd-logind[1576]: Removed session 3. Sep 9 00:19:07.755920 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 52704 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:19:07.757917 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:07.765148 systemd-logind[1576]: New session 4 of user core. Sep 9 00:19:07.781245 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:19:07.842319 sshd[1751]: Connection closed by 10.0.0.1 port 52704 Sep 9 00:19:07.842736 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:07.846821 kubelet[1717]: E0909 00:19:07.846766 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:19:07.852401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:19:07.852601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:19:07.852953 systemd[1]: kubelet.service: Consumed 1.645s CPU time, 267.8M memory peak. Sep 9 00:19:07.853392 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:52704.service: Deactivated successfully. Sep 9 00:19:07.855355 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:19:07.857041 systemd-logind[1576]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:19:07.860627 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:52718.service - OpenSSH per-connection server daemon (10.0.0.1:52718). Sep 9 00:19:07.861488 systemd-logind[1576]: Removed session 4. Sep 9 00:19:07.918186 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 52718 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:19:07.919896 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:07.925431 systemd-logind[1576]: New session 5 of user core. Sep 9 00:19:07.935993 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:19:07.996776 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:19:07.997107 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:19:08.018156 sudo[1761]: pam_unix(sudo:session): session closed for user root Sep 9 00:19:08.020059 sshd[1760]: Connection closed by 10.0.0.1 port 52718 Sep 9 00:19:08.020353 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:08.034849 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:52718.service: Deactivated successfully. Sep 9 00:19:08.036858 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:19:08.037630 systemd-logind[1576]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:19:08.040912 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:52726.service - OpenSSH per-connection server daemon (10.0.0.1:52726). Sep 9 00:19:08.041472 systemd-logind[1576]: Removed session 5. Sep 9 00:19:08.102476 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 52726 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:19:08.104100 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:08.109605 systemd-logind[1576]: New session 6 of user core. 
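The kubelet failure above is the pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until the node is provisioned (kubeadm init/join, or whatever tooling this host eventually uses, normally writes it), so the unit exits and systemd keeps rescheduling it. For illustration only, a minimal hand-written KubeletConfiguration has this shape:

    # sketch of /var/lib/kubelet/config.yaml -- normally generated during node bootstrap, not hand-written
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd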
Sep 9 00:19:08.118929 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:19:08.174964 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:19:08.175297 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:19:08.225199 sudo[1771]: pam_unix(sudo:session): session closed for user root Sep 9 00:19:08.233552 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 00:19:08.233915 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:19:08.246005 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:19:08.303133 augenrules[1793]: No rules Sep 9 00:19:08.305002 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:19:08.305297 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:19:08.306460 sudo[1770]: pam_unix(sudo:session): session closed for user root Sep 9 00:19:08.308087 sshd[1769]: Connection closed by 10.0.0.1 port 52726 Sep 9 00:19:08.308401 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:08.325302 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:52726.service: Deactivated successfully. Sep 9 00:19:08.327248 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:19:08.327993 systemd-logind[1576]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:19:08.331416 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:52740.service - OpenSSH per-connection server daemon (10.0.0.1:52740). Sep 9 00:19:08.332237 systemd-logind[1576]: Removed session 6. Sep 9 00:19:08.384134 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 52740 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:19:08.385503 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:08.389952 systemd-logind[1576]: New session 7 of user core. Sep 9 00:19:08.399898 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 00:19:08.453442 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:19:08.453821 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:19:09.287637 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:19:09.302178 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:19:09.771178 dockerd[1825]: time="2025-09-09T00:19:09.771088200Z" level=info msg="Starting up" Sep 9 00:19:09.775458 dockerd[1825]: time="2025-09-09T00:19:09.775403573Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 00:19:10.777674 dockerd[1825]: time="2025-09-09T00:19:10.777598687Z" level=info msg="Loading containers: start." Sep 9 00:19:10.792951 kernel: Initializing XFRM netlink socket Sep 9 00:19:11.099552 systemd-networkd[1496]: docker0: Link UP Sep 9 00:19:11.106666 dockerd[1825]: time="2025-09-09T00:19:11.106603408Z" level=info msg="Loading containers: done." 
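dockerd comes up here and finishes initializing in the entries just below ("API listen on /run/docker.sock"). Once docker.service is active, routine checks like these confirm the daemon version and the overlay2 storage driver the log mentions (illustrative):

    docker info --format '{{.ServerVersion}} {{.Driver}}'
    docker ps                          # an empty container list is expected on a fresh node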
Sep 9 00:19:11.134811 dockerd[1825]: time="2025-09-09T00:19:11.134744583Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:19:11.134983 dockerd[1825]: time="2025-09-09T00:19:11.134870228Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 9 00:19:11.135039 dockerd[1825]: time="2025-09-09T00:19:11.135012465Z" level=info msg="Initializing buildkit" Sep 9 00:19:11.340023 dockerd[1825]: time="2025-09-09T00:19:11.339972658Z" level=info msg="Completed buildkit initialization" Sep 9 00:19:11.345731 dockerd[1825]: time="2025-09-09T00:19:11.345658652Z" level=info msg="Daemon has completed initialization" Sep 9 00:19:11.345880 dockerd[1825]: time="2025-09-09T00:19:11.345829803Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:19:11.346111 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:19:12.441977 containerd[1603]: time="2025-09-09T00:19:12.441895066Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 00:19:13.285821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117109886.mount: Deactivated successfully. Sep 9 00:19:15.902777 containerd[1603]: time="2025-09-09T00:19:15.902706583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:15.903453 containerd[1603]: time="2025-09-09T00:19:15.903395505Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 9 00:19:15.904587 containerd[1603]: time="2025-09-09T00:19:15.904543468Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:15.907478 containerd[1603]: time="2025-09-09T00:19:15.907434610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:15.908494 containerd[1603]: time="2025-09-09T00:19:15.908453421Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 3.466469508s" Sep 9 00:19:15.908541 containerd[1603]: time="2025-09-09T00:19:15.908496542Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 9 00:19:15.909112 containerd[1603]: time="2025-09-09T00:19:15.909089133Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 00:19:17.652437 containerd[1603]: time="2025-09-09T00:19:17.652369816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:17.653124 containerd[1603]: time="2025-09-09T00:19:17.653083154Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active 
requests=0, bytes read=26018066" Sep 9 00:19:17.654323 containerd[1603]: time="2025-09-09T00:19:17.654246315Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:17.657424 containerd[1603]: time="2025-09-09T00:19:17.657369332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:17.658356 containerd[1603]: time="2025-09-09T00:19:17.658197596Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.749083185s" Sep 9 00:19:17.658572 containerd[1603]: time="2025-09-09T00:19:17.658539487Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 9 00:19:17.659384 containerd[1603]: time="2025-09-09T00:19:17.659279535Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 00:19:17.884931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:19:17.887273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:18.427938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:18.432412 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:19:18.760994 kubelet[2103]: E0909 00:19:18.760722 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:19:18.768179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:19:18.768467 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:19:18.768996 systemd[1]: kubelet.service: Consumed 310ms CPU time, 110.9M memory peak. 
Sep 9 00:19:20.343944 containerd[1603]: time="2025-09-09T00:19:20.343837239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:20.352271 containerd[1603]: time="2025-09-09T00:19:20.352211734Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 9 00:19:20.360540 containerd[1603]: time="2025-09-09T00:19:20.360473507Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:20.374076 containerd[1603]: time="2025-09-09T00:19:20.373978995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:20.375078 containerd[1603]: time="2025-09-09T00:19:20.375025518Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 2.715540849s" Sep 9 00:19:20.375078 containerd[1603]: time="2025-09-09T00:19:20.375057508Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 9 00:19:20.375813 containerd[1603]: time="2025-09-09T00:19:20.375768822Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 00:19:21.618268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3899649408.mount: Deactivated successfully. 
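The PullImage entries above are CRI-level image pulls served by containerd into its k8s.io namespace (registered earlier with NRI). Equivalent manual pulls through ctr look like this (illustrative):

    ctr --namespace k8s.io images pull registry.k8s.io/pause:3.10
    ctr --namespace k8s.io images ls | grep registry.k8s.io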
Sep 9 00:19:22.864673 containerd[1603]: time="2025-09-09T00:19:22.864569682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:22.865743 containerd[1603]: time="2025-09-09T00:19:22.865705191Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 9 00:19:22.867426 containerd[1603]: time="2025-09-09T00:19:22.867349645Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:22.872392 containerd[1603]: time="2025-09-09T00:19:22.872327751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:22.873205 containerd[1603]: time="2025-09-09T00:19:22.873138782Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.497314546s" Sep 9 00:19:22.873270 containerd[1603]: time="2025-09-09T00:19:22.873209174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 9 00:19:22.874193 containerd[1603]: time="2025-09-09T00:19:22.873916059Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 00:19:25.005508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1412971813.mount: Deactivated successfully. Sep 9 00:19:28.885167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:19:28.888223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:29.945452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:29.967142 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:19:30.134257 kubelet[2143]: E0909 00:19:30.134201 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:19:30.139317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:19:30.139596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:19:30.140115 systemd[1]: kubelet.service: Consumed 299ms CPU time, 109.3M memory peak. 
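kubelet is now on its second scheduled restart and will keep cycling until its config file appears. Watching the loop from a shell is ordinary unit inspection (illustrative):

    systemctl status kubelet.service
    journalctl -u kubelet.service --since "10 min ago" --no-pager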
Sep 9 00:19:30.646233 containerd[1603]: time="2025-09-09T00:19:30.646156685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:30.646922 containerd[1603]: time="2025-09-09T00:19:30.646863661Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 9 00:19:30.648132 containerd[1603]: time="2025-09-09T00:19:30.648096373Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:30.651164 containerd[1603]: time="2025-09-09T00:19:30.651128839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:30.652071 containerd[1603]: time="2025-09-09T00:19:30.652020792Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 7.77806565s" Sep 9 00:19:30.652071 containerd[1603]: time="2025-09-09T00:19:30.652066819Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 9 00:19:30.652641 containerd[1603]: time="2025-09-09T00:19:30.652614966Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:19:31.374165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334053051.mount: Deactivated successfully. 
Sep 9 00:19:31.388109 containerd[1603]: time="2025-09-09T00:19:31.388055159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:19:31.390114 containerd[1603]: time="2025-09-09T00:19:31.390047906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:19:31.393119 containerd[1603]: time="2025-09-09T00:19:31.393053774Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:19:31.395862 containerd[1603]: time="2025-09-09T00:19:31.395781128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:19:31.396371 containerd[1603]: time="2025-09-09T00:19:31.396336600Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 743.691758ms" Sep 9 00:19:31.396422 containerd[1603]: time="2025-09-09T00:19:31.396373098Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:19:31.396931 containerd[1603]: time="2025-09-09T00:19:31.396885600Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 00:19:32.388098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2466395828.mount: Deactivated successfully. 
Sep 9 00:19:38.578171 containerd[1603]: time="2025-09-09T00:19:38.578085858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:38.578862 containerd[1603]: time="2025-09-09T00:19:38.578810847Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 9 00:19:38.580575 containerd[1603]: time="2025-09-09T00:19:38.580531980Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:38.584496 containerd[1603]: time="2025-09-09T00:19:38.584461309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:38.585573 containerd[1603]: time="2025-09-09T00:19:38.585512955Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 7.188594467s" Sep 9 00:19:38.585573 containerd[1603]: time="2025-09-09T00:19:38.585556168Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 9 00:19:40.384905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 00:19:40.386965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:40.613308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:40.635069 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:19:40.695037 kubelet[2282]: E0909 00:19:40.694959 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:19:40.699420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:19:40.699660 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:19:40.700118 systemd[1]: kubelet.service: Consumed 252ms CPU time, 108.9M memory peak. Sep 9 00:19:42.174314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:42.174484 systemd[1]: kubelet.service: Consumed 252ms CPU time, 108.9M memory peak. Sep 9 00:19:42.176849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:42.205217 systemd[1]: Reload requested from client PID 2297 ('systemctl') (unit session-7.scope)... Sep 9 00:19:42.205245 systemd[1]: Reloading... Sep 9 00:19:42.323867 zram_generator::config[2342]: No configuration found. Sep 9 00:19:43.171145 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:19:43.292989 systemd[1]: Reloading finished in 1087 ms. 
Sep 9 00:19:43.371717 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:19:43.371869 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:19:43.372228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:43.372292 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.2M memory peak. Sep 9 00:19:43.374036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:43.601227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:43.618137 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:19:43.658958 kubelet[2387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:19:43.658958 kubelet[2387]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:19:43.658958 kubelet[2387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:19:43.659401 kubelet[2387]: I0909 00:19:43.659000 2387 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:19:44.489798 kubelet[2387]: I0909 00:19:44.489719 2387 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:19:44.489798 kubelet[2387]: I0909 00:19:44.489778 2387 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:19:44.490031 kubelet[2387]: I0909 00:19:44.490013 2387 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:19:44.523366 kubelet[2387]: E0909 00:19:44.523296 2387 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:19:44.526295 kubelet[2387]: I0909 00:19:44.526244 2387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:19:44.534085 kubelet[2387]: I0909 00:19:44.534023 2387 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:19:44.540856 kubelet[2387]: I0909 00:19:44.540824 2387 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:19:44.541229 kubelet[2387]: I0909 00:19:44.541173 2387 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:19:44.541497 kubelet[2387]: I0909 00:19:44.541207 2387 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:19:44.541652 kubelet[2387]: I0909 00:19:44.541551 2387 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:19:44.541652 kubelet[2387]: I0909 00:19:44.541568 2387 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:19:44.543272 kubelet[2387]: I0909 00:19:44.543225 2387 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:19:44.551073 kubelet[2387]: I0909 00:19:44.551018 2387 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:19:44.551133 kubelet[2387]: I0909 00:19:44.551082 2387 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:19:44.551164 kubelet[2387]: I0909 00:19:44.551138 2387 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:19:44.551164 kubelet[2387]: I0909 00:19:44.551160 2387 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:19:44.552087 kubelet[2387]: E0909 00:19:44.552038 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:19:44.552182 kubelet[2387]: E0909 00:19:44.552061 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:19:44.556806 kubelet[2387]: 
I0909 00:19:44.556780 2387 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:19:44.557362 kubelet[2387]: I0909 00:19:44.557324 2387 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:19:44.558389 kubelet[2387]: W0909 00:19:44.558348 2387 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:19:44.561636 kubelet[2387]: I0909 00:19:44.561603 2387 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:19:44.561705 kubelet[2387]: I0909 00:19:44.561680 2387 server.go:1289] "Started kubelet" Sep 9 00:19:44.565697 kubelet[2387]: I0909 00:19:44.564811 2387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:19:44.565697 kubelet[2387]: I0909 00:19:44.565078 2387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:19:44.565697 kubelet[2387]: I0909 00:19:44.564804 2387 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:19:44.565697 kubelet[2387]: I0909 00:19:44.565456 2387 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:19:44.566715 kubelet[2387]: I0909 00:19:44.566682 2387 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:19:44.567295 kubelet[2387]: I0909 00:19:44.567258 2387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:19:44.571031 kubelet[2387]: E0909 00:19:44.568826 2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863753d062c0ed9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:19:44.561630937 +0000 UTC m=+0.935304890,LastTimestamp:2025-09-09 00:19:44.561630937 +0000 UTC m=+0.935304890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:19:44.571575 kubelet[2387]: E0909 00:19:44.571253 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:44.571575 kubelet[2387]: I0909 00:19:44.571297 2387 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:19:44.571575 kubelet[2387]: I0909 00:19:44.571449 2387 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:19:44.571575 kubelet[2387]: I0909 00:19:44.571504 2387 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:19:44.571825 kubelet[2387]: E0909 00:19:44.571801 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:19:44.571983 
kubelet[2387]: I0909 00:19:44.571955 2387 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:19:44.572053 kubelet[2387]: I0909 00:19:44.572034 2387 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:19:44.572380 kubelet[2387]: E0909 00:19:44.572198 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms" Sep 9 00:19:44.572566 kubelet[2387]: E0909 00:19:44.572541 2387 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:19:44.573322 kubelet[2387]: I0909 00:19:44.573287 2387 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:19:44.590685 kubelet[2387]: I0909 00:19:44.590634 2387 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:19:44.592150 kubelet[2387]: I0909 00:19:44.592111 2387 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:19:44.592150 kubelet[2387]: I0909 00:19:44.592145 2387 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:19:44.592303 kubelet[2387]: I0909 00:19:44.592164 2387 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:19:44.592303 kubelet[2387]: I0909 00:19:44.592173 2387 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:19:44.592303 kubelet[2387]: E0909 00:19:44.592223 2387 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:19:44.593977 kubelet[2387]: E0909 00:19:44.593047 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:19:44.594253 kubelet[2387]: I0909 00:19:44.594227 2387 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:19:44.594292 kubelet[2387]: I0909 00:19:44.594275 2387 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:19:44.594317 kubelet[2387]: I0909 00:19:44.594293 2387 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:19:44.672318 kubelet[2387]: E0909 00:19:44.672216 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:44.691680 kubelet[2387]: I0909 00:19:44.691638 2387 policy_none.go:49] "None policy: Start" Sep 9 00:19:44.691742 kubelet[2387]: I0909 00:19:44.691684 2387 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:19:44.691742 kubelet[2387]: I0909 00:19:44.691714 2387 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:19:44.692521 kubelet[2387]: E0909 00:19:44.692476 2387 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:19:44.758510 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
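The container-manager NodeConfig dumped a few entries earlier includes the hard-eviction thresholds the kubelet will enforce (memory.available below 100Mi, nodefs.available below 10%, and so on). A hedged Go sketch that unmarshals two of those thresholds using only the field names visible in that log line; the struct is illustrative and is not the kubelet's internal type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Threshold mirrors only the fields visible in the logged NodeConfig; illustrative, not kubelet's own type.
type Threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`   // e.g. "100Mi", or null when a percentage is used
		Percentage float64 `json:"Percentage"` // e.g. 0.1 for 10%
	} `json:"Value"`
	GracePeriod int64 `json:"GracePeriod"`
}

func main() {
	// Copied from the HardEvictionThresholds array in the log above.
	raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0},
	         {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0}]`

	var ts []Threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```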
Sep 9 00:19:44.772445 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:19:44.772725 kubelet[2387]: E0909 00:19:44.772595 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:44.773537 kubelet[2387]: E0909 00:19:44.773507 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms" Sep 9 00:19:44.776056 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:19:44.788681 kubelet[2387]: E0909 00:19:44.788641 2387 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:19:44.788961 kubelet[2387]: I0909 00:19:44.788894 2387 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:19:44.788961 kubelet[2387]: I0909 00:19:44.788912 2387 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:19:44.789175 kubelet[2387]: I0909 00:19:44.789154 2387 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:19:44.790357 kubelet[2387]: E0909 00:19:44.790337 2387 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:19:44.790463 kubelet[2387]: E0909 00:19:44.790433 2387 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:19:44.890693 kubelet[2387]: I0909 00:19:44.890662 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:44.891088 kubelet[2387]: E0909 00:19:44.891061 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Sep 9 00:19:44.974513 kubelet[2387]: I0909 00:19:44.974468 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73769ac22e7f81bddd4466b28bd62f62-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"73769ac22e7f81bddd4466b28bd62f62\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:44.974513 kubelet[2387]: I0909 00:19:44.974503 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73769ac22e7f81bddd4466b28bd62f62-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"73769ac22e7f81bddd4466b28bd62f62\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:44.974646 kubelet[2387]: I0909 00:19:44.974522 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73769ac22e7f81bddd4466b28bd62f62-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"73769ac22e7f81bddd4466b28bd62f62\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:45.050262 systemd[1]: Created slice kubepods-burstable-pod73769ac22e7f81bddd4466b28bd62f62.slice - libcontainer container kubepods-burstable-pod73769ac22e7f81bddd4466b28bd62f62.slice. 
Sep 9 00:19:45.066005 kubelet[2387]: E0909 00:19:45.065963 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:45.075209 kubelet[2387]: I0909 00:19:45.075179 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:45.075268 kubelet[2387]: I0909 00:19:45.075250 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:45.075292 kubelet[2387]: I0909 00:19:45.075268 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:45.075292 kubelet[2387]: I0909 00:19:45.075287 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:45.075360 kubelet[2387]: I0909 00:19:45.075301 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:45.092713 kubelet[2387]: I0909 00:19:45.092675 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:45.093164 kubelet[2387]: E0909 00:19:45.093101 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Sep 9 00:19:45.144318 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. 
Sep 9 00:19:45.146187 kubelet[2387]: E0909 00:19:45.146142 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:45.174996 kubelet[2387]: E0909 00:19:45.174946 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" Sep 9 00:19:45.176289 kubelet[2387]: I0909 00:19:45.176246 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:45.249592 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 9 00:19:45.251441 kubelet[2387]: E0909 00:19:45.251404 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:45.367439 kubelet[2387]: E0909 00:19:45.367284 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:45.368279 containerd[1603]: time="2025-09-09T00:19:45.368237350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:73769ac22e7f81bddd4466b28bd62f62,Namespace:kube-system,Attempt:0,}" Sep 9 00:19:45.381383 kubelet[2387]: E0909 00:19:45.381345 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:19:45.446866 kubelet[2387]: E0909 00:19:45.446823 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:45.447533 containerd[1603]: time="2025-09-09T00:19:45.447471682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 00:19:45.495285 kubelet[2387]: I0909 00:19:45.495242 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:45.495746 kubelet[2387]: E0909 00:19:45.495686 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Sep 9 00:19:45.552232 kubelet[2387]: E0909 00:19:45.552167 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:45.552827 containerd[1603]: time="2025-09-09T00:19:45.552742397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 00:19:45.557474 
kubelet[2387]: E0909 00:19:45.557445 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:19:45.616246 kubelet[2387]: E0909 00:19:45.616211 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:19:45.823288 kubelet[2387]: E0909 00:19:45.823221 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:19:45.976054 kubelet[2387]: E0909 00:19:45.976004 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="1.6s" Sep 9 00:19:46.297826 kubelet[2387]: I0909 00:19:46.297784 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:46.298255 kubelet[2387]: E0909 00:19:46.298209 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Sep 9 00:19:46.529395 containerd[1603]: time="2025-09-09T00:19:46.529343493Z" level=info msg="connecting to shim a30ce1e83942e90f1f542b915bb5c4ff1a32e93a81bee2040470081a837884a4" address="unix:///run/containerd/s/2d9a8d4342160dd570332019fcd006735dbfe3029f8cd710739a8e15e2b986bc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:19:46.553037 kubelet[2387]: E0909 00:19:46.552916 2387 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:19:46.555947 systemd[1]: Started cri-containerd-a30ce1e83942e90f1f542b915bb5c4ff1a32e93a81bee2040470081a837884a4.scope - libcontainer container a30ce1e83942e90f1f542b915bb5c4ff1a32e93a81bee2040470081a837884a4. 
Sep 9 00:19:46.603550 containerd[1603]: time="2025-09-09T00:19:46.603485262Z" level=info msg="connecting to shim 1fb40dcddb41597fd656a157294a5c609b3db1219135aa39ff727f52a29aa427" address="unix:///run/containerd/s/e9ee479987a6d5acce1869ced312759c5e7ddbbc147fc0307c4a02e8f05b6166" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:19:46.639020 containerd[1603]: time="2025-09-09T00:19:46.638961150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:73769ac22e7f81bddd4466b28bd62f62,Namespace:kube-system,Attempt:0,} returns sandbox id \"a30ce1e83942e90f1f542b915bb5c4ff1a32e93a81bee2040470081a837884a4\"" Sep 9 00:19:46.640285 kubelet[2387]: E0909 00:19:46.640243 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:46.644004 systemd[1]: Started cri-containerd-1fb40dcddb41597fd656a157294a5c609b3db1219135aa39ff727f52a29aa427.scope - libcontainer container 1fb40dcddb41597fd656a157294a5c609b3db1219135aa39ff727f52a29aa427. Sep 9 00:19:46.870834 containerd[1603]: time="2025-09-09T00:19:46.870652432Z" level=info msg="CreateContainer within sandbox \"a30ce1e83942e90f1f542b915bb5c4ff1a32e93a81bee2040470081a837884a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:19:46.895147 containerd[1603]: time="2025-09-09T00:19:46.895077933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fb40dcddb41597fd656a157294a5c609b3db1219135aa39ff727f52a29aa427\"" Sep 9 00:19:46.896059 kubelet[2387]: E0909 00:19:46.896009 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:47.013514 kubelet[2387]: E0909 00:19:47.013452 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:19:47.145429 containerd[1603]: time="2025-09-09T00:19:47.145358851Z" level=info msg="CreateContainer within sandbox \"1fb40dcddb41597fd656a157294a5c609b3db1219135aa39ff727f52a29aa427\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:19:47.414692 containerd[1603]: time="2025-09-09T00:19:47.414577346Z" level=info msg="connecting to shim 9185eb731b93fb23fd140e2d5199ddd062fb3aad0b7736e4d130cce9c2dfee6b" address="unix:///run/containerd/s/1dca751ecfbb341031e9a8eec9f4a0120f8850683381b137ac2ed517ae67d962" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:19:47.473951 systemd[1]: Started cri-containerd-9185eb731b93fb23fd140e2d5199ddd062fb3aad0b7736e4d130cce9c2dfee6b.scope - libcontainer container 9185eb731b93fb23fd140e2d5199ddd062fb3aad0b7736e4d130cce9c2dfee6b. 
Sep 9 00:19:47.576937 kubelet[2387]: E0909 00:19:47.576846 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="3.2s" Sep 9 00:19:47.671497 kubelet[2387]: E0909 00:19:47.671358 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:19:47.747229 containerd[1603]: time="2025-09-09T00:19:47.747174945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9185eb731b93fb23fd140e2d5199ddd062fb3aad0b7736e4d130cce9c2dfee6b\"" Sep 9 00:19:47.748033 kubelet[2387]: E0909 00:19:47.747990 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:47.899576 kubelet[2387]: I0909 00:19:47.899526 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:47.900113 kubelet[2387]: E0909 00:19:47.899859 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Sep 9 00:19:47.951378 kubelet[2387]: E0909 00:19:47.951238 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:19:47.957253 containerd[1603]: time="2025-09-09T00:19:47.957199807Z" level=info msg="Container d40edf4ef7d3d95e0a9b4124b558c864ac8f04a131b17a66e574a0cdffde29ef: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:19:48.113434 containerd[1603]: time="2025-09-09T00:19:48.113385321Z" level=info msg="CreateContainer within sandbox \"9185eb731b93fb23fd140e2d5199ddd062fb3aad0b7736e4d130cce9c2dfee6b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:19:48.157161 kubelet[2387]: E0909 00:19:48.157116 2387 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:19:48.271640 containerd[1603]: time="2025-09-09T00:19:48.271530354Z" level=info msg="Container 446d78ee76bb1191ce522f38411377e49856065691355a5e5e8fc0607d5b6f7a: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:19:49.006073 containerd[1603]: time="2025-09-09T00:19:49.006022055Z" level=info msg="CreateContainer within sandbox \"1fb40dcddb41597fd656a157294a5c609b3db1219135aa39ff727f52a29aa427\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d40edf4ef7d3d95e0a9b4124b558c864ac8f04a131b17a66e574a0cdffde29ef\"" Sep 9 00:19:49.006877 containerd[1603]: time="2025-09-09T00:19:49.006838409Z" 
level=info msg="StartContainer for \"d40edf4ef7d3d95e0a9b4124b558c864ac8f04a131b17a66e574a0cdffde29ef\"" Sep 9 00:19:49.008282 containerd[1603]: time="2025-09-09T00:19:49.008247965Z" level=info msg="connecting to shim d40edf4ef7d3d95e0a9b4124b558c864ac8f04a131b17a66e574a0cdffde29ef" address="unix:///run/containerd/s/e9ee479987a6d5acce1869ced312759c5e7ddbbc147fc0307c4a02e8f05b6166" protocol=ttrpc version=3 Sep 9 00:19:49.035951 systemd[1]: Started cri-containerd-d40edf4ef7d3d95e0a9b4124b558c864ac8f04a131b17a66e574a0cdffde29ef.scope - libcontainer container d40edf4ef7d3d95e0a9b4124b558c864ac8f04a131b17a66e574a0cdffde29ef. Sep 9 00:19:49.424261 update_engine[1579]: I20250909 00:19:49.424171 1579 update_attempter.cc:509] Updating boot flags... Sep 9 00:19:49.846362 containerd[1603]: time="2025-09-09T00:19:49.845922015Z" level=info msg="StartContainer for \"d40edf4ef7d3d95e0a9b4124b558c864ac8f04a131b17a66e574a0cdffde29ef\" returns successfully" Sep 9 00:19:49.891261 containerd[1603]: time="2025-09-09T00:19:49.891204480Z" level=info msg="CreateContainer within sandbox \"a30ce1e83942e90f1f542b915bb5c4ff1a32e93a81bee2040470081a837884a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"446d78ee76bb1191ce522f38411377e49856065691355a5e5e8fc0607d5b6f7a\"" Sep 9 00:19:49.891917 containerd[1603]: time="2025-09-09T00:19:49.891893622Z" level=info msg="StartContainer for \"446d78ee76bb1191ce522f38411377e49856065691355a5e5e8fc0607d5b6f7a\"" Sep 9 00:19:49.893030 containerd[1603]: time="2025-09-09T00:19:49.893009197Z" level=info msg="connecting to shim 446d78ee76bb1191ce522f38411377e49856065691355a5e5e8fc0607d5b6f7a" address="unix:///run/containerd/s/2d9a8d4342160dd570332019fcd006735dbfe3029f8cd710739a8e15e2b986bc" protocol=ttrpc version=3 Sep 9 00:19:49.905799 containerd[1603]: time="2025-09-09T00:19:49.901031475Z" level=info msg="Container 680fd790bca6ab7a101f20dc08097a70506801693287759508755591eb2ab66f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:19:49.910496 containerd[1603]: time="2025-09-09T00:19:49.910457840Z" level=info msg="CreateContainer within sandbox \"9185eb731b93fb23fd140e2d5199ddd062fb3aad0b7736e4d130cce9c2dfee6b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"680fd790bca6ab7a101f20dc08097a70506801693287759508755591eb2ab66f\"" Sep 9 00:19:49.913766 containerd[1603]: time="2025-09-09T00:19:49.910947539Z" level=info msg="StartContainer for \"680fd790bca6ab7a101f20dc08097a70506801693287759508755591eb2ab66f\"" Sep 9 00:19:49.915212 containerd[1603]: time="2025-09-09T00:19:49.914870166Z" level=info msg="connecting to shim 680fd790bca6ab7a101f20dc08097a70506801693287759508755591eb2ab66f" address="unix:///run/containerd/s/1dca751ecfbb341031e9a8eec9f4a0120f8850683381b137ac2ed517ae67d962" protocol=ttrpc version=3 Sep 9 00:19:49.966944 systemd[1]: Started cri-containerd-446d78ee76bb1191ce522f38411377e49856065691355a5e5e8fc0607d5b6f7a.scope - libcontainer container 446d78ee76bb1191ce522f38411377e49856065691355a5e5e8fc0607d5b6f7a. Sep 9 00:19:50.020968 systemd[1]: Started cri-containerd-680fd790bca6ab7a101f20dc08097a70506801693287759508755591eb2ab66f.scope - libcontainer container 680fd790bca6ab7a101f20dc08097a70506801693287759508755591eb2ab66f. 
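The repeated "Failed to ensure lease exists, will retry" errors above back off geometrically — 200ms, 400ms, 800ms, 1.6s, 3.2s — while the API server at 10.0.0.67:6443 keeps refusing connections. A minimal Go sketch of that observed doubling schedule applied to a plain TCP reachability probe; the address and the intervals are read off the log, and the probe itself is not how the kubelet actually retries:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the log; every kubelet request to it is failing with "connection refused".
	const apiServer = "10.0.0.67:6443"

	// Doubling retry schedule as observed in the log: 200ms, 400ms, 800ms, 1.6s, 3.2s.
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server reachable")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("giving up; API server still unreachable")
}
```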
Sep 9 00:19:50.102945 containerd[1603]: time="2025-09-09T00:19:50.102603192Z" level=info msg="StartContainer for \"446d78ee76bb1191ce522f38411377e49856065691355a5e5e8fc0607d5b6f7a\" returns successfully" Sep 9 00:19:50.180275 containerd[1603]: time="2025-09-09T00:19:50.180222174Z" level=info msg="StartContainer for \"680fd790bca6ab7a101f20dc08097a70506801693287759508755591eb2ab66f\" returns successfully" Sep 9 00:19:50.854720 kubelet[2387]: E0909 00:19:50.853489 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:50.854720 kubelet[2387]: E0909 00:19:50.853796 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:50.857177 kubelet[2387]: E0909 00:19:50.857159 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:50.857367 kubelet[2387]: E0909 00:19:50.857353 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:50.857852 kubelet[2387]: E0909 00:19:50.857723 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:50.858022 kubelet[2387]: E0909 00:19:50.857914 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:51.103454 kubelet[2387]: I0909 00:19:51.102147 2387 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:51.864592 kubelet[2387]: E0909 00:19:51.864545 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:51.867439 kubelet[2387]: E0909 00:19:51.864668 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:51.868134 kubelet[2387]: E0909 00:19:51.868033 2387 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:19:51.868626 kubelet[2387]: E0909 00:19:51.868599 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:51.868804 kubelet[2387]: E0909 00:19:51.868725 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:51.868804 kubelet[2387]: E0909 00:19:51.868598 2387 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:51.868927 kubelet[2387]: E0909 00:19:51.868902 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:51.959137 kubelet[2387]: I0909 00:19:51.959069 2387 kubelet_node_status.go:78] "Successfully 
registered node" node="localhost" Sep 9 00:19:51.959137 kubelet[2387]: E0909 00:19:51.959124 2387 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:19:51.972814 kubelet[2387]: I0909 00:19:51.972621 2387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:51.984352 kubelet[2387]: E0909 00:19:51.984002 2387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:51.984352 kubelet[2387]: I0909 00:19:51.984057 2387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:51.989329 kubelet[2387]: E0909 00:19:51.989274 2387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:51.989329 kubelet[2387]: I0909 00:19:51.989308 2387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:51.991602 kubelet[2387]: E0909 00:19:51.991552 2387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:52.556483 kubelet[2387]: I0909 00:19:52.556413 2387 apiserver.go:52] "Watching apiserver" Sep 9 00:19:52.572518 kubelet[2387]: I0909 00:19:52.572453 2387 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:19:52.863341 kubelet[2387]: I0909 00:19:52.863198 2387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:52.863341 kubelet[2387]: I0909 00:19:52.863248 2387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:52.868609 kubelet[2387]: E0909 00:19:52.868582 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:52.869990 kubelet[2387]: E0909 00:19:52.869779 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:53.864868 kubelet[2387]: E0909 00:19:53.864825 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:53.865010 kubelet[2387]: E0909 00:19:53.864953 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:53.949849 systemd[1]: Reload requested from client PID 2689 ('systemctl') (unit session-7.scope)... Sep 9 00:19:53.949867 systemd[1]: Reloading... Sep 9 00:19:54.043788 zram_generator::config[2738]: No configuration found. Sep 9 00:19:54.131502 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
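The recurring "Nameserver limits exceeded" warnings mean the host's resolver configuration lists more nameservers than the kubelet will pass through to pods; only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 in this log) are applied and the rest are dropped. A small illustrative Go check for the same condition — the three-server limit matches the applied line in the warning, and the /etc/resolv.conf path is an assumption about where this host keeps its resolver config:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Assumed resolver config location; the log itself only shows the applied nameserver line.
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println("cannot read resolv.conf:", err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	const limit = 3 // matches the number of servers the kubelet applied in the log above
	if len(servers) > limit {
		fmt.Printf("%d nameservers configured, only the first %d will be applied: %v\n",
			len(servers), limit, servers[:limit])
	} else {
		fmt.Printf("nameservers within limit: %v\n", servers)
	}
}
```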
Sep 9 00:19:54.278094 systemd[1]: Reloading finished in 327 ms. Sep 9 00:19:54.304514 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:54.328728 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:19:54.329100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:54.329164 systemd[1]: kubelet.service: Consumed 1.408s CPU time, 135.5M memory peak. Sep 9 00:19:54.331444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:54.558719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:54.567424 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:19:54.618696 kubelet[2777]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:19:54.618696 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:19:54.618696 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:19:54.619176 kubelet[2777]: I0909 00:19:54.618739 2777 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:19:54.627102 kubelet[2777]: I0909 00:19:54.627050 2777 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:19:54.627102 kubelet[2777]: I0909 00:19:54.627078 2777 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:19:54.627283 kubelet[2777]: I0909 00:19:54.627267 2777 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:19:54.628480 kubelet[2777]: I0909 00:19:54.628448 2777 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 00:19:54.630693 kubelet[2777]: I0909 00:19:54.630656 2777 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:19:54.634702 kubelet[2777]: I0909 00:19:54.634655 2777 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:19:54.641877 kubelet[2777]: I0909 00:19:54.641829 2777 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:19:54.642180 kubelet[2777]: I0909 00:19:54.642121 2777 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:19:54.642425 kubelet[2777]: I0909 00:19:54.642164 2777 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:19:54.642530 kubelet[2777]: I0909 00:19:54.642425 2777 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:19:54.642530 kubelet[2777]: I0909 00:19:54.642439 2777 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:19:54.642530 kubelet[2777]: I0909 00:19:54.642511 2777 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:19:54.642769 kubelet[2777]: I0909 00:19:54.642735 2777 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:19:54.642824 kubelet[2777]: I0909 00:19:54.642773 2777 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:19:54.642824 kubelet[2777]: I0909 00:19:54.642804 2777 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:19:54.644293 kubelet[2777]: I0909 00:19:54.644251 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:19:54.646850 kubelet[2777]: I0909 00:19:54.646804 2777 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:19:54.647873 kubelet[2777]: I0909 00:19:54.647842 2777 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:19:54.654069 kubelet[2777]: I0909 00:19:54.654041 2777 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:19:54.654124 kubelet[2777]: I0909 00:19:54.654111 2777 server.go:1289] "Started kubelet" Sep 9 00:19:54.654953 kubelet[2777]: I0909 00:19:54.654905 2777 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:19:54.655895 kubelet[2777]: I0909 00:19:54.655835 2777 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:19:54.656045 kubelet[2777]: I0909 00:19:54.656029 2777 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:19:54.656297 kubelet[2777]: I0909 00:19:54.656268 2777 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:19:54.657805 kubelet[2777]: I0909 00:19:54.657772 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:19:54.660189 kubelet[2777]: I0909 00:19:54.658951 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:19:54.662825 kubelet[2777]: E0909 00:19:54.662807 2777 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:54.662902 kubelet[2777]: I0909 00:19:54.662892 2777 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:19:54.663124 kubelet[2777]: I0909 00:19:54.663109 2777 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:19:54.663358 kubelet[2777]: I0909 00:19:54.663329 2777 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:19:54.665739 kubelet[2777]: I0909 00:19:54.665722 2777 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:19:54.666066 kubelet[2777]: I0909 00:19:54.666041 2777 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:19:54.666800 kubelet[2777]: E0909 00:19:54.666783 2777 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:19:54.667131 kubelet[2777]: I0909 00:19:54.667116 2777 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:19:54.685719 kubelet[2777]: I0909 00:19:54.685659 2777 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:19:54.687904 kubelet[2777]: I0909 00:19:54.687863 2777 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:19:54.687904 kubelet[2777]: I0909 00:19:54.687907 2777 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:19:54.688170 kubelet[2777]: I0909 00:19:54.687938 2777 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
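The kubelet entries throughout this log use the standard klog header: a severity letter (I, W, E), MMDD, wall-clock time with microseconds, the process ID, then the source file and line. A small illustrative parser for that prefix; the regular expression is written from the format visible in this log rather than taken from klog's source:

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches headers like "E0909 00:19:44.523296 2387 certificate_manager.go:596] ...".
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	// Example line copied from the log above.
	line := `E0909 00:19:44.523296 2387 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	fmt.Printf("severity=%s date(MMDD)=%s time=%s pid=%s source=%s\nmessage=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```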
Sep 9 00:19:54.688170 kubelet[2777]: I0909 00:19:54.687949 2777 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:19:54.688170 kubelet[2777]: E0909 00:19:54.688024 2777 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:19:54.715780 kubelet[2777]: I0909 00:19:54.715473 2777 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:19:54.715780 kubelet[2777]: I0909 00:19:54.715492 2777 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:19:54.715780 kubelet[2777]: I0909 00:19:54.715511 2777 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:19:54.715780 kubelet[2777]: I0909 00:19:54.715629 2777 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:19:54.715780 kubelet[2777]: I0909 00:19:54.715638 2777 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:19:54.715780 kubelet[2777]: I0909 00:19:54.715655 2777 policy_none.go:49] "None policy: Start" Sep 9 00:19:54.715780 kubelet[2777]: I0909 00:19:54.715664 2777 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:19:54.715780 kubelet[2777]: I0909 00:19:54.715673 2777 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:19:54.716098 kubelet[2777]: I0909 00:19:54.716084 2777 state_mem.go:75] "Updated machine memory state" Sep 9 00:19:54.722053 kubelet[2777]: E0909 00:19:54.722007 2777 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:19:54.722260 kubelet[2777]: I0909 00:19:54.722235 2777 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:19:54.722303 kubelet[2777]: I0909 00:19:54.722254 2777 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:19:54.722552 kubelet[2777]: I0909 00:19:54.722521 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:19:54.724572 kubelet[2777]: E0909 00:19:54.724212 2777 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:19:54.789697 kubelet[2777]: I0909 00:19:54.789654 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:54.789893 kubelet[2777]: I0909 00:19:54.789827 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:54.790086 kubelet[2777]: I0909 00:19:54.789654 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:54.796441 kubelet[2777]: E0909 00:19:54.796384 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:54.796628 kubelet[2777]: E0909 00:19:54.796496 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:54.827711 kubelet[2777]: I0909 00:19:54.827573 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:54.835464 kubelet[2777]: I0909 00:19:54.835421 2777 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:19:54.835643 kubelet[2777]: I0909 00:19:54.835513 2777 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:19:54.950606 sudo[2817]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 00:19:54.950998 sudo[2817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 00:19:54.964375 kubelet[2777]: I0909 00:19:54.964276 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:54.964375 kubelet[2777]: I0909 00:19:54.964352 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73769ac22e7f81bddd4466b28bd62f62-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"73769ac22e7f81bddd4466b28bd62f62\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:54.964375 kubelet[2777]: I0909 00:19:54.964387 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73769ac22e7f81bddd4466b28bd62f62-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"73769ac22e7f81bddd4466b28bd62f62\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:54.964622 kubelet[2777]: I0909 00:19:54.964408 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73769ac22e7f81bddd4466b28bd62f62-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"73769ac22e7f81bddd4466b28bd62f62\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:54.964622 kubelet[2777]: I0909 00:19:54.964435 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:54.964622 kubelet[2777]: I0909 00:19:54.964456 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:54.964622 kubelet[2777]: I0909 00:19:54.964474 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:54.964622 kubelet[2777]: I0909 00:19:54.964492 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:54.964836 kubelet[2777]: I0909 00:19:54.964512 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:55.097120 kubelet[2777]: E0909 00:19:55.096995 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:55.097120 kubelet[2777]: E0909 00:19:55.097014 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:55.098213 kubelet[2777]: E0909 00:19:55.098154 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:55.645366 kubelet[2777]: I0909 00:19:55.645322 2777 apiserver.go:52] "Watching apiserver" Sep 9 00:19:55.663254 kubelet[2777]: I0909 00:19:55.663223 2777 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:19:55.703363 kubelet[2777]: E0909 00:19:55.703318 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:55.703541 kubelet[2777]: I0909 00:19:55.703412 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:55.703841 kubelet[2777]: E0909 00:19:55.703814 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:55.801506 kubelet[2777]: E0909 00:19:55.801246 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:55.801506 kubelet[2777]: E0909 
00:19:55.801505 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:56.126655 kubelet[2777]: I0909 00:19:56.126395 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.126364034 podStartE2EDuration="2.126364034s" podCreationTimestamp="2025-09-09 00:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:56.126296827 +0000 UTC m=+1.553206813" watchObservedRunningTime="2025-09-09 00:19:56.126364034 +0000 UTC m=+1.553274020" Sep 9 00:19:56.126655 kubelet[2777]: I0909 00:19:56.126501 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.126496916 podStartE2EDuration="4.126496916s" podCreationTimestamp="2025-09-09 00:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:55.897396698 +0000 UTC m=+1.324306704" watchObservedRunningTime="2025-09-09 00:19:56.126496916 +0000 UTC m=+1.553406902" Sep 9 00:19:56.141805 sudo[2817]: pam_unix(sudo:session): session closed for user root Sep 9 00:19:56.174063 kubelet[2777]: I0909 00:19:56.173241 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.173223509 podStartE2EDuration="4.173223509s" podCreationTimestamp="2025-09-09 00:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:56.156883607 +0000 UTC m=+1.583793593" watchObservedRunningTime="2025-09-09 00:19:56.173223509 +0000 UTC m=+1.600133496" Sep 9 00:19:56.704985 kubelet[2777]: E0909 00:19:56.704938 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:56.705392 kubelet[2777]: E0909 00:19:56.705025 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:57.897342 sudo[1805]: pam_unix(sudo:session): session closed for user root Sep 9 00:19:57.899317 sshd[1804]: Connection closed by 10.0.0.1 port 52740 Sep 9 00:19:57.900501 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:57.906009 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:52740.service: Deactivated successfully. Sep 9 00:19:57.908274 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:19:57.908499 systemd[1]: session-7.scope: Consumed 6.154s CPU time, 258.6M memory peak. Sep 9 00:19:57.909792 systemd-logind[1576]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:19:57.911263 systemd-logind[1576]: Removed session 7. 
Sep 9 00:19:58.629468 kubelet[2777]: E0909 00:19:58.629421 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:58.967055 kubelet[2777]: E0909 00:19:58.966994 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:59.154984 kubelet[2777]: I0909 00:19:59.154940 2777 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:19:59.155417 containerd[1603]: time="2025-09-09T00:19:59.155373187Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:19:59.155805 kubelet[2777]: I0909 00:19:59.155576 2777 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:20:01.252338 systemd[1]: Created slice kubepods-besteffort-pod9d67e109_dd60_4d6a_9ca2_d8c616a12a02.slice - libcontainer container kubepods-besteffort-pod9d67e109_dd60_4d6a_9ca2_d8c616a12a02.slice. Sep 9 00:20:01.273634 systemd[1]: Created slice kubepods-burstable-podbb495432_3f3e_471a_aee5_8891ac5e77bb.slice - libcontainer container kubepods-burstable-podbb495432_3f3e_471a_aee5_8891ac5e77bb.slice. Sep 9 00:20:01.285964 systemd[1]: Created slice kubepods-besteffort-pod015e2446_31b6_4421_ba58_c4443ade1e79.slice - libcontainer container kubepods-besteffort-pod015e2446_31b6_4421_ba58_c4443ade1e79.slice. Sep 9 00:20:01.306313 kubelet[2777]: I0909 00:20:01.306252 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d67e109-dd60-4d6a-9ca2-d8c616a12a02-lib-modules\") pod \"kube-proxy-7prs7\" (UID: \"9d67e109-dd60-4d6a-9ca2-d8c616a12a02\") " pod="kube-system/kube-proxy-7prs7" Sep 9 00:20:01.306910 kubelet[2777]: I0909 00:20:01.306392 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-config-path\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.306910 kubelet[2777]: I0909 00:20:01.306524 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-host-proc-sys-kernel\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.306910 kubelet[2777]: I0909 00:20:01.306607 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cni-path\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.306910 kubelet[2777]: I0909 00:20:01.306631 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-xtables-lock\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.307044 kubelet[2777]: I0909 00:20:01.307003 2777 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmgk7\" (UniqueName: \"kubernetes.io/projected/bb495432-3f3e-471a-aee5-8891ac5e77bb-kube-api-access-bmgk7\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.307140 kubelet[2777]: I0909 00:20:01.307116 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/015e2446-31b6-4421-ba58-c4443ade1e79-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-5wqjd\" (UID: \"015e2446-31b6-4421-ba58-c4443ade1e79\") " pod="kube-system/cilium-operator-6c4d7847fc-5wqjd" Sep 9 00:20:01.308062 kubelet[2777]: I0909 00:20:01.307700 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t84hf\" (UniqueName: \"kubernetes.io/projected/9d67e109-dd60-4d6a-9ca2-d8c616a12a02-kube-api-access-t84hf\") pod \"kube-proxy-7prs7\" (UID: \"9d67e109-dd60-4d6a-9ca2-d8c616a12a02\") " pod="kube-system/kube-proxy-7prs7" Sep 9 00:20:01.308300 kubelet[2777]: I0909 00:20:01.308272 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-hostproc\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.308452 kubelet[2777]: I0909 00:20:01.308306 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-cgroup\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.308511 kubelet[2777]: I0909 00:20:01.308462 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-lib-modules\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.308708 kubelet[2777]: I0909 00:20:01.308686 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbzbv\" (UniqueName: \"kubernetes.io/projected/015e2446-31b6-4421-ba58-c4443ade1e79-kube-api-access-qbzbv\") pod \"cilium-operator-6c4d7847fc-5wqjd\" (UID: \"015e2446-31b6-4421-ba58-c4443ade1e79\") " pod="kube-system/cilium-operator-6c4d7847fc-5wqjd" Sep 9 00:20:01.308863 kubelet[2777]: I0909 00:20:01.308720 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-etc-cni-netd\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.309036 kubelet[2777]: I0909 00:20:01.308876 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-run\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.309036 kubelet[2777]: I0909 00:20:01.308898 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-bpf-maps\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.309036 kubelet[2777]: I0909 00:20:01.309034 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb495432-3f3e-471a-aee5-8891ac5e77bb-hubble-tls\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.309234 kubelet[2777]: I0909 00:20:01.309057 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d67e109-dd60-4d6a-9ca2-d8c616a12a02-kube-proxy\") pod \"kube-proxy-7prs7\" (UID: \"9d67e109-dd60-4d6a-9ca2-d8c616a12a02\") " pod="kube-system/kube-proxy-7prs7" Sep 9 00:20:01.311778 kubelet[2777]: I0909 00:20:01.311383 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d67e109-dd60-4d6a-9ca2-d8c616a12a02-xtables-lock\") pod \"kube-proxy-7prs7\" (UID: \"9d67e109-dd60-4d6a-9ca2-d8c616a12a02\") " pod="kube-system/kube-proxy-7prs7" Sep 9 00:20:01.311778 kubelet[2777]: I0909 00:20:01.311433 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb495432-3f3e-471a-aee5-8891ac5e77bb-clustermesh-secrets\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.311778 kubelet[2777]: I0909 00:20:01.311455 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-host-proc-sys-net\") pod \"cilium-22t8g\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " pod="kube-system/cilium-22t8g" Sep 9 00:20:01.571476 kubelet[2777]: E0909 00:20:01.571323 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:01.572197 containerd[1603]: time="2025-09-09T00:20:01.572110582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7prs7,Uid:9d67e109-dd60-4d6a-9ca2-d8c616a12a02,Namespace:kube-system,Attempt:0,}" Sep 9 00:20:01.581774 kubelet[2777]: E0909 00:20:01.581701 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:01.582215 containerd[1603]: time="2025-09-09T00:20:01.582172686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22t8g,Uid:bb495432-3f3e-471a-aee5-8891ac5e77bb,Namespace:kube-system,Attempt:0,}" Sep 9 00:20:01.590900 kubelet[2777]: E0909 00:20:01.590853 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:01.591492 containerd[1603]: time="2025-09-09T00:20:01.591449101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5wqjd,Uid:015e2446-31b6-4421-ba58-c4443ade1e79,Namespace:kube-system,Attempt:0,}" Sep 9 00:20:01.706092 containerd[1603]: 
time="2025-09-09T00:20:01.705522375Z" level=info msg="connecting to shim 9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379" address="unix:///run/containerd/s/7bc97f6d243b1e895a5438bf9be2fa6ace3fea6ed2fb09b295b631f43a466a46" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:20:01.712617 containerd[1603]: time="2025-09-09T00:20:01.712575349Z" level=info msg="connecting to shim 96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe" address="unix:///run/containerd/s/8c6e5b0943dd6f4b8ad40690f04e31904bfbdb6b8ba2464baa30bbd44bbbd549" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:20:01.719389 containerd[1603]: time="2025-09-09T00:20:01.719349517Z" level=info msg="connecting to shim c13b54758e9a32dd62862bd77d142c35c68984a6e23f8cefc4302e2110fb66f8" address="unix:///run/containerd/s/5b6dab7a013a8d55aafdc0fc608d9f21047c01c99b9f62d807aa75ed00a7febc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:20:01.765026 systemd[1]: Started cri-containerd-9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379.scope - libcontainer container 9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379. Sep 9 00:20:01.772099 systemd[1]: Started cri-containerd-96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe.scope - libcontainer container 96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe. Sep 9 00:20:01.773928 systemd[1]: Started cri-containerd-c13b54758e9a32dd62862bd77d142c35c68984a6e23f8cefc4302e2110fb66f8.scope - libcontainer container c13b54758e9a32dd62862bd77d142c35c68984a6e23f8cefc4302e2110fb66f8. Sep 9 00:20:01.811489 containerd[1603]: time="2025-09-09T00:20:01.811442587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22t8g,Uid:bb495432-3f3e-471a-aee5-8891ac5e77bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\"" Sep 9 00:20:01.812497 kubelet[2777]: E0909 00:20:01.812468 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:01.815459 containerd[1603]: time="2025-09-09T00:20:01.815410465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7prs7,Uid:9d67e109-dd60-4d6a-9ca2-d8c616a12a02,Namespace:kube-system,Attempt:0,} returns sandbox id \"c13b54758e9a32dd62862bd77d142c35c68984a6e23f8cefc4302e2110fb66f8\"" Sep 9 00:20:01.815791 containerd[1603]: time="2025-09-09T00:20:01.815765839Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 00:20:01.817540 kubelet[2777]: E0909 00:20:01.817513 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:01.817840 containerd[1603]: time="2025-09-09T00:20:01.817780808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5wqjd,Uid:015e2446-31b6-4421-ba58-c4443ade1e79,Namespace:kube-system,Attempt:0,} returns sandbox id \"9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379\"" Sep 9 00:20:01.819257 kubelet[2777]: E0909 00:20:01.819229 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:01.823967 containerd[1603]: time="2025-09-09T00:20:01.823871758Z" level=info 
msg="CreateContainer within sandbox \"c13b54758e9a32dd62862bd77d142c35c68984a6e23f8cefc4302e2110fb66f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:20:01.837313 containerd[1603]: time="2025-09-09T00:20:01.837247906Z" level=info msg="Container 1225fa91add8e011c660b4fb38344d354f45c9b8bfbeb4df6068fa3a361fe934: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:20:01.848121 containerd[1603]: time="2025-09-09T00:20:01.848068510Z" level=info msg="CreateContainer within sandbox \"c13b54758e9a32dd62862bd77d142c35c68984a6e23f8cefc4302e2110fb66f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1225fa91add8e011c660b4fb38344d354f45c9b8bfbeb4df6068fa3a361fe934\"" Sep 9 00:20:01.848730 containerd[1603]: time="2025-09-09T00:20:01.848688766Z" level=info msg="StartContainer for \"1225fa91add8e011c660b4fb38344d354f45c9b8bfbeb4df6068fa3a361fe934\"" Sep 9 00:20:01.850905 containerd[1603]: time="2025-09-09T00:20:01.850876122Z" level=info msg="connecting to shim 1225fa91add8e011c660b4fb38344d354f45c9b8bfbeb4df6068fa3a361fe934" address="unix:///run/containerd/s/5b6dab7a013a8d55aafdc0fc608d9f21047c01c99b9f62d807aa75ed00a7febc" protocol=ttrpc version=3 Sep 9 00:20:01.880996 systemd[1]: Started cri-containerd-1225fa91add8e011c660b4fb38344d354f45c9b8bfbeb4df6068fa3a361fe934.scope - libcontainer container 1225fa91add8e011c660b4fb38344d354f45c9b8bfbeb4df6068fa3a361fe934. Sep 9 00:20:02.045590 containerd[1603]: time="2025-09-09T00:20:02.045544693Z" level=info msg="StartContainer for \"1225fa91add8e011c660b4fb38344d354f45c9b8bfbeb4df6068fa3a361fe934\" returns successfully" Sep 9 00:20:02.723922 kubelet[2777]: E0909 00:20:02.723886 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:02.821681 kubelet[2777]: E0909 00:20:02.821580 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:02.887412 kubelet[2777]: I0909 00:20:02.887322 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7prs7" podStartSLOduration=2.8873023780000002 podStartE2EDuration="2.887302378s" podCreationTimestamp="2025-09-09 00:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:20:02.766136197 +0000 UTC m=+8.193046193" watchObservedRunningTime="2025-09-09 00:20:02.887302378 +0000 UTC m=+8.314212364" Sep 9 00:20:03.725888 kubelet[2777]: E0909 00:20:03.725846 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:08.635699 kubelet[2777]: E0909 00:20:08.635658 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:08.972201 kubelet[2777]: E0909 00:20:08.972034 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:09.735039 kubelet[2777]: E0909 00:20:09.734987 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:14.707902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056843047.mount: Deactivated successfully. Sep 9 00:20:18.305177 containerd[1603]: time="2025-09-09T00:20:18.305089516Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:18.330338 containerd[1603]: time="2025-09-09T00:20:18.330251153Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 00:20:18.401729 containerd[1603]: time="2025-09-09T00:20:18.401636327Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:18.403769 containerd[1603]: time="2025-09-09T00:20:18.403704741Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.587897708s" Sep 9 00:20:18.403769 containerd[1603]: time="2025-09-09T00:20:18.403745958Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 00:20:18.405045 containerd[1603]: time="2025-09-09T00:20:18.404896539Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:20:18.508605 containerd[1603]: time="2025-09-09T00:20:18.508546106Z" level=info msg="CreateContainer within sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:20:18.521676 containerd[1603]: time="2025-09-09T00:20:18.521616768Z" level=info msg="Container 29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:20:18.525683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934205173.mount: Deactivated successfully. 
Sep 9 00:20:18.529905 containerd[1603]: time="2025-09-09T00:20:18.529858485Z" level=info msg="CreateContainer within sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b\"" Sep 9 00:20:18.530730 containerd[1603]: time="2025-09-09T00:20:18.530683146Z" level=info msg="StartContainer for \"29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b\"" Sep 9 00:20:18.531703 containerd[1603]: time="2025-09-09T00:20:18.531673342Z" level=info msg="connecting to shim 29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b" address="unix:///run/containerd/s/8c6e5b0943dd6f4b8ad40690f04e31904bfbdb6b8ba2464baa30bbd44bbbd549" protocol=ttrpc version=3 Sep 9 00:20:18.555953 systemd[1]: Started cri-containerd-29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b.scope - libcontainer container 29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b. Sep 9 00:20:18.592775 containerd[1603]: time="2025-09-09T00:20:18.592703421Z" level=info msg="StartContainer for \"29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b\" returns successfully" Sep 9 00:20:18.604956 systemd[1]: cri-containerd-29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b.scope: Deactivated successfully. Sep 9 00:20:18.608149 containerd[1603]: time="2025-09-09T00:20:18.608106653Z" level=info msg="received exit event container_id:\"29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b\" id:\"29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b\" pid:3207 exited_at:{seconds:1757377218 nanos:607586905}" Sep 9 00:20:18.608279 containerd[1603]: time="2025-09-09T00:20:18.608238216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b\" id:\"29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b\" pid:3207 exited_at:{seconds:1757377218 nanos:607586905}" Sep 9 00:20:18.630378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b-rootfs.mount: Deactivated successfully. 
Sep 9 00:20:19.225999 kubelet[2777]: E0909 00:20:19.225950 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:19.233295 containerd[1603]: time="2025-09-09T00:20:19.233244098Z" level=info msg="CreateContainer within sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:20:19.255275 containerd[1603]: time="2025-09-09T00:20:19.255216668Z" level=info msg="Container 45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:20:19.267708 containerd[1603]: time="2025-09-09T00:20:19.267647505Z" level=info msg="CreateContainer within sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b\"" Sep 9 00:20:19.268448 containerd[1603]: time="2025-09-09T00:20:19.268286504Z" level=info msg="StartContainer for \"45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b\"" Sep 9 00:20:19.269285 containerd[1603]: time="2025-09-09T00:20:19.269257837Z" level=info msg="connecting to shim 45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b" address="unix:///run/containerd/s/8c6e5b0943dd6f4b8ad40690f04e31904bfbdb6b8ba2464baa30bbd44bbbd549" protocol=ttrpc version=3 Sep 9 00:20:19.294140 systemd[1]: Started cri-containerd-45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b.scope - libcontainer container 45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b. Sep 9 00:20:19.327664 containerd[1603]: time="2025-09-09T00:20:19.327617946Z" level=info msg="StartContainer for \"45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b\" returns successfully" Sep 9 00:20:19.348530 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:20:19.349138 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:20:19.349943 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:20:19.352328 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:20:19.362950 systemd[1]: cri-containerd-45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b.scope: Deactivated successfully. Sep 9 00:20:19.364118 containerd[1603]: time="2025-09-09T00:20:19.364067870Z" level=info msg="received exit event container_id:\"45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b\" id:\"45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b\" pid:3255 exited_at:{seconds:1757377219 nanos:363664276}" Sep 9 00:20:19.364297 containerd[1603]: time="2025-09-09T00:20:19.364242383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b\" id:\"45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b\" pid:3255 exited_at:{seconds:1757377219 nanos:363664276}" Sep 9 00:20:19.371281 systemd[1]: cri-containerd-45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b.scope: Consumed 29ms CPU time, 7.4M memory peak, 2.2M written to disk. Sep 9 00:20:19.388684 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 9 00:20:20.231825 kubelet[2777]: E0909 00:20:20.231783 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:20.247784 containerd[1603]: time="2025-09-09T00:20:20.247704862Z" level=info msg="CreateContainer within sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:20:20.273939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279514287.mount: Deactivated successfully. Sep 9 00:20:20.293324 containerd[1603]: time="2025-09-09T00:20:20.292657765Z" level=info msg="Container ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:20:20.297869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936883178.mount: Deactivated successfully. Sep 9 00:20:20.305699 containerd[1603]: time="2025-09-09T00:20:20.305648830Z" level=info msg="CreateContainer within sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52\"" Sep 9 00:20:20.307153 containerd[1603]: time="2025-09-09T00:20:20.307128485Z" level=info msg="StartContainer for \"ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52\"" Sep 9 00:20:20.309549 containerd[1603]: time="2025-09-09T00:20:20.309508492Z" level=info msg="connecting to shim ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52" address="unix:///run/containerd/s/8c6e5b0943dd6f4b8ad40690f04e31904bfbdb6b8ba2464baa30bbd44bbbd549" protocol=ttrpc version=3 Sep 9 00:20:20.335936 systemd[1]: Started cri-containerd-ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52.scope - libcontainer container ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52. Sep 9 00:20:20.386714 systemd[1]: cri-containerd-ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52.scope: Deactivated successfully. 
Sep 9 00:20:20.387632 containerd[1603]: time="2025-09-09T00:20:20.387579310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52\" id:\"ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52\" pid:3310 exited_at:{seconds:1757377220 nanos:387333986}" Sep 9 00:20:20.424633 containerd[1603]: time="2025-09-09T00:20:20.424556575Z" level=info msg="received exit event container_id:\"ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52\" id:\"ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52\" pid:3310 exited_at:{seconds:1757377220 nanos:387333986}" Sep 9 00:20:20.427580 containerd[1603]: time="2025-09-09T00:20:20.427533036Z" level=info msg="StartContainer for \"ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52\" returns successfully" Sep 9 00:20:21.202264 containerd[1603]: time="2025-09-09T00:20:21.202105619Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:21.207774 containerd[1603]: time="2025-09-09T00:20:21.207717128Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 00:20:21.209951 containerd[1603]: time="2025-09-09T00:20:21.209872735Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:21.211346 containerd[1603]: time="2025-09-09T00:20:21.211268276Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.806338755s" Sep 9 00:20:21.211346 containerd[1603]: time="2025-09-09T00:20:21.211324469Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 00:20:21.228059 containerd[1603]: time="2025-09-09T00:20:21.227999661Z" level=info msg="CreateContainer within sandbox \"9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 00:20:21.236438 kubelet[2777]: E0909 00:20:21.236395 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:21.246502 containerd[1603]: time="2025-09-09T00:20:21.246439285Z" level=info msg="CreateContainer within sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:20:21.249947 containerd[1603]: time="2025-09-09T00:20:21.249900115Z" level=info msg="Container f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:20:21.275087 containerd[1603]: time="2025-09-09T00:20:21.275023801Z" level=info msg="Container 
348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:20:21.285865 containerd[1603]: time="2025-09-09T00:20:21.285792508Z" level=info msg="CreateContainer within sandbox \"9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\"" Sep 9 00:20:21.286556 containerd[1603]: time="2025-09-09T00:20:21.286512350Z" level=info msg="StartContainer for \"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\"" Sep 9 00:20:21.287435 containerd[1603]: time="2025-09-09T00:20:21.287401625Z" level=info msg="connecting to shim f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e" address="unix:///run/containerd/s/7bc97f6d243b1e895a5438bf9be2fa6ace3fea6ed2fb09b295b631f43a466a46" protocol=ttrpc version=3 Sep 9 00:20:21.301939 containerd[1603]: time="2025-09-09T00:20:21.301888619Z" level=info msg="CreateContainer within sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd\"" Sep 9 00:20:21.303623 containerd[1603]: time="2025-09-09T00:20:21.303245998Z" level=info msg="StartContainer for \"348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd\"" Sep 9 00:20:21.306632 containerd[1603]: time="2025-09-09T00:20:21.306596907Z" level=info msg="connecting to shim 348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd" address="unix:///run/containerd/s/8c6e5b0943dd6f4b8ad40690f04e31904bfbdb6b8ba2464baa30bbd44bbbd549" protocol=ttrpc version=3 Sep 9 00:20:21.309947 systemd[1]: Started cri-containerd-f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e.scope - libcontainer container f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e. Sep 9 00:20:21.339947 systemd[1]: Started cri-containerd-348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd.scope - libcontainer container 348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd. Sep 9 00:20:21.388467 systemd[1]: cri-containerd-348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd.scope: Deactivated successfully. 
Sep 9 00:20:21.389202 containerd[1603]: time="2025-09-09T00:20:21.388787889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd\" id:\"348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd\" pid:3376 exited_at:{seconds:1757377221 nanos:388503693}" Sep 9 00:20:21.650106 containerd[1603]: time="2025-09-09T00:20:21.649992037Z" level=info msg="received exit event container_id:\"348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd\" id:\"348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd\" pid:3376 exited_at:{seconds:1757377221 nanos:388503693}" Sep 9 00:20:21.651347 containerd[1603]: time="2025-09-09T00:20:21.651314112Z" level=info msg="StartContainer for \"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\" returns successfully" Sep 9 00:20:21.651670 containerd[1603]: time="2025-09-09T00:20:21.651645384Z" level=info msg="StartContainer for \"348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd\" returns successfully" Sep 9 00:20:21.683037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd-rootfs.mount: Deactivated successfully. Sep 9 00:20:22.240353 kubelet[2777]: E0909 00:20:22.240308 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:22.245137 kubelet[2777]: E0909 00:20:22.245090 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:22.358018 containerd[1603]: time="2025-09-09T00:20:22.357962966Z" level=info msg="CreateContainer within sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:20:22.534184 kubelet[2777]: I0909 00:20:22.533824 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-5wqjd" podStartSLOduration=3.141979851 podStartE2EDuration="22.533796866s" podCreationTimestamp="2025-09-09 00:20:00 +0000 UTC" firstStartedPulling="2025-09-09 00:20:01.820366283 +0000 UTC m=+7.247276269" lastFinishedPulling="2025-09-09 00:20:21.212183298 +0000 UTC m=+26.639093284" observedRunningTime="2025-09-09 00:20:22.533608757 +0000 UTC m=+27.960518743" watchObservedRunningTime="2025-09-09 00:20:22.533796866 +0000 UTC m=+27.960706882" Sep 9 00:20:22.539585 containerd[1603]: time="2025-09-09T00:20:22.539522134Z" level=info msg="Container 36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:20:22.553373 containerd[1603]: time="2025-09-09T00:20:22.553314044Z" level=info msg="CreateContainer within sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\"" Sep 9 00:20:22.554021 containerd[1603]: time="2025-09-09T00:20:22.553970109Z" level=info msg="StartContainer for \"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\"" Sep 9 00:20:22.555287 containerd[1603]: time="2025-09-09T00:20:22.555247934Z" level=info msg="connecting to shim 36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a" 
address="unix:///run/containerd/s/8c6e5b0943dd6f4b8ad40690f04e31904bfbdb6b8ba2464baa30bbd44bbbd549" protocol=ttrpc version=3 Sep 9 00:20:22.574913 systemd[1]: Started cri-containerd-36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a.scope - libcontainer container 36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a. Sep 9 00:20:22.624199 containerd[1603]: time="2025-09-09T00:20:22.624053722Z" level=info msg="StartContainer for \"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" returns successfully" Sep 9 00:20:22.761329 containerd[1603]: time="2025-09-09T00:20:22.761259816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" id:\"763ddf9c5170dc42c064f8dfab2eea8f11542e2358b3ea1616e9f4a21a2135be\" pid:3458 exited_at:{seconds:1757377222 nanos:738783390}" Sep 9 00:20:22.799314 kubelet[2777]: I0909 00:20:22.798572 2777 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:20:22.940850 systemd[1]: Created slice kubepods-burstable-podd9e1b49d_5c72_42a5_986a_71952a69e796.slice - libcontainer container kubepods-burstable-podd9e1b49d_5c72_42a5_986a_71952a69e796.slice. Sep 9 00:20:22.951338 systemd[1]: Created slice kubepods-burstable-podd0e99087_7030_43b1_b307_6a2684d3a361.slice - libcontainer container kubepods-burstable-podd0e99087_7030_43b1_b307_6a2684d3a361.slice. Sep 9 00:20:23.044739 kubelet[2777]: I0909 00:20:23.044629 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0e99087-7030-43b1-b307-6a2684d3a361-config-volume\") pod \"coredns-674b8bbfcf-5shfr\" (UID: \"d0e99087-7030-43b1-b307-6a2684d3a361\") " pod="kube-system/coredns-674b8bbfcf-5shfr" Sep 9 00:20:23.044739 kubelet[2777]: I0909 00:20:23.044677 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k24ss\" (UniqueName: \"kubernetes.io/projected/d0e99087-7030-43b1-b307-6a2684d3a361-kube-api-access-k24ss\") pod \"coredns-674b8bbfcf-5shfr\" (UID: \"d0e99087-7030-43b1-b307-6a2684d3a361\") " pod="kube-system/coredns-674b8bbfcf-5shfr" Sep 9 00:20:23.044739 kubelet[2777]: I0909 00:20:23.044698 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9e1b49d-5c72-42a5-986a-71952a69e796-config-volume\") pod \"coredns-674b8bbfcf-s6c5n\" (UID: \"d9e1b49d-5c72-42a5-986a-71952a69e796\") " pod="kube-system/coredns-674b8bbfcf-s6c5n" Sep 9 00:20:23.044961 kubelet[2777]: I0909 00:20:23.044747 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7tp\" (UniqueName: \"kubernetes.io/projected/d9e1b49d-5c72-42a5-986a-71952a69e796-kube-api-access-7j7tp\") pod \"coredns-674b8bbfcf-s6c5n\" (UID: \"d9e1b49d-5c72-42a5-986a-71952a69e796\") " pod="kube-system/coredns-674b8bbfcf-s6c5n" Sep 9 00:20:23.247286 kubelet[2777]: E0909 00:20:23.247206 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:23.248501 containerd[1603]: time="2025-09-09T00:20:23.248399123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s6c5n,Uid:d9e1b49d-5c72-42a5-986a-71952a69e796,Namespace:kube-system,Attempt:0,}" Sep 9 00:20:23.255013 
kubelet[2777]: E0909 00:20:23.254988 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:23.255387 kubelet[2777]: E0909 00:20:23.255274 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:23.258742 kubelet[2777]: E0909 00:20:23.258559 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:23.261454 containerd[1603]: time="2025-09-09T00:20:23.260593053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5shfr,Uid:d0e99087-7030-43b1-b307-6a2684d3a361,Namespace:kube-system,Attempt:0,}" Sep 9 00:20:23.287894 kubelet[2777]: I0909 00:20:23.287719 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-22t8g" podStartSLOduration=6.698345855 podStartE2EDuration="23.287670007s" podCreationTimestamp="2025-09-09 00:20:00 +0000 UTC" firstStartedPulling="2025-09-09 00:20:01.815373339 +0000 UTC m=+7.242283326" lastFinishedPulling="2025-09-09 00:20:18.404697492 +0000 UTC m=+23.831607478" observedRunningTime="2025-09-09 00:20:23.284038389 +0000 UTC m=+28.710948375" watchObservedRunningTime="2025-09-09 00:20:23.287670007 +0000 UTC m=+28.714580013" Sep 9 00:20:24.257135 kubelet[2777]: E0909 00:20:24.257088 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:25.050473 systemd-networkd[1496]: cilium_host: Link UP Sep 9 00:20:25.050673 systemd-networkd[1496]: cilium_net: Link UP Sep 9 00:20:25.050904 systemd-networkd[1496]: cilium_host: Gained carrier Sep 9 00:20:25.051138 systemd-networkd[1496]: cilium_net: Gained carrier Sep 9 00:20:25.162813 systemd-networkd[1496]: cilium_vxlan: Link UP Sep 9 00:20:25.162825 systemd-networkd[1496]: cilium_vxlan: Gained carrier Sep 9 00:20:25.208949 systemd-networkd[1496]: cilium_host: Gained IPv6LL Sep 9 00:20:25.260634 kubelet[2777]: E0909 00:20:25.260583 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:25.328988 systemd-networkd[1496]: cilium_net: Gained IPv6LL Sep 9 00:20:25.394787 kernel: NET: Registered PF_ALG protocol family Sep 9 00:20:25.575951 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:60594.service - OpenSSH per-connection server daemon (10.0.0.1:60594). Sep 9 00:20:25.637700 sshd[3669]: Accepted publickey for core from 10.0.0.1 port 60594 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:20:25.639534 sshd-session[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:25.644441 systemd-logind[1576]: New session 8 of user core. Sep 9 00:20:25.654885 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:20:25.798627 sshd[3688]: Connection closed by 10.0.0.1 port 60594 Sep 9 00:20:25.798979 sshd-session[3669]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:25.804093 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:60594.service: Deactivated successfully. Sep 9 00:20:25.806695 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 9 00:20:25.807526 systemd-logind[1576]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:20:25.809439 systemd-logind[1576]: Removed session 8. Sep 9 00:20:26.111979 systemd-networkd[1496]: lxc_health: Link UP Sep 9 00:20:26.113142 systemd-networkd[1496]: lxc_health: Gained carrier Sep 9 00:20:26.297404 systemd-networkd[1496]: lxc0dc7ab428377: Link UP Sep 9 00:20:26.299783 kernel: eth0: renamed from tmp2a1a4 Sep 9 00:20:26.301951 systemd-networkd[1496]: lxc0dc7ab428377: Gained carrier Sep 9 00:20:26.326952 systemd-networkd[1496]: lxc77300c827e8d: Link UP Sep 9 00:20:26.329129 kernel: eth0: renamed from tmp56fd8 Sep 9 00:20:26.329299 systemd-networkd[1496]: lxc77300c827e8d: Gained carrier Sep 9 00:20:26.664994 systemd-networkd[1496]: cilium_vxlan: Gained IPv6LL Sep 9 00:20:26.785801 kubelet[2777]: E0909 00:20:26.784811 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:27.240971 systemd-networkd[1496]: lxc_health: Gained IPv6LL Sep 9 00:20:27.583966 kubelet[2777]: E0909 00:20:27.583816 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:27.752966 systemd-networkd[1496]: lxc0dc7ab428377: Gained IPv6LL Sep 9 00:20:28.138006 systemd-networkd[1496]: lxc77300c827e8d: Gained IPv6LL Sep 9 00:20:28.266926 kubelet[2777]: E0909 00:20:28.266881 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:29.844082 containerd[1603]: time="2025-09-09T00:20:29.844028566Z" level=info msg="connecting to shim 2a1a406dbca2bd7b2407f79abb403a547b43acb5ae913caf21bc87b1865755ba" address="unix:///run/containerd/s/9e070b5d7bb1eb49f444ae84f1574162229f830193bd48505ef78bb39464fdaa" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:20:29.873122 containerd[1603]: time="2025-09-09T00:20:29.873067857Z" level=info msg="connecting to shim 56fd888f4342fd207f80ffedb2b835e21319ad58618c90e2f7b8709e14067752" address="unix:///run/containerd/s/46c80b8034379f9c9705099a2e168a381693dac4f33454bac7004fbe0ec388c2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:20:29.892059 systemd[1]: Started cri-containerd-2a1a406dbca2bd7b2407f79abb403a547b43acb5ae913caf21bc87b1865755ba.scope - libcontainer container 2a1a406dbca2bd7b2407f79abb403a547b43acb5ae913caf21bc87b1865755ba. Sep 9 00:20:29.916915 systemd[1]: Started cri-containerd-56fd888f4342fd207f80ffedb2b835e21319ad58618c90e2f7b8709e14067752.scope - libcontainer container 56fd888f4342fd207f80ffedb2b835e21319ad58618c90e2f7b8709e14067752. 
Sep 9 00:20:29.922159 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:20:29.934571 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:20:29.960694 containerd[1603]: time="2025-09-09T00:20:29.960628304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s6c5n,Uid:d9e1b49d-5c72-42a5-986a-71952a69e796,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a1a406dbca2bd7b2407f79abb403a547b43acb5ae913caf21bc87b1865755ba\"" Sep 9 00:20:29.961439 kubelet[2777]: E0909 00:20:29.961394 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:29.967157 containerd[1603]: time="2025-09-09T00:20:29.967109358Z" level=info msg="CreateContainer within sandbox \"2a1a406dbca2bd7b2407f79abb403a547b43acb5ae913caf21bc87b1865755ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:20:29.975517 containerd[1603]: time="2025-09-09T00:20:29.975471290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5shfr,Uid:d0e99087-7030-43b1-b307-6a2684d3a361,Namespace:kube-system,Attempt:0,} returns sandbox id \"56fd888f4342fd207f80ffedb2b835e21319ad58618c90e2f7b8709e14067752\"" Sep 9 00:20:29.976920 kubelet[2777]: E0909 00:20:29.976889 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:29.981970 containerd[1603]: time="2025-09-09T00:20:29.981862157Z" level=info msg="Container e60dfaa64ff1779359db79fe4741a1558fe96c9c80859ca6bd199680cd1f434f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:20:29.982517 containerd[1603]: time="2025-09-09T00:20:29.982493642Z" level=info msg="CreateContainer within sandbox \"56fd888f4342fd207f80ffedb2b835e21319ad58618c90e2f7b8709e14067752\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:20:29.993506 containerd[1603]: time="2025-09-09T00:20:29.993453466Z" level=info msg="CreateContainer within sandbox \"2a1a406dbca2bd7b2407f79abb403a547b43acb5ae913caf21bc87b1865755ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e60dfaa64ff1779359db79fe4741a1558fe96c9c80859ca6bd199680cd1f434f\"" Sep 9 00:20:29.994005 containerd[1603]: time="2025-09-09T00:20:29.993949299Z" level=info msg="StartContainer for \"e60dfaa64ff1779359db79fe4741a1558fe96c9c80859ca6bd199680cd1f434f\"" Sep 9 00:20:29.994820 containerd[1603]: time="2025-09-09T00:20:29.994780755Z" level=info msg="connecting to shim e60dfaa64ff1779359db79fe4741a1558fe96c9c80859ca6bd199680cd1f434f" address="unix:///run/containerd/s/9e070b5d7bb1eb49f444ae84f1574162229f830193bd48505ef78bb39464fdaa" protocol=ttrpc version=3 Sep 9 00:20:30.002784 containerd[1603]: time="2025-09-09T00:20:30.002713561Z" level=info msg="Container f8a1669e84c835b02cc4b532c86453a55683a8907b16e3f8edc295de3d840118: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:20:30.014431 containerd[1603]: time="2025-09-09T00:20:30.014377627Z" level=info msg="CreateContainer within sandbox \"56fd888f4342fd207f80ffedb2b835e21319ad58618c90e2f7b8709e14067752\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8a1669e84c835b02cc4b532c86453a55683a8907b16e3f8edc295de3d840118\"" Sep 9 00:20:30.015248 containerd[1603]: 
time="2025-09-09T00:20:30.015189698Z" level=info msg="StartContainer for \"f8a1669e84c835b02cc4b532c86453a55683a8907b16e3f8edc295de3d840118\"" Sep 9 00:20:30.016272 containerd[1603]: time="2025-09-09T00:20:30.016227609Z" level=info msg="connecting to shim f8a1669e84c835b02cc4b532c86453a55683a8907b16e3f8edc295de3d840118" address="unix:///run/containerd/s/46c80b8034379f9c9705099a2e168a381693dac4f33454bac7004fbe0ec388c2" protocol=ttrpc version=3 Sep 9 00:20:30.035904 systemd[1]: Started cri-containerd-e60dfaa64ff1779359db79fe4741a1558fe96c9c80859ca6bd199680cd1f434f.scope - libcontainer container e60dfaa64ff1779359db79fe4741a1558fe96c9c80859ca6bd199680cd1f434f. Sep 9 00:20:30.039336 systemd[1]: Started cri-containerd-f8a1669e84c835b02cc4b532c86453a55683a8907b16e3f8edc295de3d840118.scope - libcontainer container f8a1669e84c835b02cc4b532c86453a55683a8907b16e3f8edc295de3d840118. Sep 9 00:20:30.302029 containerd[1603]: time="2025-09-09T00:20:30.301954337Z" level=info msg="StartContainer for \"f8a1669e84c835b02cc4b532c86453a55683a8907b16e3f8edc295de3d840118\" returns successfully" Sep 9 00:20:30.302425 containerd[1603]: time="2025-09-09T00:20:30.302405926Z" level=info msg="StartContainer for \"e60dfaa64ff1779359db79fe4741a1558fe96c9c80859ca6bd199680cd1f434f\" returns successfully" Sep 9 00:20:30.313423 kubelet[2777]: E0909 00:20:30.313368 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:30.335572 kubelet[2777]: I0909 00:20:30.335476 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s6c5n" podStartSLOduration=30.335453616 podStartE2EDuration="30.335453616s" podCreationTimestamp="2025-09-09 00:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:20:30.335180106 +0000 UTC m=+35.762090092" watchObservedRunningTime="2025-09-09 00:20:30.335453616 +0000 UTC m=+35.762363602" Sep 9 00:20:30.813668 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:43060.service - OpenSSH per-connection server daemon (10.0.0.1:43060). Sep 9 00:20:30.841691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount18024024.mount: Deactivated successfully. Sep 9 00:20:30.859503 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 43060 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:20:30.861025 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:30.865665 systemd-logind[1576]: New session 9 of user core. Sep 9 00:20:30.876937 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:20:31.005393 sshd[4125]: Connection closed by 10.0.0.1 port 43060 Sep 9 00:20:31.005713 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:31.009101 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:43060.service: Deactivated successfully. Sep 9 00:20:31.011662 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:20:31.012451 systemd-logind[1576]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:20:31.015239 systemd-logind[1576]: Removed session 9. 
Sep 9 00:20:31.314950 kubelet[2777]: E0909 00:20:31.314664 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:31.314950 kubelet[2777]: E0909 00:20:31.314664 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:31.330737 kubelet[2777]: I0909 00:20:31.330665 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5shfr" podStartSLOduration=30.330645329 podStartE2EDuration="30.330645329s" podCreationTimestamp="2025-09-09 00:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:20:31.329717721 +0000 UTC m=+36.756627717" watchObservedRunningTime="2025-09-09 00:20:31.330645329 +0000 UTC m=+36.757555305" Sep 9 00:20:32.316941 kubelet[2777]: E0909 00:20:32.316892 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:36.024451 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:43068.service - OpenSSH per-connection server daemon (10.0.0.1:43068). Sep 9 00:20:36.080849 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 43068 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:20:36.082444 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:36.086828 systemd-logind[1576]: New session 10 of user core. Sep 9 00:20:36.096907 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:20:36.228596 sshd[4149]: Connection closed by 10.0.0.1 port 43068 Sep 9 00:20:36.228948 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:36.233902 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:43068.service: Deactivated successfully. Sep 9 00:20:36.236198 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:20:36.236963 systemd-logind[1576]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:20:36.238291 systemd-logind[1576]: Removed session 10. Sep 9 00:20:41.251321 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:53966.service - OpenSSH per-connection server daemon (10.0.0.1:53966). Sep 9 00:20:41.308697 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 53966 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:20:41.310686 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:41.315826 kubelet[2777]: E0909 00:20:41.315670 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:41.317888 systemd-logind[1576]: New session 11 of user core. Sep 9 00:20:41.323141 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 9 00:20:41.335712 kubelet[2777]: E0909 00:20:41.335645 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:41.454387 sshd[4165]: Connection closed by 10.0.0.1 port 53966 Sep 9 00:20:41.454821 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:41.460445 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:53966.service: Deactivated successfully. Sep 9 00:20:41.463300 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:20:41.464324 systemd-logind[1576]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:20:41.465711 systemd-logind[1576]: Removed session 11. Sep 9 00:20:46.470593 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:53982.service - OpenSSH per-connection server daemon (10.0.0.1:53982). Sep 9 00:20:46.534103 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 53982 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:20:46.535971 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:46.540799 systemd-logind[1576]: New session 12 of user core. Sep 9 00:20:46.551898 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:20:46.704869 sshd[4187]: Connection closed by 10.0.0.1 port 53982 Sep 9 00:20:46.705231 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:46.721176 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:53982.service: Deactivated successfully. Sep 9 00:20:46.723600 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:20:46.724646 systemd-logind[1576]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:20:46.728533 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:53996.service - OpenSSH per-connection server daemon (10.0.0.1:53996). Sep 9 00:20:46.729510 systemd-logind[1576]: Removed session 12. Sep 9 00:20:46.779401 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 53996 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:20:46.781034 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:46.785846 systemd-logind[1576]: New session 13 of user core. Sep 9 00:20:46.795910 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:20:47.032086 sshd[4204]: Connection closed by 10.0.0.1 port 53996 Sep 9 00:20:47.032291 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:47.050129 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:53996.service: Deactivated successfully. Sep 9 00:20:47.052799 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:20:47.054800 systemd-logind[1576]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:20:47.057787 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:54004.service - OpenSSH per-connection server daemon (10.0.0.1:54004). Sep 9 00:20:47.058562 systemd-logind[1576]: Removed session 13. Sep 9 00:20:47.115282 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 54004 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:20:47.117094 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:47.122480 systemd-logind[1576]: New session 14 of user core. Sep 9 00:20:47.133904 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 9 00:20:47.309726 sshd[4218]: Connection closed by 10.0.0.1 port 54004 Sep 9 00:20:47.310053 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:47.315016 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:54004.service: Deactivated successfully. Sep 9 00:20:47.317681 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:20:47.318784 systemd-logind[1576]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:20:47.320480 systemd-logind[1576]: Removed session 14. Sep 9 00:20:52.328118 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:47638.service - OpenSSH per-connection server daemon (10.0.0.1:47638). Sep 9 00:20:52.367614 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 47638 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:20:52.369544 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:52.374088 systemd-logind[1576]: New session 15 of user core. Sep 9 00:20:52.394888 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:20:52.505960 sshd[4236]: Connection closed by 10.0.0.1 port 47638 Sep 9 00:20:52.506327 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:52.511137 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:47638.service: Deactivated successfully. Sep 9 00:20:52.513461 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:20:52.514228 systemd-logind[1576]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:20:52.515635 systemd-logind[1576]: Removed session 15. Sep 9 00:20:57.523938 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:47648.service - OpenSSH per-connection server daemon (10.0.0.1:47648). Sep 9 00:20:57.577459 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 47648 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:20:57.579332 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:57.585283 systemd-logind[1576]: New session 16 of user core. Sep 9 00:20:57.594917 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:20:57.729292 sshd[4253]: Connection closed by 10.0.0.1 port 47648 Sep 9 00:20:57.729636 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:57.734528 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:47648.service: Deactivated successfully. Sep 9 00:20:57.737465 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:20:57.738411 systemd-logind[1576]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:20:57.740511 systemd-logind[1576]: Removed session 16. Sep 9 00:21:02.742830 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:46170.service - OpenSSH per-connection server daemon (10.0.0.1:46170). Sep 9 00:21:02.795475 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 46170 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:02.797285 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:02.802647 systemd-logind[1576]: New session 17 of user core. Sep 9 00:21:02.811946 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:21:02.925303 sshd[4271]: Connection closed by 10.0.0.1 port 46170 Sep 9 00:21:02.925832 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:02.941098 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:46170.service: Deactivated successfully. 
Sep 9 00:21:02.943306 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:21:02.944546 systemd-logind[1576]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:21:02.948176 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:46186.service - OpenSSH per-connection server daemon (10.0.0.1:46186). Sep 9 00:21:02.949153 systemd-logind[1576]: Removed session 17. Sep 9 00:21:03.003761 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 46186 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:03.006293 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:03.012866 systemd-logind[1576]: New session 18 of user core. Sep 9 00:21:03.031303 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 00:21:03.383288 sshd[4287]: Connection closed by 10.0.0.1 port 46186 Sep 9 00:21:03.383597 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:03.399540 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:46186.service: Deactivated successfully. Sep 9 00:21:03.401712 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:21:03.402510 systemd-logind[1576]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:21:03.406024 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:46196.service - OpenSSH per-connection server daemon (10.0.0.1:46196). Sep 9 00:21:03.406658 systemd-logind[1576]: Removed session 18. Sep 9 00:21:03.461336 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 46196 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:03.462978 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:03.467930 systemd-logind[1576]: New session 19 of user core. Sep 9 00:21:03.477915 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:21:04.585869 sshd[4301]: Connection closed by 10.0.0.1 port 46196 Sep 9 00:21:04.587202 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:04.599975 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:46196.service: Deactivated successfully. Sep 9 00:21:04.602921 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:21:04.605039 systemd-logind[1576]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:21:04.608145 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:46204.service - OpenSSH per-connection server daemon (10.0.0.1:46204). Sep 9 00:21:04.609435 systemd-logind[1576]: Removed session 19. Sep 9 00:21:04.665537 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 46204 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:04.667454 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:04.673243 systemd-logind[1576]: New session 20 of user core. Sep 9 00:21:04.680954 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:21:05.120489 sshd[4322]: Connection closed by 10.0.0.1 port 46204 Sep 9 00:21:05.121192 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:05.137719 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:46204.service: Deactivated successfully. Sep 9 00:21:05.139991 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:21:05.140934 systemd-logind[1576]: Session 20 logged out. Waiting for processes to exit. 
Sep 9 00:21:05.144405 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:46220.service - OpenSSH per-connection server daemon (10.0.0.1:46220). Sep 9 00:21:05.145380 systemd-logind[1576]: Removed session 20. Sep 9 00:21:05.201376 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 46220 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:05.203216 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:05.208270 systemd-logind[1576]: New session 21 of user core. Sep 9 00:21:05.223885 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:21:05.418562 sshd[4335]: Connection closed by 10.0.0.1 port 46220 Sep 9 00:21:05.418890 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:05.423242 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:46220.service: Deactivated successfully. Sep 9 00:21:05.425676 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:21:05.426569 systemd-logind[1576]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:21:05.428220 systemd-logind[1576]: Removed session 21. Sep 9 00:21:10.434372 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:33992.service - OpenSSH per-connection server daemon (10.0.0.1:33992). Sep 9 00:21:10.484277 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 33992 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:10.486350 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:10.492151 systemd-logind[1576]: New session 22 of user core. Sep 9 00:21:10.503004 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:21:10.639371 sshd[4350]: Connection closed by 10.0.0.1 port 33992 Sep 9 00:21:10.639275 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:10.646297 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:33992.service: Deactivated successfully. Sep 9 00:21:10.649334 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:21:10.651540 systemd-logind[1576]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:21:10.655875 systemd-logind[1576]: Removed session 22. Sep 9 00:21:13.688922 kubelet[2777]: E0909 00:21:13.688852 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:15.658672 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:34004.service - OpenSSH per-connection server daemon (10.0.0.1:34004). Sep 9 00:21:15.715553 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 34004 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:15.717350 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:15.722719 systemd-logind[1576]: New session 23 of user core. Sep 9 00:21:15.731035 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 00:21:15.851647 sshd[4367]: Connection closed by 10.0.0.1 port 34004 Sep 9 00:21:15.852089 sshd-session[4365]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:15.856907 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:34004.service: Deactivated successfully. Sep 9 00:21:15.859062 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:21:15.860053 systemd-logind[1576]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:21:15.861512 systemd-logind[1576]: Removed session 23. 
Sep 9 00:21:18.688783 kubelet[2777]: E0909 00:21:18.688693 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:20.881324 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:40908.service - OpenSSH per-connection server daemon (10.0.0.1:40908). Sep 9 00:21:20.983119 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 40908 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:20.989020 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:21.002719 systemd-logind[1576]: New session 24 of user core. Sep 9 00:21:21.018077 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 00:21:21.229394 sshd[4382]: Connection closed by 10.0.0.1 port 40908 Sep 9 00:21:21.230177 sshd-session[4380]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:21.240618 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:40908.service: Deactivated successfully. Sep 9 00:21:21.242958 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:21:21.243788 systemd-logind[1576]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:21:21.246997 systemd[1]: Started sshd@24-10.0.0.67:22-10.0.0.1:40912.service - OpenSSH per-connection server daemon (10.0.0.1:40912). Sep 9 00:21:21.247970 systemd-logind[1576]: Removed session 24. Sep 9 00:21:21.338517 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 40912 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:21.341088 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:21.352619 systemd-logind[1576]: New session 25 of user core. Sep 9 00:21:21.367421 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 00:21:23.239441 containerd[1603]: time="2025-09-09T00:21:23.239272791Z" level=info msg="StopContainer for \"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\" with timeout 30 (s)" Sep 9 00:21:23.241306 containerd[1603]: time="2025-09-09T00:21:23.241266933Z" level=info msg="Stop container \"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\" with signal terminated" Sep 9 00:21:23.256218 systemd[1]: cri-containerd-f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e.scope: Deactivated successfully. 
Sep 9 00:21:23.260326 containerd[1603]: time="2025-09-09T00:21:23.260289608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\" id:\"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\" pid:3362 exited_at:{seconds:1757377283 nanos:258504773}" Sep 9 00:21:23.260532 containerd[1603]: time="2025-09-09T00:21:23.260494166Z" level=info msg="received exit event container_id:\"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\" id:\"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\" pid:3362 exited_at:{seconds:1757377283 nanos:258504773}" Sep 9 00:21:23.272591 containerd[1603]: time="2025-09-09T00:21:23.272523234Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:21:23.273160 containerd[1603]: time="2025-09-09T00:21:23.273117191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" id:\"41a2ebf3749c36a42ca931abe468313acb1b289297d928f232b932f7b51eebed\" pid:4425 exited_at:{seconds:1757377283 nanos:272862929}" Sep 9 00:21:23.276461 containerd[1603]: time="2025-09-09T00:21:23.276425677Z" level=info msg="StopContainer for \"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" with timeout 2 (s)" Sep 9 00:21:23.276881 containerd[1603]: time="2025-09-09T00:21:23.276746025Z" level=info msg="Stop container \"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" with signal terminated" Sep 9 00:21:23.288073 systemd-networkd[1496]: lxc_health: Link DOWN Sep 9 00:21:23.288083 systemd-networkd[1496]: lxc_health: Lost carrier Sep 9 00:21:23.288254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e-rootfs.mount: Deactivated successfully. Sep 9 00:21:23.299343 containerd[1603]: time="2025-09-09T00:21:23.299180110Z" level=info msg="StopContainer for \"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\" returns successfully" Sep 9 00:21:23.300556 containerd[1603]: time="2025-09-09T00:21:23.300520492Z" level=info msg="StopPodSandbox for \"9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379\"" Sep 9 00:21:23.300636 containerd[1603]: time="2025-09-09T00:21:23.300612848Z" level=info msg="Container to stop \"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:21:23.307624 systemd[1]: cri-containerd-36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a.scope: Deactivated successfully. Sep 9 00:21:23.308368 systemd[1]: cri-containerd-36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a.scope: Consumed 7.108s CPU time, 125.6M memory peak, 220K read from disk, 13.3M written to disk. 
Sep 9 00:21:23.308698 containerd[1603]: time="2025-09-09T00:21:23.308655694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" id:\"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" pid:3425 exited_at:{seconds:1757377283 nanos:308071967}" Sep 9 00:21:23.309613 containerd[1603]: time="2025-09-09T00:21:23.308745475Z" level=info msg="received exit event container_id:\"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" id:\"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" pid:3425 exited_at:{seconds:1757377283 nanos:308071967}" Sep 9 00:21:23.309313 systemd[1]: cri-containerd-9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379.scope: Deactivated successfully. Sep 9 00:21:23.313430 containerd[1603]: time="2025-09-09T00:21:23.313377531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379\" id:\"9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379\" pid:2957 exit_status:137 exited_at:{seconds:1757377283 nanos:312997480}" Sep 9 00:21:23.335539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a-rootfs.mount: Deactivated successfully. Sep 9 00:21:23.349408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379-rootfs.mount: Deactivated successfully. Sep 9 00:21:23.371559 containerd[1603]: time="2025-09-09T00:21:23.371492450Z" level=info msg="shim disconnected" id=9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379 namespace=k8s.io Sep 9 00:21:23.371559 containerd[1603]: time="2025-09-09T00:21:23.371545030Z" level=warning msg="cleaning up after shim disconnected" id=9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379 namespace=k8s.io Sep 9 00:21:23.393037 containerd[1603]: time="2025-09-09T00:21:23.371554458Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:21:23.393235 containerd[1603]: time="2025-09-09T00:21:23.371644328Z" level=info msg="StopContainer for \"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" returns successfully" Sep 9 00:21:23.393741 containerd[1603]: time="2025-09-09T00:21:23.393705437Z" level=info msg="StopPodSandbox for \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\"" Sep 9 00:21:23.393915 containerd[1603]: time="2025-09-09T00:21:23.393870439Z" level=info msg="Container to stop \"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:21:23.393915 containerd[1603]: time="2025-09-09T00:21:23.393898923Z" level=info msg="Container to stop \"29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:21:23.393915 containerd[1603]: time="2025-09-09T00:21:23.393911247Z" level=info msg="Container to stop \"ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:21:23.394098 containerd[1603]: time="2025-09-09T00:21:23.393926024Z" level=info msg="Container to stop \"45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:21:23.394098 containerd[1603]: 
time="2025-09-09T00:21:23.393941595Z" level=info msg="Container to stop \"348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:21:23.401584 systemd[1]: cri-containerd-96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe.scope: Deactivated successfully. Sep 9 00:21:23.421873 containerd[1603]: time="2025-09-09T00:21:23.421451659Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" id:\"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" pid:2964 exit_status:137 exited_at:{seconds:1757377283 nanos:402663769}" Sep 9 00:21:23.423648 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379-shm.mount: Deactivated successfully. Sep 9 00:21:23.433257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe-rootfs.mount: Deactivated successfully. Sep 9 00:21:23.435121 containerd[1603]: time="2025-09-09T00:21:23.434979920Z" level=info msg="received exit event sandbox_id:\"9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379\" exit_status:137 exited_at:{seconds:1757377283 nanos:312997480}" Sep 9 00:21:23.442096 containerd[1603]: time="2025-09-09T00:21:23.441886612Z" level=info msg="TearDown network for sandbox \"9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379\" successfully" Sep 9 00:21:23.442096 containerd[1603]: time="2025-09-09T00:21:23.441932799Z" level=info msg="StopPodSandbox for \"9529518d02187e473c17f6eb8b8b110fa64eb91794bf493e4c1eb24ebd940379\" returns successfully" Sep 9 00:21:23.444606 containerd[1603]: time="2025-09-09T00:21:23.444577366Z" level=info msg="received exit event sandbox_id:\"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" exit_status:137 exited_at:{seconds:1757377283 nanos:402663769}" Sep 9 00:21:23.446323 containerd[1603]: time="2025-09-09T00:21:23.446278463Z" level=info msg="shim disconnected" id=96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe namespace=k8s.io Sep 9 00:21:23.446323 containerd[1603]: time="2025-09-09T00:21:23.446311436Z" level=warning msg="cleaning up after shim disconnected" id=96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe namespace=k8s.io Sep 9 00:21:23.447921 containerd[1603]: time="2025-09-09T00:21:23.446325863Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:21:23.452744 containerd[1603]: time="2025-09-09T00:21:23.452286239Z" level=info msg="TearDown network for sandbox \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" successfully" Sep 9 00:21:23.452744 containerd[1603]: time="2025-09-09T00:21:23.452333258Z" level=info msg="StopPodSandbox for \"96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe\" returns successfully" Sep 9 00:21:23.552879 kubelet[2777]: I0909 00:21:23.552641 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbzbv\" (UniqueName: \"kubernetes.io/projected/015e2446-31b6-4421-ba58-c4443ade1e79-kube-api-access-qbzbv\") pod \"015e2446-31b6-4421-ba58-c4443ade1e79\" (UID: \"015e2446-31b6-4421-ba58-c4443ade1e79\") " Sep 9 00:21:23.552879 kubelet[2777]: I0909 00:21:23.552690 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/bb495432-3f3e-471a-aee5-8891ac5e77bb-hubble-tls\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.552879 kubelet[2777]: I0909 00:21:23.552710 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-bpf-maps\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.552879 kubelet[2777]: I0909 00:21:23.552728 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-xtables-lock\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.552879 kubelet[2777]: I0909 00:21:23.552776 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/015e2446-31b6-4421-ba58-c4443ade1e79-cilium-config-path\") pod \"015e2446-31b6-4421-ba58-c4443ade1e79\" (UID: \"015e2446-31b6-4421-ba58-c4443ade1e79\") " Sep 9 00:21:23.552879 kubelet[2777]: I0909 00:21:23.552802 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-config-path\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.553655 kubelet[2777]: I0909 00:21:23.552821 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cni-path\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.553655 kubelet[2777]: I0909 00:21:23.552828 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:23.553655 kubelet[2777]: I0909 00:21:23.552850 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb495432-3f3e-471a-aee5-8891ac5e77bb-clustermesh-secrets\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.553962 kubelet[2777]: I0909 00:21:23.553926 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-host-proc-sys-net\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.554084 kubelet[2777]: I0909 00:21:23.554047 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-host-proc-sys-kernel\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.554084 kubelet[2777]: I0909 00:21:23.554087 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-hostproc\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.554373 kubelet[2777]: I0909 00:21:23.554111 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-cgroup\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.554373 kubelet[2777]: I0909 00:21:23.554135 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-run\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.554373 kubelet[2777]: I0909 00:21:23.554173 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmgk7\" (UniqueName: \"kubernetes.io/projected/bb495432-3f3e-471a-aee5-8891ac5e77bb-kube-api-access-bmgk7\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.554373 kubelet[2777]: I0909 00:21:23.554193 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-lib-modules\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.554373 kubelet[2777]: I0909 00:21:23.554234 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-etc-cni-netd\") pod \"bb495432-3f3e-471a-aee5-8891ac5e77bb\" (UID: \"bb495432-3f3e-471a-aee5-8891ac5e77bb\") " Sep 9 00:21:23.554373 kubelet[2777]: I0909 00:21:23.554308 2777 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.554592 kubelet[2777]: I0909 00:21:23.554360 2777 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:23.560024 kubelet[2777]: I0909 00:21:23.559978 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:23.561818 kubelet[2777]: I0909 00:21:23.561793 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-hostproc" (OuterVolumeSpecName: "hostproc") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:23.561942 kubelet[2777]: I0909 00:21:23.561891 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015e2446-31b6-4421-ba58-c4443ade1e79-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "015e2446-31b6-4421-ba58-c4443ade1e79" (UID: "015e2446-31b6-4421-ba58-c4443ade1e79"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:21:23.562094 kubelet[2777]: I0909 00:21:23.561924 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cni-path" (OuterVolumeSpecName: "cni-path") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:23.562188 kubelet[2777]: I0909 00:21:23.561942 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:23.562480 kubelet[2777]: I0909 00:21:23.561957 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:23.562480 kubelet[2777]: I0909 00:21:23.561982 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:23.562480 kubelet[2777]: I0909 00:21:23.561997 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:23.562480 kubelet[2777]: I0909 00:21:23.562084 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:21:23.562480 kubelet[2777]: I0909 00:21:23.562117 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:23.563643 kubelet[2777]: I0909 00:21:23.563610 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb495432-3f3e-471a-aee5-8891ac5e77bb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:21:23.564032 kubelet[2777]: I0909 00:21:23.563986 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015e2446-31b6-4421-ba58-c4443ade1e79-kube-api-access-qbzbv" (OuterVolumeSpecName: "kube-api-access-qbzbv") pod "015e2446-31b6-4421-ba58-c4443ade1e79" (UID: "015e2446-31b6-4421-ba58-c4443ade1e79"). InnerVolumeSpecName "kube-api-access-qbzbv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:21:23.564107 kubelet[2777]: I0909 00:21:23.564091 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb495432-3f3e-471a-aee5-8891ac5e77bb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:21:23.565684 kubelet[2777]: I0909 00:21:23.565645 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb495432-3f3e-471a-aee5-8891ac5e77bb-kube-api-access-bmgk7" (OuterVolumeSpecName: "kube-api-access-bmgk7") pod "bb495432-3f3e-471a-aee5-8891ac5e77bb" (UID: "bb495432-3f3e-471a-aee5-8891ac5e77bb"). InnerVolumeSpecName "kube-api-access-bmgk7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:21:23.655449 kubelet[2777]: I0909 00:21:23.655389 2777 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655449 kubelet[2777]: I0909 00:21:23.655427 2777 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655449 kubelet[2777]: I0909 00:21:23.655440 2777 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655449 kubelet[2777]: I0909 00:21:23.655452 2777 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655449 kubelet[2777]: I0909 00:21:23.655461 2777 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655735 kubelet[2777]: I0909 00:21:23.655471 2777 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bmgk7\" (UniqueName: \"kubernetes.io/projected/bb495432-3f3e-471a-aee5-8891ac5e77bb-kube-api-access-bmgk7\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655735 kubelet[2777]: I0909 00:21:23.655482 2777 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655735 kubelet[2777]: I0909 00:21:23.655491 2777 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655735 kubelet[2777]: I0909 00:21:23.655500 2777 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qbzbv\" (UniqueName: \"kubernetes.io/projected/015e2446-31b6-4421-ba58-c4443ade1e79-kube-api-access-qbzbv\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655735 kubelet[2777]: I0909 00:21:23.655510 2777 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb495432-3f3e-471a-aee5-8891ac5e77bb-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655735 kubelet[2777]: I0909 00:21:23.655521 2777 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655735 kubelet[2777]: I0909 00:21:23.655533 2777 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/015e2446-31b6-4421-ba58-c4443ade1e79-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655735 kubelet[2777]: I0909 00:21:23.655543 2777 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb495432-3f3e-471a-aee5-8891ac5e77bb-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" Sep 9 00:21:23.655973 kubelet[2777]: I0909 00:21:23.655553 2777 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb495432-3f3e-471a-aee5-8891ac5e77bb-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:23.655973 kubelet[2777]: I0909 00:21:23.655562 2777 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb495432-3f3e-471a-aee5-8891ac5e77bb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:24.287371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96f217973a605f22e51c595d9a4c1fb81fbf6371d30c55fbc03320ab37016bbe-shm.mount: Deactivated successfully. Sep 9 00:21:24.287601 systemd[1]: var-lib-kubelet-pods-bb495432\x2d3f3e\x2d471a\x2daee5\x2d8891ac5e77bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbmgk7.mount: Deactivated successfully. Sep 9 00:21:24.287709 systemd[1]: var-lib-kubelet-pods-015e2446\x2d31b6\x2d4421\x2dba58\x2dc4443ade1e79-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqbzbv.mount: Deactivated successfully. Sep 9 00:21:24.287841 systemd[1]: var-lib-kubelet-pods-bb495432\x2d3f3e\x2d471a\x2daee5\x2d8891ac5e77bb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:21:24.287952 systemd[1]: var-lib-kubelet-pods-bb495432\x2d3f3e\x2d471a\x2daee5\x2d8891ac5e77bb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:21:24.456491 kubelet[2777]: I0909 00:21:24.456447 2777 scope.go:117] "RemoveContainer" containerID="f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e" Sep 9 00:21:24.459966 containerd[1603]: time="2025-09-09T00:21:24.459910555Z" level=info msg="RemoveContainer for \"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\"" Sep 9 00:21:24.463359 systemd[1]: Removed slice kubepods-besteffort-pod015e2446_31b6_4421_ba58_c4443ade1e79.slice - libcontainer container kubepods-besteffort-pod015e2446_31b6_4421_ba58_c4443ade1e79.slice. Sep 9 00:21:24.469224 containerd[1603]: time="2025-09-09T00:21:24.468721985Z" level=info msg="RemoveContainer for \"f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e\" returns successfully" Sep 9 00:21:24.469597 kubelet[2777]: I0909 00:21:24.469575 2777 scope.go:117] "RemoveContainer" containerID="36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a" Sep 9 00:21:24.469736 systemd[1]: Removed slice kubepods-burstable-podbb495432_3f3e_471a_aee5_8891ac5e77bb.slice - libcontainer container kubepods-burstable-podbb495432_3f3e_471a_aee5_8891ac5e77bb.slice. Sep 9 00:21:24.469853 systemd[1]: kubepods-burstable-podbb495432_3f3e_471a_aee5_8891ac5e77bb.slice: Consumed 7.228s CPU time, 126M memory peak, 232K read from disk, 15.6M written to disk. 
Sep 9 00:21:24.471475 containerd[1603]: time="2025-09-09T00:21:24.471435191Z" level=info msg="RemoveContainer for \"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\"" Sep 9 00:21:24.476559 containerd[1603]: time="2025-09-09T00:21:24.476531976Z" level=info msg="RemoveContainer for \"36dfb7d0c7b1d1e0ebafbc20667776469c010b5b366e509b3dd21d3dcd50f39a\" returns successfully" Sep 9 00:21:24.476736 kubelet[2777]: I0909 00:21:24.476706 2777 scope.go:117] "RemoveContainer" containerID="348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd" Sep 9 00:21:24.477814 containerd[1603]: time="2025-09-09T00:21:24.477785504Z" level=info msg="RemoveContainer for \"348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd\"" Sep 9 00:21:24.484674 containerd[1603]: time="2025-09-09T00:21:24.484622460Z" level=info msg="RemoveContainer for \"348c309a00e2d62fc8f5136ca2fc164d05c2dc29374b849b22536c8d4f8ae9fd\" returns successfully" Sep 9 00:21:24.484963 kubelet[2777]: I0909 00:21:24.484941 2777 scope.go:117] "RemoveContainer" containerID="ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52" Sep 9 00:21:24.487634 containerd[1603]: time="2025-09-09T00:21:24.487606839Z" level=info msg="RemoveContainer for \"ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52\"" Sep 9 00:21:24.493721 containerd[1603]: time="2025-09-09T00:21:24.493687652Z" level=info msg="RemoveContainer for \"ddc570c4e25a8daacfbfe5bbac2a261e4361d5cd7144b94876f86a4c8392af52\" returns successfully" Sep 9 00:21:24.493925 kubelet[2777]: I0909 00:21:24.493888 2777 scope.go:117] "RemoveContainer" containerID="45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b" Sep 9 00:21:24.495830 containerd[1603]: time="2025-09-09T00:21:24.495803725Z" level=info msg="RemoveContainer for \"45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b\"" Sep 9 00:21:24.499977 containerd[1603]: time="2025-09-09T00:21:24.499947473Z" level=info msg="RemoveContainer for \"45386c1653ae1b4a0617c0660100c5218c921b659208dfcfb80de3305563954b\" returns successfully" Sep 9 00:21:24.500130 kubelet[2777]: I0909 00:21:24.500095 2777 scope.go:117] "RemoveContainer" containerID="29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b" Sep 9 00:21:24.501549 containerd[1603]: time="2025-09-09T00:21:24.501505598Z" level=info msg="RemoveContainer for \"29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b\"" Sep 9 00:21:24.505486 containerd[1603]: time="2025-09-09T00:21:24.505444559Z" level=info msg="RemoveContainer for \"29e1992be45787cc3da36b86ca0732df22d06947a78b17589fe9a67a8ddc261b\" returns successfully" Sep 9 00:21:24.691102 kubelet[2777]: I0909 00:21:24.691048 2777 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015e2446-31b6-4421-ba58-c4443ade1e79" path="/var/lib/kubelet/pods/015e2446-31b6-4421-ba58-c4443ade1e79/volumes" Sep 9 00:21:24.691623 kubelet[2777]: I0909 00:21:24.691608 2777 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb495432-3f3e-471a-aee5-8891ac5e77bb" path="/var/lib/kubelet/pods/bb495432-3f3e-471a-aee5-8891ac5e77bb/volumes" Sep 9 00:21:24.784618 kubelet[2777]: E0909 00:21:24.784567 2777 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:21:25.166724 sshd[4397]: Connection closed by 10.0.0.1 port 40912 Sep 9 00:21:25.167968 sshd-session[4395]: pam_unix(sshd:session): session closed for user core 
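The kubelet's "RemoveContainer" calls above are CRI RuntimeService RPCs served by containerd on its socket. As a rough, hedged illustration (not the kubelet's own code), an equivalent call could be issued with the Go CRI client as sketched below; the socket path is the stock containerd default assumed here, and the container ID is copied from the log:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd socket; the CRI plugin is served on it.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Container ID taken from the "RemoveContainer" log entries above.
	_, err = rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{
		ContainerId: "f8751e2458c7439e924cff7aed9341b40f468debc4aa8c13ec67390ccbcd6d7e",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("container removed")
}
```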
Sep 9 00:21:25.186044 systemd[1]: sshd@24-10.0.0.67:22-10.0.0.1:40912.service: Deactivated successfully. Sep 9 00:21:25.188391 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:21:25.189292 systemd-logind[1576]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:21:25.193259 systemd[1]: Started sshd@25-10.0.0.67:22-10.0.0.1:40924.service - OpenSSH per-connection server daemon (10.0.0.1:40924). Sep 9 00:21:25.194097 systemd-logind[1576]: Removed session 25. Sep 9 00:21:25.253290 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 40924 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:25.255122 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:25.260344 systemd-logind[1576]: New session 26 of user core. Sep 9 00:21:25.271901 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 00:21:25.771852 sshd[4549]: Connection closed by 10.0.0.1 port 40924 Sep 9 00:21:25.770901 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:25.784862 systemd[1]: sshd@25-10.0.0.67:22-10.0.0.1:40924.service: Deactivated successfully. Sep 9 00:21:25.790411 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 00:21:25.791590 systemd-logind[1576]: Session 26 logged out. Waiting for processes to exit. Sep 9 00:21:25.800881 systemd[1]: Started sshd@26-10.0.0.67:22-10.0.0.1:40938.service - OpenSSH per-connection server daemon (10.0.0.1:40938). Sep 9 00:21:25.802548 systemd-logind[1576]: Removed session 26. Sep 9 00:21:25.817996 systemd[1]: Created slice kubepods-burstable-podd1d57a93_f440_405b_a421_744fc4c540e0.slice - libcontainer container kubepods-burstable-podd1d57a93_f440_405b_a421_744fc4c540e0.slice. Sep 9 00:21:25.869468 kubelet[2777]: I0909 00:21:25.869200 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1d57a93-f440-405b-a421-744fc4c540e0-xtables-lock\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.869468 kubelet[2777]: I0909 00:21:25.869255 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1d57a93-f440-405b-a421-744fc4c540e0-etc-cni-netd\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.869468 kubelet[2777]: I0909 00:21:25.869293 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1d57a93-f440-405b-a421-744fc4c540e0-bpf-maps\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.869468 kubelet[2777]: I0909 00:21:25.869311 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1d57a93-f440-405b-a421-744fc4c540e0-hostproc\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.869468 kubelet[2777]: I0909 00:21:25.869350 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1d57a93-f440-405b-a421-744fc4c540e0-cilium-config-path\") pod \"cilium-nkk4x\" (UID: 
\"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.869468 kubelet[2777]: I0909 00:21:25.869396 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1d57a93-f440-405b-a421-744fc4c540e0-lib-modules\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.870264 kubelet[2777]: I0909 00:21:25.869419 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1d57a93-f440-405b-a421-744fc4c540e0-host-proc-sys-kernel\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.870264 kubelet[2777]: I0909 00:21:25.869451 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c4ht\" (UniqueName: \"kubernetes.io/projected/d1d57a93-f440-405b-a421-744fc4c540e0-kube-api-access-8c4ht\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.871902 kubelet[2777]: I0909 00:21:25.871802 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1d57a93-f440-405b-a421-744fc4c540e0-cni-path\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.872036 kubelet[2777]: I0909 00:21:25.871972 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1d57a93-f440-405b-a421-744fc4c540e0-clustermesh-secrets\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.872036 kubelet[2777]: I0909 00:21:25.871992 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1d57a93-f440-405b-a421-744fc4c540e0-host-proc-sys-net\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.872176 kubelet[2777]: I0909 00:21:25.872139 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d1d57a93-f440-405b-a421-744fc4c540e0-cilium-ipsec-secrets\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.872305 kubelet[2777]: I0909 00:21:25.872289 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1d57a93-f440-405b-a421-744fc4c540e0-cilium-cgroup\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.872456 kubelet[2777]: I0909 00:21:25.872407 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1d57a93-f440-405b-a421-744fc4c540e0-hubble-tls\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.872456 kubelet[2777]: I0909 00:21:25.872427 2777 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1d57a93-f440-405b-a421-744fc4c540e0-cilium-run\") pod \"cilium-nkk4x\" (UID: \"d1d57a93-f440-405b-a421-744fc4c540e0\") " pod="kube-system/cilium-nkk4x" Sep 9 00:21:25.885360 sshd[4562]: Accepted publickey for core from 10.0.0.1 port 40938 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:25.887740 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:25.893815 systemd-logind[1576]: New session 27 of user core. Sep 9 00:21:25.902914 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 00:21:25.954407 sshd[4564]: Connection closed by 10.0.0.1 port 40938 Sep 9 00:21:25.954772 sshd-session[4562]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:25.974389 systemd[1]: sshd@26-10.0.0.67:22-10.0.0.1:40938.service: Deactivated successfully. Sep 9 00:21:25.977946 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 00:21:26.000519 systemd-logind[1576]: Session 27 logged out. Waiting for processes to exit. Sep 9 00:21:26.003823 systemd[1]: Started sshd@27-10.0.0.67:22-10.0.0.1:40952.service - OpenSSH per-connection server daemon (10.0.0.1:40952). Sep 9 00:21:26.004837 systemd-logind[1576]: Removed session 27. Sep 9 00:21:26.064740 sshd[4576]: Accepted publickey for core from 10.0.0.1 port 40952 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:21:26.066556 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:26.071859 systemd-logind[1576]: New session 28 of user core. Sep 9 00:21:26.082941 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 00:21:26.123103 kubelet[2777]: E0909 00:21:26.123035 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:26.123764 containerd[1603]: time="2025-09-09T00:21:26.123706251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nkk4x,Uid:d1d57a93-f440-405b-a421-744fc4c540e0,Namespace:kube-system,Attempt:0,}" Sep 9 00:21:26.465673 kubelet[2777]: I0909 00:21:26.465585 2777 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:21:26Z","lastTransitionTime":"2025-09-09T00:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 00:21:26.489710 containerd[1603]: time="2025-09-09T00:21:26.489651641Z" level=info msg="connecting to shim d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0" address="unix:///run/containerd/s/37677fdd594d237cf11168ff9ce18a9744c56eefe5084461e7938d37f35add7c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:21:26.526159 systemd[1]: Started cri-containerd-d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0.scope - libcontainer container d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0. 
Sep 9 00:21:26.591117 containerd[1603]: time="2025-09-09T00:21:26.591068155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nkk4x,Uid:d1d57a93-f440-405b-a421-744fc4c540e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\"" Sep 9 00:21:26.592119 kubelet[2777]: E0909 00:21:26.592082 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:26.754839 containerd[1603]: time="2025-09-09T00:21:26.754665938Z" level=info msg="CreateContainer within sandbox \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:21:26.948646 containerd[1603]: time="2025-09-09T00:21:26.948568086Z" level=info msg="Container 3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:21:27.177083 containerd[1603]: time="2025-09-09T00:21:27.176992791Z" level=info msg="CreateContainer within sandbox \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11\"" Sep 9 00:21:27.177802 containerd[1603]: time="2025-09-09T00:21:27.177725529Z" level=info msg="StartContainer for \"3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11\"" Sep 9 00:21:27.179024 containerd[1603]: time="2025-09-09T00:21:27.178860060Z" level=info msg="connecting to shim 3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11" address="unix:///run/containerd/s/37677fdd594d237cf11168ff9ce18a9744c56eefe5084461e7938d37f35add7c" protocol=ttrpc version=3 Sep 9 00:21:27.210059 systemd[1]: Started cri-containerd-3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11.scope - libcontainer container 3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11. Sep 9 00:21:27.274144 systemd[1]: cri-containerd-3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11.scope: Deactivated successfully. Sep 9 00:21:27.277090 containerd[1603]: time="2025-09-09T00:21:27.277050938Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11\" id:\"3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11\" pid:4643 exited_at:{seconds:1757377287 nanos:276502930}" Sep 9 00:21:27.475392 containerd[1603]: time="2025-09-09T00:21:27.475234630Z" level=info msg="received exit event container_id:\"3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11\" id:\"3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11\" pid:4643 exited_at:{seconds:1757377287 nanos:276502930}" Sep 9 00:21:27.476997 containerd[1603]: time="2025-09-09T00:21:27.476946194Z" level=info msg="StartContainer for \"3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11\" returns successfully" Sep 9 00:21:27.502239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a7e95a11d4204ee22b498b78ec55247fcd545ca2bb969e07acfbc895517fa11-rootfs.mount: Deactivated successfully. 
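The sandbox and mount-cgroup init-container lifecycle logged above (RunPodSandbox, CreateContainer, StartContainer, then a TaskExit once the one-shot init step finishes) corresponds to three CRI calls. The sketch below is illustrative only: the pod metadata matches the log, but the image reference and log directory are placeholders, and a real caller such as the kubelet pulls the image through the CRI ImageService before creating the container:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: metadata matches the cilium-nkk4x pod in the log;
	//    the log directory follows the usual kubelet layout (assumption).
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-nkk4x",
			Uid:       "d1d57a93-f440-405b-a421-744fc4c540e0",
			Namespace: "kube-system",
			Attempt:   0,
		},
		LogDirectory: "/var/log/pods/kube-system_cilium-nkk4x_d1d57a93-f440-405b-a421-744fc4c540e0",
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer: the mount-cgroup init container. The image below is
	//    a placeholder; the log does not record which image was used.
	//    (A real caller pulls it via the CRI ImageService first.)
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.16.1"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer: the init step runs and exits, which containerd then
	//    reports as the TaskExit event seen in the log.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Println("sandbox", sb.PodSandboxId, "container", ctr.ContainerId)
}
```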
Sep 9 00:21:28.483222 kubelet[2777]: E0909 00:21:28.483184 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:28.488206 containerd[1603]: time="2025-09-09T00:21:28.488145983Z" level=info msg="CreateContainer within sandbox \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:21:28.507373 containerd[1603]: time="2025-09-09T00:21:28.507328960Z" level=info msg="Container 58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:21:28.509689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245415332.mount: Deactivated successfully. Sep 9 00:21:28.517196 containerd[1603]: time="2025-09-09T00:21:28.517128287Z" level=info msg="CreateContainer within sandbox \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6\"" Sep 9 00:21:28.518817 containerd[1603]: time="2025-09-09T00:21:28.517779251Z" level=info msg="StartContainer for \"58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6\"" Sep 9 00:21:28.518817 containerd[1603]: time="2025-09-09T00:21:28.518655481Z" level=info msg="connecting to shim 58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6" address="unix:///run/containerd/s/37677fdd594d237cf11168ff9ce18a9744c56eefe5084461e7938d37f35add7c" protocol=ttrpc version=3 Sep 9 00:21:28.546933 systemd[1]: Started cri-containerd-58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6.scope - libcontainer container 58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6. Sep 9 00:21:28.582086 containerd[1603]: time="2025-09-09T00:21:28.581991308Z" level=info msg="StartContainer for \"58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6\" returns successfully" Sep 9 00:21:28.588799 systemd[1]: cri-containerd-58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6.scope: Deactivated successfully. Sep 9 00:21:28.590050 containerd[1603]: time="2025-09-09T00:21:28.590007597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6\" id:\"58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6\" pid:4688 exited_at:{seconds:1757377288 nanos:589514573}" Sep 9 00:21:28.590140 containerd[1603]: time="2025-09-09T00:21:28.590121653Z" level=info msg="received exit event container_id:\"58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6\" id:\"58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6\" pid:4688 exited_at:{seconds:1757377288 nanos:589514573}" Sep 9 00:21:28.614102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58dbb372cdbf1bd4a9105576ea8c3de67ebba025c51a93f4a6bb14d6f3192ed6-rootfs.mount: Deactivated successfully. 
Sep 9 00:21:29.487553 kubelet[2777]: E0909 00:21:29.487498 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:29.500895 containerd[1603]: time="2025-09-09T00:21:29.500856258Z" level=info msg="CreateContainer within sandbox \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:21:29.570227 containerd[1603]: time="2025-09-09T00:21:29.570171697Z" level=info msg="Container 6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:21:29.632114 containerd[1603]: time="2025-09-09T00:21:29.632037665Z" level=info msg="CreateContainer within sandbox \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c\"" Sep 9 00:21:29.633665 containerd[1603]: time="2025-09-09T00:21:29.632933522Z" level=info msg="StartContainer for \"6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c\"" Sep 9 00:21:29.634655 containerd[1603]: time="2025-09-09T00:21:29.634624906Z" level=info msg="connecting to shim 6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c" address="unix:///run/containerd/s/37677fdd594d237cf11168ff9ce18a9744c56eefe5084461e7938d37f35add7c" protocol=ttrpc version=3 Sep 9 00:21:29.664951 systemd[1]: Started cri-containerd-6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c.scope - libcontainer container 6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c. Sep 9 00:21:29.709658 systemd[1]: cri-containerd-6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c.scope: Deactivated successfully. Sep 9 00:21:29.711605 containerd[1603]: time="2025-09-09T00:21:29.711575359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c\" id:\"6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c\" pid:4734 exited_at:{seconds:1757377289 nanos:711336968}" Sep 9 00:21:29.785863 kubelet[2777]: E0909 00:21:29.785726 2777 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:21:29.901003 containerd[1603]: time="2025-09-09T00:21:29.900923065Z" level=info msg="received exit event container_id:\"6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c\" id:\"6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c\" pid:4734 exited_at:{seconds:1757377289 nanos:711336968}" Sep 9 00:21:29.906376 containerd[1603]: time="2025-09-09T00:21:29.906140880Z" level=info msg="StartContainer for \"6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c\" returns successfully" Sep 9 00:21:29.929692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d78ab0ea73e80d0697e35271b8f61cb26ad413549822c48f22110459273a69c-rootfs.mount: Deactivated successfully. 
Sep 9 00:21:30.496037 kubelet[2777]: E0909 00:21:30.495744 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:30.519363 containerd[1603]: time="2025-09-09T00:21:30.519293927Z" level=info msg="CreateContainer within sandbox \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:21:30.860094 containerd[1603]: time="2025-09-09T00:21:30.859401420Z" level=info msg="Container 2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:21:31.174420 containerd[1603]: time="2025-09-09T00:21:31.174336985Z" level=info msg="CreateContainer within sandbox \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4\"" Sep 9 00:21:31.176999 containerd[1603]: time="2025-09-09T00:21:31.175170864Z" level=info msg="StartContainer for \"2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4\"" Sep 9 00:21:31.177270 containerd[1603]: time="2025-09-09T00:21:31.177243018Z" level=info msg="connecting to shim 2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4" address="unix:///run/containerd/s/37677fdd594d237cf11168ff9ce18a9744c56eefe5084461e7938d37f35add7c" protocol=ttrpc version=3 Sep 9 00:21:31.214729 systemd[1]: Started cri-containerd-2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4.scope - libcontainer container 2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4. Sep 9 00:21:31.261935 systemd[1]: cri-containerd-2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4.scope: Deactivated successfully. Sep 9 00:21:31.263865 containerd[1603]: time="2025-09-09T00:21:31.263819442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4\" id:\"2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4\" pid:4772 exited_at:{seconds:1757377291 nanos:263544772}" Sep 9 00:21:31.281676 containerd[1603]: time="2025-09-09T00:21:31.281381728Z" level=info msg="received exit event container_id:\"2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4\" id:\"2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4\" pid:4772 exited_at:{seconds:1757377291 nanos:263544772}" Sep 9 00:21:31.295087 containerd[1603]: time="2025-09-09T00:21:31.295025454Z" level=info msg="StartContainer for \"2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4\" returns successfully" Sep 9 00:21:31.320039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a4e8de1c8974387984c1b03252d63f098adf468d88a28fc672b243bcd1b3cf4-rootfs.mount: Deactivated successfully. 
Sep 9 00:21:31.527105 kubelet[2777]: E0909 00:21:31.526271 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:31.653299 containerd[1603]: time="2025-09-09T00:21:31.653236064Z" level=info msg="CreateContainer within sandbox \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:21:31.818535 containerd[1603]: time="2025-09-09T00:21:31.818369146Z" level=info msg="Container e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:21:31.919515 containerd[1603]: time="2025-09-09T00:21:31.919452410Z" level=info msg="CreateContainer within sandbox \"d59ed54c831d6a65cafa5da36d8b180fdc605bbcf1296cc358255896f58593a0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51\"" Sep 9 00:21:31.920141 containerd[1603]: time="2025-09-09T00:21:31.920099555Z" level=info msg="StartContainer for \"e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51\"" Sep 9 00:21:31.921340 containerd[1603]: time="2025-09-09T00:21:31.921271765Z" level=info msg="connecting to shim e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51" address="unix:///run/containerd/s/37677fdd594d237cf11168ff9ce18a9744c56eefe5084461e7938d37f35add7c" protocol=ttrpc version=3 Sep 9 00:21:31.944962 systemd[1]: Started cri-containerd-e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51.scope - libcontainer container e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51. Sep 9 00:21:32.045245 containerd[1603]: time="2025-09-09T00:21:32.045197800Z" level=info msg="StartContainer for \"e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51\" returns successfully" Sep 9 00:21:32.123308 containerd[1603]: time="2025-09-09T00:21:32.122839347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51\" id:\"7f9e5266e0656dc42fdd5b278bc986f709b03d4ab08a724d5c90b60911170047\" pid:4846 exited_at:{seconds:1757377292 nanos:122400376}" Sep 9 00:21:32.523819 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 9 00:21:32.533310 kubelet[2777]: E0909 00:21:32.533116 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:32.723714 kubelet[2777]: I0909 00:21:32.723636 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nkk4x" podStartSLOduration=7.723605515 podStartE2EDuration="7.723605515s" podCreationTimestamp="2025-09-09 00:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:21:32.723189567 +0000 UTC m=+98.150099573" watchObservedRunningTime="2025-09-09 00:21:32.723605515 +0000 UTC m=+98.150515501" Sep 9 00:21:33.535057 kubelet[2777]: E0909 00:21:33.535010 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:33.689396 kubelet[2777]: E0909 00:21:33.689302 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:33.689396 kubelet[2777]: E0909 00:21:33.689361 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:34.537479 kubelet[2777]: E0909 00:21:34.537423 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:34.941597 containerd[1603]: time="2025-09-09T00:21:34.941492514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51\" id:\"e32c0b98861ba02029ed665117f01f602bcb7c35181ed01badce3cd7dff77157\" pid:5012 exit_status:1 exited_at:{seconds:1757377294 nanos:940810984}" Sep 9 00:21:36.095899 systemd-networkd[1496]: lxc_health: Link UP Sep 9 00:21:36.096742 systemd-networkd[1496]: lxc_health: Gained carrier Sep 9 00:21:36.128721 kubelet[2777]: E0909 00:21:36.128666 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:36.543393 kubelet[2777]: E0909 00:21:36.543334 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:37.527050 containerd[1603]: time="2025-09-09T00:21:37.527003832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51\" id:\"83052c5df732e5525e9c080a522a72fc95988be76482aedfce20625e48331a2d\" pid:5385 exited_at:{seconds:1757377297 nanos:525109268}" Sep 9 00:21:38.153000 systemd-networkd[1496]: lxc_health: Gained IPv6LL Sep 9 00:21:39.659961 containerd[1603]: time="2025-09-09T00:21:39.659892923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51\" id:\"d9717c391cf6185a080ff1ec4fa27104f2da1f3cfe6c121a7133158488b31050\" pid:5418 exited_at:{seconds:1757377299 nanos:659304490}" Sep 9 00:21:41.962572 containerd[1603]: time="2025-09-09T00:21:41.962451763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7df1030992b68a568f3a4c45e2c95a36fabfedc6eef420f7ecc9b0210787f51\" id:\"bff285e0aec12f329dc8c4095a7a664a77d5c2c38d76ab103283aac74d517157\" pid:5444 exited_at:{seconds:1757377301 nanos:961661449}" Sep 9 00:21:42.004964 sshd[4578]: Connection closed by 10.0.0.1 port 40952 Sep 9 00:21:42.003604 sshd-session[4576]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:42.016147 systemd[1]: sshd@27-10.0.0.67:22-10.0.0.1:40952.service: Deactivated successfully. Sep 9 00:21:42.020098 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 00:21:42.024796 systemd-logind[1576]: Session 28 logged out. Waiting for processes to exit. Sep 9 00:21:42.027233 systemd-logind[1576]: Removed session 28.