Sep 9 00:27:19.958930 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:13:49 -00 2025
Sep 9 00:27:19.958980 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a
Sep 9 00:27:19.958995 kernel: BIOS-provided physical RAM map:
Sep 9 00:27:19.959004 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 00:27:19.959013 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 00:27:19.959022 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 00:27:19.959032 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 00:27:19.959041 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 00:27:19.959054 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 00:27:19.959066 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 00:27:19.959075 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 9 00:27:19.959084 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 00:27:19.959093 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 00:27:19.959102 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 00:27:19.959113 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 00:27:19.959126 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 00:27:19.959138 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 00:27:19.959148 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 00:27:19.959158 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 00:27:19.959167 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 00:27:19.959177 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 00:27:19.959187 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 00:27:19.959196 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 00:27:19.959206 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:27:19.959215 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 00:27:19.959228 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:27:19.959300 kernel: NX (Execute Disable) protection: active
Sep 9 00:27:19.959310 kernel: APIC: Static calls initialized
Sep 9 00:27:19.959319 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 9 00:27:19.959329 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 9 00:27:19.959339 kernel: extended physical RAM map:
Sep 9 00:27:19.959349 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 00:27:19.959358 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 00:27:19.959368 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 00:27:19.959378 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 00:27:19.959387 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 00:27:19.959401 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 00:27:19.959431 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 00:27:19.959442 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 9 00:27:19.959452 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 9 00:27:19.959468 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 9 00:27:19.959478 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 9 00:27:19.959492 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 9 00:27:19.959502 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 00:27:19.959512 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 00:27:19.959524 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 00:27:19.959537 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 00:27:19.959550 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 00:27:19.959563 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 00:27:19.959576 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 00:27:19.959589 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 00:27:19.959601 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 00:27:19.959632 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 00:27:19.959645 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 00:27:19.959658 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 00:27:19.959670 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:27:19.959681 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 00:27:19.959691 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:27:19.959705 kernel: efi: EFI v2.7 by EDK II
Sep 9 00:27:19.959715 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 9 00:27:19.959726 kernel: random: crng init done
Sep 9 00:27:19.959738 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 9 00:27:19.959749 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 9 00:27:19.959765 kernel: secureboot: Secure boot disabled
Sep 9 00:27:19.959775 kernel: SMBIOS 2.8 present.
Sep 9 00:27:19.959785 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 9 00:27:19.959795 kernel: DMI: Memory slots populated: 1/1
Sep 9 00:27:19.959806 kernel: Hypervisor detected: KVM
Sep 9 00:27:19.959816 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 00:27:19.959826 kernel: kvm-clock: using sched offset of 5186586792 cycles
Sep 9 00:27:19.959837 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 00:27:19.959848 kernel: tsc: Detected 2794.750 MHz processor
Sep 9 00:27:19.959859 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 00:27:19.959869 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 00:27:19.959882 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 9 00:27:19.959893 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 9 00:27:19.959903 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 00:27:19.959914 kernel: Using GB pages for direct mapping
Sep 9 00:27:19.959924 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:27:19.959935 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 9 00:27:19.959945 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:27:19.959956 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:27:19.959967 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:27:19.959981 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 9 00:27:19.959991 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:27:19.960002 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:27:19.960013 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:27:19.960023 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:27:19.960034 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 00:27:19.960044 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 9 00:27:19.960054 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 9 00:27:19.960068 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 9 00:27:19.960078 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 9 00:27:19.960088 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 9 00:27:19.960098 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 9 00:27:19.960108 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 9 00:27:19.960118 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 9 00:27:19.960128 kernel: No NUMA configuration found
Sep 9 00:27:19.960138 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 9 00:27:19.960147 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 9 00:27:19.960157 kernel: Zone ranges:
Sep 9 00:27:19.960172 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 00:27:19.960182 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 9 00:27:19.960192 kernel: Normal empty
Sep 9 00:27:19.960202 kernel: Device empty
Sep 9 00:27:19.960212 kernel: Movable zone start for each node
Sep 9 00:27:19.960222 kernel: Early memory node ranges
Sep 9 00:27:19.960247 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 9 00:27:19.960258 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 9 00:27:19.960272 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 9 00:27:19.960287 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 9 00:27:19.960296 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 9 00:27:19.960306 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 9 00:27:19.960316 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 9 00:27:19.960327 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 9 00:27:19.960337 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 9 00:27:19.960347 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:27:19.960361 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 9 00:27:19.960383 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 9 00:27:19.960393 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:27:19.960404 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 9 00:27:19.960414 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 9 00:27:19.960429 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 00:27:19.960440 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 9 00:27:19.960452 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 9 00:27:19.960463 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 00:27:19.960474 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 00:27:19.960489 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 00:27:19.960499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 00:27:19.960510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 00:27:19.960522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 00:27:19.960544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 00:27:19.960557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 00:27:19.960578 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 00:27:19.960590 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 00:27:19.960601 kernel: TSC deadline timer available
Sep 9 00:27:19.960647 kernel: CPU topo: Max. logical packages: 1
Sep 9 00:27:19.960662 kernel: CPU topo: Max. logical dies: 1
Sep 9 00:27:19.960676 kernel: CPU topo: Max. dies per package: 1
Sep 9 00:27:19.960689 kernel: CPU topo: Max. threads per core: 1
Sep 9 00:27:19.960702 kernel: CPU topo: Num. cores per package: 4
Sep 9 00:27:19.960716 kernel: CPU topo: Num. threads per package: 4
Sep 9 00:27:19.960729 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 00:27:19.960743 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 00:27:19.960757 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 00:27:19.960770 kernel: kvm-guest: setup PV sched yield
Sep 9 00:27:19.960790 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 9 00:27:19.960803 kernel: Booting paravirtualized kernel on KVM
Sep 9 00:27:19.960816 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 00:27:19.960828 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 00:27:19.960845 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 00:27:19.960856 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 00:27:19.960867 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 00:27:19.960877 kernel: kvm-guest: PV spinlocks enabled
Sep 9 00:27:19.960892 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 00:27:19.960904 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a
Sep 9 00:27:19.960920 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:27:19.960931 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:27:19.960942 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:27:19.960952 kernel: Fallback order for Node 0: 0
Sep 9 00:27:19.960963 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 9 00:27:19.960973 kernel: Policy zone: DMA32
Sep 9 00:27:19.960984 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:27:19.960998 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:27:19.961009 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 00:27:19.961020 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 00:27:19.961031 kernel: Dynamic Preempt: voluntary
Sep 9 00:27:19.961042 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:27:19.961054 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:27:19.961066 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:27:19.961077 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:27:19.961087 kernel: Rude variant of Tasks RCU enabled.
Sep 9 00:27:19.961101 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:27:19.961112 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:27:19.961127 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:27:19.961138 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:27:19.961149 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:27:19.961159 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:27:19.961170 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 00:27:19.961181 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:27:19.961192 kernel: Console: colour dummy device 80x25
Sep 9 00:27:19.961206 kernel: printk: legacy console [ttyS0] enabled
Sep 9 00:27:19.961217 kernel: ACPI: Core revision 20240827
Sep 9 00:27:19.961228 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 00:27:19.961281 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 00:27:19.961292 kernel: x2apic enabled
Sep 9 00:27:19.961303 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 00:27:19.961315 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 00:27:19.961326 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 00:27:19.961337 kernel: kvm-guest: setup PV IPIs
Sep 9 00:27:19.961353 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 00:27:19.961364 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 9 00:27:19.961375 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 9 00:27:19.961386 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 00:27:19.961396 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 00:27:19.961407 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 00:27:19.961419 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 00:27:19.961429 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 00:27:19.961440 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 00:27:19.961454 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 00:27:19.961465 kernel: active return thunk: retbleed_return_thunk
Sep 9 00:27:19.961476 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 00:27:19.961492 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 00:27:19.961503 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 00:27:19.961514 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 00:27:19.961525 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 00:27:19.961536 kernel: active return thunk: srso_return_thunk
Sep 9 00:27:19.961551 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 00:27:19.961562 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 00:27:19.961573 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 00:27:19.961584 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 00:27:19.961595 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 00:27:19.961607 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 00:27:19.961631 kernel: Freeing SMP alternatives memory: 32K
Sep 9 00:27:19.961642 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:27:19.961653 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 00:27:19.961668 kernel: landlock: Up and running.
Sep 9 00:27:19.961679 kernel: SELinux: Initializing.
Sep 9 00:27:19.961690 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:27:19.961700 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:27:19.961711 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 00:27:19.961721 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 00:27:19.961732 kernel: ... version: 0
Sep 9 00:27:19.961743 kernel: ... bit width: 48
Sep 9 00:27:19.961754 kernel: ... generic registers: 6
Sep 9 00:27:19.961769 kernel: ... value mask: 0000ffffffffffff
Sep 9 00:27:19.961781 kernel: ... max period: 00007fffffffffff
Sep 9 00:27:19.961791 kernel: ... fixed-purpose events: 0
Sep 9 00:27:19.961802 kernel: ... event mask: 000000000000003f
Sep 9 00:27:19.961813 kernel: signal: max sigframe size: 1776
Sep 9 00:27:19.961824 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:27:19.961836 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:27:19.961852 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 00:27:19.961884 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:27:19.961922 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 00:27:19.961934 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 00:27:19.961945 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:27:19.961956 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 9 00:27:19.961968 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2428K rwdata, 9960K rodata, 54036K init, 2932K bss, 137196K reserved, 0K cma-reserved)
Sep 9 00:27:19.961979 kernel: devtmpfs: initialized
Sep 9 00:27:19.961995 kernel: x86/mm: Memory block size: 128MB
Sep 9 00:27:19.962006 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 9 00:27:19.962017 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 9 00:27:19.962032 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 9 00:27:19.962043 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 9 00:27:19.962054 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 9 00:27:19.962065 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 9 00:27:19.962076 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:27:19.962087 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:27:19.962098 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:27:19.962109 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:27:19.962120 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:27:19.962135 kernel: audit: type=2000 audit(1757377636.075:1): state=initialized audit_enabled=0 res=1
Sep 9 00:27:19.962145 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:27:19.962156 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 00:27:19.962166 kernel: cpuidle: using governor menu
Sep 9 00:27:19.962177 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:27:19.962188 kernel: dca service started, version 1.12.1
Sep 9 00:27:19.962199 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 9 00:27:19.962210 kernel: PCI: Using configuration type 1 for base access
Sep 9 00:27:19.962221 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 00:27:19.962255 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:27:19.962266 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:27:19.962278 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:27:19.962289 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:27:19.962300 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:27:19.962310 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:27:19.962321 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:27:19.962332 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:27:19.962342 kernel: ACPI: Interpreter enabled
Sep 9 00:27:19.962358 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 00:27:19.962368 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 00:27:19.962379 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 00:27:19.962390 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 00:27:19.962401 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 00:27:19.962412 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:27:19.962730 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:27:19.962896 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 00:27:19.963060 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 00:27:19.963075 kernel: PCI host bridge to bus 0000:00
Sep 9 00:27:19.963250 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 00:27:19.963401 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 00:27:19.963541 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 00:27:19.963699 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 9 00:27:19.963842 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 9 00:27:19.963988 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 9 00:27:19.964134 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:27:19.964388 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 00:27:19.964580 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 00:27:19.964758 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 9 00:27:19.964921 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 9 00:27:19.965089 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 9 00:27:19.965275 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 00:27:19.965503 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 00:27:19.965712 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 9 00:27:19.965871 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 9 00:27:19.966038 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 9 00:27:19.966293 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 00:27:19.966470 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 9 00:27:19.966646 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 9 00:27:19.966810 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 9 00:27:19.967003 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 00:27:19.967168 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 9 00:27:19.967375 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 9 00:27:19.967532 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 9 00:27:19.967689 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 9 00:27:19.967857 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 00:27:19.968007 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 00:27:19.968166 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 00:27:19.968357 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 9 00:27:19.968502 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 9 00:27:19.968685 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 00:27:19.968849 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 9 00:27:19.968866 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 00:27:19.968878 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 00:27:19.968889 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 00:27:19.968900 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 00:27:19.968911 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 00:27:19.968922 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 00:27:19.968939 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 00:27:19.968950 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 00:27:19.968961 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 00:27:19.968972 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 00:27:19.968984 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 00:27:19.968995 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 00:27:19.969006 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 00:27:19.969017 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 00:27:19.969029 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 00:27:19.969044 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 00:27:19.969055 kernel: iommu: Default domain type: Translated
Sep 9 00:27:19.969067 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 00:27:19.969078 kernel: efivars: Registered efivars operations
Sep 9 00:27:19.969089 kernel: PCI: Using ACPI for IRQ routing
Sep 9 00:27:19.969100 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 00:27:19.969112 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 9 00:27:19.969123 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 9 00:27:19.969134 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 9 00:27:19.969149 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 9 00:27:19.969160 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 9 00:27:19.969171 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 9 00:27:19.969182 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 9 00:27:19.969192 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 9 00:27:19.969398 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 00:27:19.969562 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 00:27:19.969749 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 00:27:19.969773 kernel: vgaarb: loaded
Sep 9 00:27:19.969785 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 00:27:19.969797 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 00:27:19.969808 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 00:27:19.969818 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:27:19.969829 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:27:19.969841 kernel: pnp: PnP ACPI init
Sep 9 00:27:19.970092 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 9 00:27:19.970121 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 00:27:19.970133 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 00:27:19.970144 kernel: NET: Registered PF_INET protocol family
Sep 9 00:27:19.970156 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:27:19.970167 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:27:19.970179 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:27:19.970191 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:27:19.970202 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:27:19.970213 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:27:19.970246 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:27:19.970260 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:27:19.970284 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:27:19.970296 kernel: NET: Registered PF_XDP protocol family
Sep 9 00:27:19.970471 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 9 00:27:19.970648 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 9 00:27:19.970814 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 00:27:19.970970 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 00:27:19.971124 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 00:27:19.971298 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 9 00:27:19.971481 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 9 00:27:19.971652 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 9 00:27:19.971670 kernel: PCI: CLS 0 bytes, default 64 Sep 9 00:27:19.971682 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Sep 9 00:27:19.971695 kernel: Initialise system trusted keyrings Sep 9 00:27:19.971713 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 00:27:19.971725 kernel: Key type asymmetric registered Sep 9 00:27:19.971736 kernel: Asymmetric key parser 'x509' registered Sep 9 00:27:19.971748 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 00:27:19.971760 kernel: io scheduler mq-deadline registered Sep 9 00:27:19.971770 kernel: io scheduler kyber registered Sep 9 00:27:19.971782 kernel: io scheduler bfq registered Sep 9 00:27:19.971797 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 00:27:19.971810 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 00:27:19.971822 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 00:27:19.971834 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 00:27:19.971845 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:27:19.971857 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 00:27:19.971869 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 00:27:19.971881 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 00:27:19.971892 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 00:27:19.972093 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 9 00:27:19.972113 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 00:27:19.972294 kernel: rtc_cmos 00:04: registered as rtc0 Sep 9 00:27:19.972460 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:27:19 UTC 
(1757377639) Sep 9 00:27:19.972633 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 9 00:27:19.972651 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 00:27:19.972663 kernel: efifb: probing for efifb Sep 9 00:27:19.972675 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 9 00:27:19.972693 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 9 00:27:19.972704 kernel: efifb: scrolling: redraw Sep 9 00:27:19.972716 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 9 00:27:19.972728 kernel: Console: switching to colour frame buffer device 160x50 Sep 9 00:27:19.972739 kernel: fb0: EFI VGA frame buffer device Sep 9 00:27:19.972751 kernel: pstore: Using crash dump compression: deflate Sep 9 00:27:19.972763 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 00:27:19.972774 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:27:19.972786 kernel: Segment Routing with IPv6 Sep 9 00:27:19.972802 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:27:19.972814 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:27:19.972825 kernel: Key type dns_resolver registered Sep 9 00:27:19.972837 kernel: IPI shorthand broadcast: enabled Sep 9 00:27:19.972848 kernel: sched_clock: Marking stable (3904004042, 323068268)->(4255339701, -28267391) Sep 9 00:27:19.972860 kernel: registered taskstats version 1 Sep 9 00:27:19.972871 kernel: Loading compiled-in X.509 certificates Sep 9 00:27:19.972883 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: f610abecf8d2943295243a86f7aa958542b6f677' Sep 9 00:27:19.972894 kernel: Demotion targets for Node 0: null Sep 9 00:27:19.972910 kernel: Key type .fscrypt registered Sep 9 00:27:19.972922 kernel: Key type fscrypt-provisioning registered Sep 9 00:27:19.972933 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 9 00:27:19.972945 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:27:19.972956 kernel: ima: No architecture policies found Sep 9 00:27:19.972968 kernel: clk: Disabling unused clocks Sep 9 00:27:19.972979 kernel: Warning: unable to open an initial console. Sep 9 00:27:19.972991 kernel: Freeing unused kernel image (initmem) memory: 54036K Sep 9 00:27:19.973004 kernel: Write protecting the kernel read-only data: 24576k Sep 9 00:27:19.973019 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 9 00:27:19.973030 kernel: Run /init as init process Sep 9 00:27:19.973042 kernel: with arguments: Sep 9 00:27:19.973053 kernel: /init Sep 9 00:27:19.973064 kernel: with environment: Sep 9 00:27:19.973075 kernel: HOME=/ Sep 9 00:27:19.973086 kernel: TERM=linux Sep 9 00:27:19.973098 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:27:19.973110 systemd[1]: Successfully made /usr/ read-only. Sep 9 00:27:19.973130 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:27:19.973143 systemd[1]: Detected virtualization kvm. Sep 9 00:27:19.973155 systemd[1]: Detected architecture x86-64. Sep 9 00:27:19.973167 systemd[1]: Running in initrd. Sep 9 00:27:19.973179 systemd[1]: No hostname configured, using default hostname. Sep 9 00:27:19.973191 systemd[1]: Hostname set to . Sep 9 00:27:19.973203 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:27:19.973217 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:27:19.973229 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 9 00:27:19.973261 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:27:19.973273 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:27:19.973285 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:27:19.973296 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:27:19.973309 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:27:19.973329 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:27:19.973341 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:27:19.973353 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:27:19.973365 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:27:19.973377 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:27:19.973389 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:27:19.973401 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:27:19.973413 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:27:19.973428 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:27:19.973441 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:27:19.973453 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:27:19.973464 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 00:27:19.973477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:27:19.973489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 9 00:27:19.973501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:27:19.973513 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:27:19.973525 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:27:19.973541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:27:19.973553 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:27:19.973564 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 00:27:19.973575 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:27:19.973586 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:27:19.973597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:27:19.973608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:27:19.973629 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:27:19.973644 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:27:19.973655 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:27:19.973696 systemd-journald[219]: Collecting audit messages is disabled. Sep 9 00:27:19.973726 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:27:19.973737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:27:19.973748 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:27:19.973761 systemd-journald[219]: Journal started Sep 9 00:27:19.973789 systemd-journald[219]: Runtime Journal (/run/log/journal/52e341c07db44cd3a528540cb1e35b99) is 6M, max 48.4M, 42.4M free. 
Sep 9 00:27:19.959009 systemd-modules-load[221]: Inserted module 'overlay' Sep 9 00:27:19.979263 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:27:19.985546 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:27:19.989122 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:27:19.991357 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:27:19.998294 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:27:20.002008 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 9 00:27:20.002253 kernel: Bridge firewalling registered Sep 9 00:27:20.007176 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:27:20.011510 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:27:20.013300 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:27:20.021597 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 00:27:20.028998 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 00:27:20.032338 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:27:20.034478 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:27:20.042451 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:27:20.047844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 9 00:27:20.061903 dracut-cmdline[256]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a Sep 9 00:27:20.116430 systemd-resolved[265]: Positive Trust Anchors: Sep 9 00:27:20.116463 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:27:20.116501 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:27:20.119983 systemd-resolved[265]: Defaulting to hostname 'linux'. Sep 9 00:27:20.121642 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:27:20.128152 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:27:20.204291 kernel: SCSI subsystem initialized Sep 9 00:27:20.217344 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:27:20.229278 kernel: iscsi: registered transport (tcp) Sep 9 00:27:20.253291 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:27:20.253376 kernel: QLogic iSCSI HBA Driver Sep 9 00:27:20.279261 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 9 00:27:20.311188 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:27:20.315438 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:27:20.379413 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:27:20.383721 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:27:20.459315 kernel: raid6: avx2x4 gen() 20978 MB/s Sep 9 00:27:20.476289 kernel: raid6: avx2x2 gen() 19250 MB/s Sep 9 00:27:20.493635 kernel: raid6: avx2x1 gen() 15900 MB/s Sep 9 00:27:20.493727 kernel: raid6: using algorithm avx2x4 gen() 20978 MB/s Sep 9 00:27:20.511918 kernel: raid6: .... xor() 5054 MB/s, rmw enabled Sep 9 00:27:20.512034 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:27:20.546400 kernel: xor: automatically using best checksumming function avx Sep 9 00:27:20.757317 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:27:20.772894 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:27:20.777014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:27:20.811661 systemd-udevd[471]: Using default interface naming scheme 'v255'. Sep 9 00:27:20.818346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:27:20.823046 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:27:20.853744 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Sep 9 00:27:20.883655 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:27:20.887437 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:27:20.977820 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:27:20.981691 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 9 00:27:21.026261 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 00:27:21.026491 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:27:21.034462 kernel: AES CTR mode by8 optimization enabled Sep 9 00:27:21.048814 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:27:21.096358 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:27:21.096481 kernel: GPT:9289727 != 19775487 Sep 9 00:27:21.096510 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:27:21.096541 kernel: GPT:9289727 != 19775487 Sep 9 00:27:21.096571 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:27:21.096610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:27:21.079724 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:27:21.079868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:27:21.105769 kernel: libata version 3.00 loaded. Sep 9 00:27:21.103319 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:27:21.107659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:27:21.109519 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:27:21.139272 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 00:27:21.142827 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 00:27:21.143095 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 00:27:21.143123 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 00:27:21.147729 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Sep 9 00:27:21.155089 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 00:27:21.155347 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 00:27:21.155535 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 00:27:21.155793 kernel: scsi host0: ahci Sep 9 00:27:21.155981 kernel: scsi host1: ahci Sep 9 00:27:21.156129 kernel: scsi host2: ahci Sep 9 00:27:21.156309 kernel: scsi host3: ahci Sep 9 00:27:21.156526 kernel: scsi host4: ahci Sep 9 00:27:21.156699 kernel: scsi host5: ahci Sep 9 00:27:21.162625 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 9 00:27:21.162654 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 9 00:27:21.164432 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 9 00:27:21.164457 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 9 00:27:21.166025 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:27:21.170315 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 9 00:27:21.170333 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 9 00:27:21.177500 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:27:21.198880 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 00:27:21.206813 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:27:21.206911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:27:21.206981 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:27:21.213301 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 00:27:21.222964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:27:21.241750 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:27:21.266778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:27:21.465705 disk-uuid[635]: Primary Header is updated. Sep 9 00:27:21.465705 disk-uuid[635]: Secondary Entries is updated. Sep 9 00:27:21.465705 disk-uuid[635]: Secondary Header is updated. Sep 9 00:27:21.471280 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:27:21.475260 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 00:27:21.476271 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 00:27:21.477271 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 00:27:21.477301 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:27:21.478589 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 00:27:21.478620 kernel: ata3.00: applying bridge limits Sep 9 00:27:21.480284 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 00:27:21.482394 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 00:27:21.482438 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:27:21.484085 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:27:21.484194 kernel: ata3.00: configured for UDMA/100 Sep 9 00:27:21.484215 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 00:27:21.487508 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 00:27:21.538783 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 00:27:21.539140 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:27:21.551309 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:27:21.943906 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Sep 9 00:27:21.951307 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:27:21.954063 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:27:21.956666 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:27:21.960738 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:27:22.000667 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:27:22.492360 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:27:22.494081 disk-uuid[641]: The operation has completed successfully. Sep 9 00:27:22.536438 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:27:22.536635 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:27:22.577068 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:27:22.608709 sh[670]: Success Sep 9 00:27:22.632342 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:27:22.632427 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:27:22.632441 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 00:27:22.646279 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 00:27:22.679732 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:27:22.684461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:27:22.706628 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 9 00:27:22.714952 kernel: BTRFS: device fsid eee400a1-88b9-480b-9c0c-54d171140f9a devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (682) Sep 9 00:27:22.715007 kernel: BTRFS info (device dm-0): first mount of filesystem eee400a1-88b9-480b-9c0c-54d171140f9a Sep 9 00:27:22.715023 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:27:22.725310 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:27:22.725407 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 00:27:22.726929 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:27:22.729464 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:27:22.731877 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:27:22.734906 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:27:22.737937 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:27:22.770292 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (715) Sep 9 00:27:22.773344 kernel: BTRFS info (device vda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:27:22.773377 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:27:22.776400 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:27:22.776426 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:27:22.783263 kernel: BTRFS info (device vda6): last unmount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:27:22.783909 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:27:22.788204 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 9 00:27:22.893957 ignition[757]: Ignition 2.21.0 Sep 9 00:27:22.894437 ignition[757]: Stage: fetch-offline Sep 9 00:27:22.896194 ignition[757]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:27:22.896215 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:27:22.896449 ignition[757]: parsed url from cmdline: "" Sep 9 00:27:22.896455 ignition[757]: no config URL provided Sep 9 00:27:22.896462 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:27:22.896473 ignition[757]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:27:22.896502 ignition[757]: op(1): [started] loading QEMU firmware config module Sep 9 00:27:22.896511 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:27:22.907071 ignition[757]: op(1): [finished] loading QEMU firmware config module Sep 9 00:27:22.924876 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:27:22.931470 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:27:22.981868 ignition[757]: parsing config with SHA512: 96ce8986bee64d7f9fc6b1f8a0fb457499e976ce249f8d9daf2a7117b59158e08bca615396743f96e9c34427be24a9157bdbc17d9dd7599f0d2abcd1f199fcae Sep 9 00:27:22.986811 unknown[757]: fetched base config from "system" Sep 9 00:27:22.987060 unknown[757]: fetched user config from "qemu" Sep 9 00:27:23.005937 systemd-networkd[860]: lo: Link UP Sep 9 00:27:23.005951 systemd-networkd[860]: lo: Gained carrier Sep 9 00:27:23.007737 systemd-networkd[860]: Enumeration completed Sep 9 00:27:23.007896 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:27:23.008129 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 00:27:23.560989 ignition[757]: fetch-offline: fetch-offline passed Sep 9 00:27:23.008134 systemd-networkd[860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:27:23.561115 ignition[757]: Ignition finished successfully Sep 9 00:27:23.558293 systemd[1]: Reached target network.target - Network. Sep 9 00:27:23.558838 systemd-networkd[860]: eth0: Link UP Sep 9 00:27:23.560619 systemd-networkd[860]: eth0: Gained carrier Sep 9 00:27:23.560644 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:27:23.564688 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:27:23.567578 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:27:23.568729 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:27:23.586403 systemd-networkd[860]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:27:23.646888 ignition[864]: Ignition 2.21.0 Sep 9 00:27:23.646910 ignition[864]: Stage: kargs Sep 9 00:27:23.647347 ignition[864]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:27:23.647363 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:27:23.650610 ignition[864]: kargs: kargs passed Sep 9 00:27:23.650719 ignition[864]: Ignition finished successfully Sep 9 00:27:23.655050 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:27:23.657745 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 00:27:23.706019 ignition[873]: Ignition 2.21.0 Sep 9 00:27:23.706031 ignition[873]: Stage: disks Sep 9 00:27:23.706168 ignition[873]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:27:23.706179 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:27:23.707637 ignition[873]: disks: disks passed Sep 9 00:27:23.707706 ignition[873]: Ignition finished successfully Sep 9 00:27:23.714422 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:27:23.715921 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:27:23.718193 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:27:23.721720 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:27:23.721839 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:27:23.723758 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:27:23.727846 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:27:23.774621 systemd-resolved[265]: Detected conflict on linux IN A 10.0.0.55 Sep 9 00:27:23.774640 systemd-resolved[265]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Sep 9 00:27:23.776707 systemd-fsck[883]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 00:27:23.786197 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:27:23.790313 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:27:23.971292 kernel: EXT4-fs (vda9): mounted filesystem 91c315eb-0fc3-4e95-bf9b-06acc06be6bc r/w with ordered data mode. Quota mode: none. Sep 9 00:27:23.972582 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:27:23.974654 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:27:23.978721 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 9 00:27:23.981327 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:27:23.982653 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:27:23.982703 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:27:23.982730 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:27:24.005592 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:27:24.009531 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:27:24.015019 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891) Sep 9 00:27:24.015053 kernel: BTRFS info (device vda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:27:24.015067 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:27:24.017856 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:27:24.017915 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:27:24.020018 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:27:24.082384 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:27:24.088031 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:27:24.093760 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:27:24.101855 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:27:24.218428 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:27:24.222608 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:27:24.226303 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 9 00:27:24.250287 kernel: BTRFS info (device vda6): last unmount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5
Sep 9 00:27:24.250435 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:27:24.269109 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:27:24.294331 ignition[1005]: INFO : Ignition 2.21.0
Sep 9 00:27:24.294331 ignition[1005]: INFO : Stage: mount
Sep 9 00:27:24.298115 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:27:24.298115 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:27:24.300461 ignition[1005]: INFO : mount: mount passed
Sep 9 00:27:24.300461 ignition[1005]: INFO : Ignition finished successfully
Sep 9 00:27:24.305304 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 00:27:24.309711 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 00:27:24.684536 systemd-networkd[860]: eth0: Gained IPv6LL
Sep 9 00:27:24.974649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:27:25.003288 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1018)
Sep 9 00:27:25.005737 kernel: BTRFS info (device vda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5
Sep 9 00:27:25.005764 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:27:25.009263 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 00:27:25.009286 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 00:27:25.011154 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:27:25.065258 ignition[1035]: INFO : Ignition 2.21.0
Sep 9 00:27:25.065258 ignition[1035]: INFO : Stage: files
Sep 9 00:27:25.067092 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:27:25.067092 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:27:25.070773 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:27:25.073008 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:27:25.073008 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:27:25.077460 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:27:25.078950 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:27:25.078950 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:27:25.078167 unknown[1035]: wrote ssh authorized keys file for user: core
Sep 9 00:27:25.083210 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 9 00:27:25.083210 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 9 00:27:25.140503 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:27:25.505863 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 9 00:27:25.505863 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:27:25.510507 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 9 00:27:25.625528 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 00:27:25.814380 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:27:25.814380 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:27:25.826200 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:27:25.826200 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:27:25.826200 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:27:25.826200 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:27:25.826200 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:27:25.826200 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:27:25.826200 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:27:25.963281 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:27:26.009571 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:27:26.009571 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 00:27:26.134301 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 00:27:26.134301 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 00:27:26.165322 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 9 00:27:26.415284 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 00:27:27.480035 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 00:27:27.480035 ignition[1035]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 00:27:27.502415 ignition[1035]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:27:27.938742 ignition[1035]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:27:27.938742 ignition[1035]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 00:27:27.938742 ignition[1035]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 00:27:27.938742 ignition[1035]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:27:27.938742 ignition[1035]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:27:27.954650 ignition[1035]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 00:27:27.954650 ignition[1035]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:27:27.990213 ignition[1035]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:27:28.001630 ignition[1035]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:27:28.003344 ignition[1035]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:27:28.003344 ignition[1035]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:27:28.003344 ignition[1035]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:27:28.003344 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:27:28.003344 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:27:28.003344 ignition[1035]: INFO : files: files passed
Sep 9 00:27:28.003344 ignition[1035]: INFO : Ignition finished successfully
Sep 9 00:27:28.010754 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 00:27:28.013578 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 00:27:28.017439 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 00:27:28.036763 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:27:28.037069 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 00:27:28.042365 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 00:27:28.046964 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:27:28.046964 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:27:28.051753 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:27:28.049941 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:27:28.051980 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 00:27:28.055692 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:27:28.122049 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 00:27:28.122217 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 00:27:28.132995 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 00:27:28.136356 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 00:27:28.137721 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 00:27:28.139178 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 00:27:28.186039 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:27:28.188345 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 00:27:28.222113 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:27:28.225420 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:27:28.227018 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 00:27:28.229641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 00:27:28.229960 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:27:28.233911 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 00:27:28.236330 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 00:27:28.236981 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 00:27:28.240095 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:27:28.244600 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 00:27:28.244824 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 00:27:28.247903 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 00:27:28.250828 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:27:28.254063 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 00:27:28.256379 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 00:27:28.256846 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 00:27:28.257260 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 00:27:28.257432 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:27:28.258542 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:27:28.258931 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:27:28.259260 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 00:27:28.259422 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:27:28.259897 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 00:27:28.260207 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:27:28.275839 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 00:27:28.276043 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:27:28.278329 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 00:27:28.279387 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 00:27:28.283395 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:27:28.284842 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 00:27:28.288079 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 00:27:28.289318 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:27:28.289459 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:27:28.290186 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:27:28.290305 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:27:28.294228 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 00:27:28.294436 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:27:28.301565 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 00:27:28.301767 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 00:27:28.308145 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 00:27:28.308956 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 00:27:28.309160 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:27:28.318015 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 00:27:28.319070 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:27:28.319264 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:27:28.320357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:27:28.320502 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:27:28.329404 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:27:28.329559 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 00:27:28.352616 ignition[1090]: INFO : Ignition 2.21.0
Sep 9 00:27:28.355325 ignition[1090]: INFO : Stage: umount
Sep 9 00:27:28.355325 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:27:28.355325 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:27:28.354426 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:27:28.363428 ignition[1090]: INFO : umount: umount passed
Sep 9 00:27:28.363428 ignition[1090]: INFO : Ignition finished successfully
Sep 9 00:27:28.362814 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:27:28.362969 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 00:27:28.364977 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:27:28.365125 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 00:27:28.367291 systemd[1]: Stopped target network.target - Network.
Sep 9 00:27:28.368815 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:27:28.368907 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 00:27:28.371069 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:27:28.371167 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 00:27:28.373086 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:27:28.373159 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 00:27:28.375001 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 00:27:28.375054 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 00:27:28.376033 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:27:28.376089 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 00:27:28.376775 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 00:27:28.380434 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 00:27:28.390160 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:27:28.390534 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 00:27:28.395212 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 00:27:28.395583 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 00:27:28.395634 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:27:28.402172 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:27:28.402567 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:27:28.402726 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 00:27:28.406962 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 00:27:28.408442 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 00:27:28.410785 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:27:28.410848 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:27:28.414293 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 00:27:28.433716 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:27:28.433850 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:27:28.436400 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:27:28.436462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:27:28.439135 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:27:28.439196 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:27:28.441344 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:27:28.442770 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 00:27:28.460269 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:27:28.460514 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:27:28.463216 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:27:28.463368 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 00:27:28.465217 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:27:28.465338 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:27:28.466486 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:27:28.466541 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:27:28.466854 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:27:28.466923 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:27:28.473139 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:27:28.473216 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:27:28.477189 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:27:28.477278 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:27:28.483001 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 00:27:28.484441 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 00:27:28.484508 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:27:28.489392 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 00:27:28.489462 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:27:28.508009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:27:28.508075 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:27:28.525512 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:27:28.525653 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 00:27:28.528587 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 00:27:28.532168 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 00:27:28.565469 systemd[1]: Switching root.
Sep 9 00:27:28.602429 systemd-journald[219]: Journal stopped
Sep 9 00:27:31.538966 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Sep 9 00:27:31.539045 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:27:31.539063 kernel: SELinux: policy capability open_perms=1
Sep 9 00:27:31.545323 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:27:31.545369 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:27:31.545382 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:27:31.545394 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:27:31.545407 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:27:31.545418 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:27:31.545431 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 00:27:31.545454 kernel: audit: type=1403 audit(1757377650.312:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 00:27:31.545470 systemd[1]: Successfully loaded SELinux policy in 113.543ms.
Sep 9 00:27:31.545492 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.314ms.
Sep 9 00:27:31.545506 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 00:27:31.545519 systemd[1]: Detected virtualization kvm.
Sep 9 00:27:31.545532 systemd[1]: Detected architecture x86-64.
Sep 9 00:27:31.545544 systemd[1]: Detected first boot.
Sep 9 00:27:31.545557 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:27:31.545569 zram_generator::config[1136]: No configuration found.
Sep 9 00:27:31.545585 kernel: Guest personality initialized and is inactive
Sep 9 00:27:31.545597 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 00:27:31.545609 kernel: Initialized host personality
Sep 9 00:27:31.545621 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 00:27:31.545632 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:27:31.545646 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 00:27:31.545664 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 00:27:31.545676 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 00:27:31.545688 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:27:31.545703 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 00:27:31.545715 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 00:27:31.545728 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 00:27:31.545740 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 00:27:31.545758 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 00:27:31.545771 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 00:27:31.545783 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 00:27:31.545796 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 00:27:31.545810 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:27:31.545823 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:27:31.545836 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 00:27:31.545887 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 00:27:31.545900 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 00:27:31.545913 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:27:31.545925 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 00:27:31.545944 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:27:31.545958 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:27:31.545970 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 00:27:31.545982 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 00:27:31.545995 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:27:31.546008 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 00:27:31.546020 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:27:31.546032 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:27:31.546045 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:27:31.546057 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:27:31.546072 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 00:27:31.546084 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 00:27:31.546097 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 00:27:31.546109 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:27:31.546122 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:27:31.546134 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:27:31.546147 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 00:27:31.546168 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 00:27:31.546181 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 00:27:31.546196 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 00:27:31.546208 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:27:31.546221 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 00:27:31.546362 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 00:27:31.546382 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 00:27:31.546398 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 00:27:31.546414 systemd[1]: Reached target machines.target - Containers.
Sep 9 00:27:31.546428 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 00:27:31.546452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:27:31.546472 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:27:31.546489 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 00:27:31.546505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:27:31.546519 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:27:31.546532 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:27:31.546546 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 00:27:31.546561 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:27:31.546577 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 00:27:31.546596 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 00:27:31.546611 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 00:27:31.546637 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 00:27:31.546649 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 00:27:31.546662 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:27:31.546675 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:27:31.546688 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:27:31.546700 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 00:27:31.546713 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 00:27:31.546728 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 00:27:31.546787 systemd-journald[1200]: Collecting audit messages is disabled.
Sep 9 00:27:31.546816 kernel: fuse: init (API version 7.41)
Sep 9 00:27:31.546831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:27:31.546843 kernel: loop: module loaded
Sep 9 00:27:31.546855 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 00:27:31.546868 systemd[1]: Stopped verity-setup.service.
Sep 9 00:27:31.546880 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:27:31.546894 systemd-journald[1200]: Journal started
Sep 9 00:27:31.546921 systemd-journald[1200]: Runtime Journal (/run/log/journal/52e341c07db44cd3a528540cb1e35b99) is 6M, max 48.4M, 42.4M free.
Sep 9 00:27:31.100746 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 00:27:31.112580 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 00:27:31.113079 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 00:27:31.553270 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:27:31.556223 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 00:27:31.557637 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 00:27:31.559032 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 00:27:31.560314 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 00:27:31.561713 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 00:27:31.563124 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 00:27:31.564747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:27:31.566727 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:27:31.567097 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 00:27:31.568978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:27:31.569287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:27:31.570264 kernel: ACPI: bus type drm_connector registered
Sep 9 00:27:31.571375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:27:31.571667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:27:31.573846 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:27:31.581692 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:27:31.606719 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:27:31.607052 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:27:31.608737 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:27:31.609041 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:27:31.611060 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:27:31.612941 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:27:31.615379 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:27:31.617355 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 00:27:31.634520 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:27:31.661077 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:27:31.664493 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:27:31.665937 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:27:31.665981 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:27:31.668123 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 00:27:31.701399 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:27:31.707385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 9 00:27:31.762400 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:27:31.765879 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:27:31.767660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:27:31.776962 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:27:31.778464 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:27:31.782505 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:27:31.787721 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:27:31.789369 systemd-journald[1200]: Time spent on flushing to /var/log/journal/52e341c07db44cd3a528540cb1e35b99 is 28.076ms for 1074 entries. Sep 9 00:27:31.789369 systemd-journald[1200]: System Journal (/var/log/journal/52e341c07db44cd3a528540cb1e35b99) is 8M, max 195.6M, 187.6M free. Sep 9 00:27:32.344492 systemd-journald[1200]: Received client request to flush runtime journal. Sep 9 00:27:32.344548 kernel: loop0: detected capacity change from 0 to 128016 Sep 9 00:27:32.344564 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:27:32.344577 kernel: loop1: detected capacity change from 0 to 111000 Sep 9 00:27:32.344590 kernel: loop2: detected capacity change from 0 to 224512 Sep 9 00:27:32.344603 kernel: loop3: detected capacity change from 0 to 128016 Sep 9 00:27:32.344616 kernel: loop4: detected capacity change from 0 to 111000 Sep 9 00:27:32.344628 kernel: loop5: detected capacity change from 0 to 224512 Sep 9 00:27:32.344641 zram_generator::config[1281]: No configuration found. Sep 9 00:27:31.792939 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 9 00:27:31.794743 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:27:31.796288 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:27:31.972190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:27:32.241075 (sd-merge)[1257]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:27:32.241761 (sd-merge)[1257]: Merged extensions into '/usr'. Sep 9 00:27:32.248267 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:27:32.248278 systemd[1]: Reloading... Sep 9 00:27:32.593055 systemd[1]: Reloading finished in 344 ms. Sep 9 00:27:32.618760 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:27:32.620925 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:27:32.622664 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:27:32.624566 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:27:32.633732 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:27:32.642914 systemd[1]: Starting ensure-sysext.service... Sep 9 00:27:32.661733 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 00:27:32.668463 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:27:32.683217 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:27:32.740955 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:27:32.740978 systemd[1]: Reloading... Sep 9 00:27:32.826348 zram_generator::config[1364]: No configuration found. Sep 9 00:27:33.060114 systemd[1]: Reloading finished in 318 ms. 
Sep 9 00:27:33.083002 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:27:33.122495 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:27:33.134267 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:27:33.137017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:27:33.140137 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:27:33.140977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:27:33.145514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:27:33.173025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:27:33.184589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:27:33.186058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:27:33.186225 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:27:33.186435 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:27:33.197056 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:27:33.197421 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:27:33.202541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:27:33.202941 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 9 00:27:33.209750 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:27:33.210010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:27:33.214587 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:27:33.220213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:27:33.237731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:27:33.238265 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:27:33.238545 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:27:33.244160 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:27:33.244639 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:27:33.247057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:27:33.247488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:27:33.250944 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:27:33.251263 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:27:33.254030 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 00:27:33.254101 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Sep 9 00:27:33.255466 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:27:33.256113 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:27:33.258419 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:27:33.258887 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Sep 9 00:27:33.258984 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Sep 9 00:27:33.259410 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:27:33.260169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:27:33.260422 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 9 00:27:33.260437 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 9 00:27:33.263030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:27:33.281744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:27:33.284674 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:27:33.301080 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:27:33.303497 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:27:33.303853 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:27:33.304357 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 9 00:27:33.306951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:27:33.346043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:27:33.346378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:27:33.348877 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:27:33.349358 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:27:33.351656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:27:33.352006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:27:33.370657 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:27:33.371007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:27:33.377504 systemd[1]: Finished ensure-sysext.service. Sep 9 00:27:33.383056 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:27:33.383156 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:27:33.387929 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:27:33.387951 systemd-tmpfiles[1404]: Skipping /boot Sep 9 00:27:33.399962 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:27:33.399983 systemd-tmpfiles[1404]: Skipping /boot Sep 9 00:27:33.414299 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:27:33.415484 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 00:27:33.432892 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:27:33.437325 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Sep 9 00:27:33.443489 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:27:33.458931 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:27:33.466034 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:27:33.477591 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:27:33.482692 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:27:33.487344 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:27:33.502919 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:27:33.507456 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:27:33.515550 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:27:33.555507 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:27:33.559144 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:27:33.565453 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:27:33.567155 augenrules[1461]: No rules Sep 9 00:27:33.566724 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:27:33.569212 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:27:33.569803 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:27:33.587614 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:27:33.596895 systemd-udevd[1452]: Using default interface naming scheme 'v255'. 
Sep 9 00:27:33.609730 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:27:33.630910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:27:33.649138 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:27:33.846596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:27:33.867739 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:27:33.902392 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 00:27:33.928990 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:27:33.931442 systemd-networkd[1479]: lo: Link UP Sep 9 00:27:33.931462 systemd-networkd[1479]: lo: Gained carrier Sep 9 00:27:33.933827 systemd-networkd[1479]: Enumeration completed Sep 9 00:27:33.934381 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:27:33.934397 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:27:33.941601 systemd-networkd[1479]: eth0: Link UP Sep 9 00:27:33.972228 systemd-networkd[1479]: eth0: Gained carrier Sep 9 00:27:33.972323 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:27:33.973971 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:27:33.998222 systemd-resolved[1434]: Positive Trust Anchors: Sep 9 00:27:33.998259 systemd-resolved[1434]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:27:33.998303 systemd-resolved[1434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:27:34.001443 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 00:27:34.006382 systemd-networkd[1479]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:27:34.007283 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Sep 9 00:27:34.573210 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:27:34.573271 systemd-timesyncd[1439]: Initial clock synchronization to Tue 2025-09-09 00:27:34.573086 UTC. Sep 9 00:27:34.575405 systemd-resolved[1434]: Defaulting to hostname 'linux'. Sep 9 00:27:34.585854 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:27:34.588358 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:27:34.597632 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:27:34.598874 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:27:34.601741 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 00:27:34.609027 systemd[1]: Reached target network.target - Network.
Sep 9 00:27:34.611612 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:27:34.641032 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:27:34.642549 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:27:34.643987 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:27:34.645502 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:27:34.647179 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 00:27:34.648778 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:27:34.650488 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:27:34.650544 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:27:34.651870 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:27:34.653551 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:27:34.655231 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:27:34.656990 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:27:34.659712 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:27:34.663690 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:27:34.669212 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 00:27:34.671203 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 00:27:34.673556 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 00:27:34.681046 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Sep 9 00:27:34.683048 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 00:27:34.707972 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 00:27:34.710067 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:27:34.715522 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:27:34.731581 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:27:34.733409 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:27:34.733470 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:27:34.738807 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:27:34.743900 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:27:34.752085 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:27:34.755225 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:27:34.762021 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:27:34.766930 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:27:34.770862 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 00:27:34.774392 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:27:34.777130 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:27:34.781565 jq[1542]: false Sep 9 00:27:34.805413 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 9 00:27:34.830408 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing passwd entry cache Sep 9 00:27:34.806087 oslogin_cache_refresh[1544]: Refreshing passwd entry cache Sep 9 00:27:34.834434 extend-filesystems[1543]: Found /dev/vda6 Sep 9 00:27:34.836497 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting users, quitting Sep 9 00:27:34.836497 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:27:34.836497 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing group entry cache Sep 9 00:27:34.835809 oslogin_cache_refresh[1544]: Failure getting users, quitting Sep 9 00:27:34.835838 oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:27:34.835916 oslogin_cache_refresh[1544]: Refreshing group entry cache Sep 9 00:27:34.837878 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:27:34.855536 extend-filesystems[1543]: Found /dev/vda9 Sep 9 00:27:34.860846 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting groups, quitting Sep 9 00:27:34.860846 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:27:34.860810 oslogin_cache_refresh[1544]: Failure getting groups, quitting Sep 9 00:27:34.860826 oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:27:34.864173 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:27:34.891998 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:27:34.892984 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 9 00:27:34.893856 extend-filesystems[1543]: Checking size of /dev/vda9 Sep 9 00:27:34.903931 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:27:34.910368 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:27:34.915001 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:27:34.916950 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:27:34.917304 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:27:34.917913 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 00:27:34.918242 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 00:27:34.920516 jq[1566]: true Sep 9 00:27:34.920955 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:27:34.921254 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:27:34.924479 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:27:34.924841 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:27:34.974117 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 00:27:34.974734 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 00:27:34.974981 extend-filesystems[1543]: Resized partition /dev/vda9 Sep 9 00:27:34.975105 update_engine[1562]: I20250909 00:27:34.954450 1562 main.cc:92] Flatcar Update Engine starting Sep 9 00:27:34.968560 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:27:34.977386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 00:27:34.977653 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 00:27:34.991821 jq[1569]: true Sep 9 00:27:35.038996 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:27:35.097017 extend-filesystems[1604]: resize2fs 1.47.2 (1-Jan-2025) Sep 9 00:27:35.135958 kernel: kvm_amd: TSC scaling supported Sep 9 00:27:35.136148 kernel: kvm_amd: Nested Virtualization enabled Sep 9 00:27:35.136176 kernel: kvm_amd: Nested Paging enabled Sep 9 00:27:35.136192 kernel: kvm_amd: LBR virtualization supported Sep 9 00:27:35.137127 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 00:27:35.137224 kernel: kvm_amd: Virtual GIF supported Sep 9 00:27:35.171786 tar[1568]: linux-amd64/LICENSE Sep 9 00:27:35.171786 tar[1568]: linux-amd64/helm Sep 9 00:27:35.174623 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:27:35.604154 systemd-logind[1555]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 00:27:35.604188 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:27:35.608177 systemd-logind[1555]: New seat seat0. Sep 9 00:27:35.626016 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:27:35.672898 dbus-daemon[1538]: [system] SELinux support is enabled Sep 9 00:27:35.673170 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:27:35.676526 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:27:35.676563 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 9 00:27:35.676731 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:27:35.676750 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:27:35.696259 update_engine[1562]: I20250909 00:27:35.696187 1562 update_check_scheduler.cc:74] Next update check in 4m28s Sep 9 00:27:35.696426 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:27:35.696617 kernel: EDAC MC: Ver: 3.0.0 Sep 9 00:27:35.705898 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:27:36.088478 sshd_keygen[1565]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:27:37.791351 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:27:37.791954 containerd[1585]: time="2025-09-09T00:27:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 00:27:36.124174 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:27:37.792767 extend-filesystems[1604]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:27:37.792767 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:27:37.792767 extend-filesystems[1604]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:27:36.175104 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Sep 9 00:27:37.846040 extend-filesystems[1543]: Resized filesystem in /dev/vda9
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.793791613Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.815545712Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.429µs"
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.815617476Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.815645840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.815896249Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.815913371Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.815953336Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.816049617Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.816065827Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.816468943Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.816486876Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:27:37.847316 containerd[1585]: time="2025-09-09T00:27:37.816499891Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:27:37.847654 tar[1568]: linux-amd64/README.md
Sep 9 00:27:36.178043 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:51970.service - OpenSSH per-connection server daemon (10.0.0.1:51970).
Sep 9 00:27:37.848180 containerd[1585]: time="2025-09-09T00:27:37.816510270Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 00:27:37.848180 containerd[1585]: time="2025-09-09T00:27:37.816749840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 00:27:37.848180 containerd[1585]: time="2025-09-09T00:27:37.817065231Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:27:37.848180 containerd[1585]: time="2025-09-09T00:27:37.817097511Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:27:37.848180 containerd[1585]: time="2025-09-09T00:27:37.817107440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 00:27:37.848180 containerd[1585]: time="2025-09-09T00:27:37.817172111Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 00:27:37.848180 containerd[1585]: time="2025-09-09T00:27:37.817876392Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 00:27:37.848180 containerd[1585]: time="2025-09-09T00:27:37.817987901Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:27:36.199469 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 00:27:36.199871 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 00:27:36.202151 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 00:27:36.319628 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 00:27:36.322328 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 00:27:36.404236 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 00:27:36.404536 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 00:27:36.575926 systemd-networkd[1479]: eth0: Gained IPv6LL
Sep 9 00:27:36.580744 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 00:27:36.581475 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 00:27:36.597779 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 00:27:36.666818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:27:36.669843 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 00:27:36.687602 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 00:27:36.703126 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 00:27:36.703427 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 00:27:36.703899 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 00:27:36.718042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:27:37.794856 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 00:27:37.795221 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 00:27:37.880716 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 00:27:37.915674 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 00:27:38.250348 bash[1603]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 00:27:38.252438 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 00:27:38.260525 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 00:27:38.261791 containerd[1585]: time="2025-09-09T00:27:38.261693158Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 00:27:38.261954 containerd[1585]: time="2025-09-09T00:27:38.261829834Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 00:27:38.261954 containerd[1585]: time="2025-09-09T00:27:38.261854962Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 00:27:38.261954 containerd[1585]: time="2025-09-09T00:27:38.261872675Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 00:27:38.261954 containerd[1585]: time="2025-09-09T00:27:38.261891360Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 00:27:38.261954 containerd[1585]: time="2025-09-09T00:27:38.261907149Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 00:27:38.261954 containerd[1585]: time="2025-09-09T00:27:38.261926977Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 00:27:38.261954 containerd[1585]: time="2025-09-09T00:27:38.261944560Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 00:27:38.262403 containerd[1585]: time="2025-09-09T00:27:38.261962052Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 00:27:38.262403 containerd[1585]: time="2025-09-09T00:27:38.261977992Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 00:27:38.262403 containerd[1585]: time="2025-09-09T00:27:38.261993311Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 00:27:38.262403 containerd[1585]: time="2025-09-09T00:27:38.262022626Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262763976Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262818829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262843134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262867230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262884903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262901834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262917534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262931690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262946628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262970393Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.262985992Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.263102881Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.263125854Z" level=info msg="Start snapshots syncer"
Sep 9 00:27:38.263731 containerd[1585]: time="2025-09-09T00:27:38.263175758Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 00:27:38.264168 containerd[1585]: time="2025-09-09T00:27:38.263576129Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 00:27:38.264168 containerd[1585]: time="2025-09-09T00:27:38.263704229Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 00:27:38.266410 containerd[1585]: time="2025-09-09T00:27:38.266108797Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 00:27:38.266478 containerd[1585]: time="2025-09-09T00:27:38.266452472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 00:27:38.266543 containerd[1585]: time="2025-09-09T00:27:38.266493579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 00:27:38.266543 containerd[1585]: time="2025-09-09T00:27:38.266511042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 00:27:38.266543 containerd[1585]: time="2025-09-09T00:27:38.266524958Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266558170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266574431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266621068Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266679988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266699846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266716036Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266759558Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266783533Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266797298Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266810804Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266821854Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266835310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266852161Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 00:27:38.266914 containerd[1585]: time="2025-09-09T00:27:38.266893829Z" level=info msg="runtime interface created"
Sep 9 00:27:38.267293 containerd[1585]: time="2025-09-09T00:27:38.266903818Z" level=info msg="created NRI interface"
Sep 9 00:27:38.267293 containerd[1585]: time="2025-09-09T00:27:38.266915740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 00:27:38.267293 containerd[1585]: time="2025-09-09T00:27:38.266939455Z" level=info msg="Connect containerd service"
Sep 9 00:27:38.267293 containerd[1585]: time="2025-09-09T00:27:38.267015518Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 00:27:38.268358 containerd[1585]: time="2025-09-09T00:27:38.268303893Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:27:38.488017 containerd[1585]: time="2025-09-09T00:27:38.487447538Z" level=info msg="Start subscribing containerd event"
Sep 9 00:27:38.488017 containerd[1585]: time="2025-09-09T00:27:38.487547094Z" level=info msg="Start recovering state"
Sep 9 00:27:38.488017 containerd[1585]: time="2025-09-09T00:27:38.487741589Z" level=info msg="Start event monitor"
Sep 9 00:27:38.488017 containerd[1585]: time="2025-09-09T00:27:38.487768670Z" level=info msg="Start cni network conf syncer for default"
Sep 9 00:27:38.488017 containerd[1585]: time="2025-09-09T00:27:38.487785070Z" level=info msg="Start streaming server"
Sep 9 00:27:38.488017 containerd[1585]: time="2025-09-09T00:27:38.487799658Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 00:27:38.488017 containerd[1585]: time="2025-09-09T00:27:38.487814075Z" level=info msg="runtime interface starting up..."
Sep 9 00:27:38.488017 containerd[1585]: time="2025-09-09T00:27:38.487821940Z" level=info msg="starting plugins..."
Sep 9 00:27:38.488017 containerd[1585]: time="2025-09-09T00:27:38.487852667Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 00:27:38.488017 containerd[1585]: time="2025-09-09T00:27:38.487799728Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 00:27:38.488499 containerd[1585]: time="2025-09-09T00:27:38.488167628Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 00:27:38.488499 containerd[1585]: time="2025-09-09T00:27:38.488331615Z" level=info msg="containerd successfully booted in 1.572820s"
Sep 9 00:27:38.488573 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 00:27:39.016250 sshd[1618]: Connection closed by authenticating user core 10.0.0.1 port 51970 [preauth]
Sep 9 00:27:39.020343 systemd[1]: sshd@0-10.0.0.55:22-10.0.0.1:51970.service: Deactivated successfully.
Sep 9 00:27:39.607452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:27:39.669668 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 00:27:39.671791 systemd[1]: Startup finished in 3.970s (kernel) + 10.603s (initrd) + 8.908s (userspace) = 23.482s.
Sep 9 00:27:39.678429 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:27:40.918408 kubelet[1690]: E0909 00:27:40.918310    1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:27:40.923351 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:27:40.923656 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:27:40.924112 systemd[1]: kubelet.service: Consumed 1.769s CPU time, 265M memory peak.
Sep 9 00:27:49.043618 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:41542.service - OpenSSH per-connection server daemon (10.0.0.1:41542).
Sep 9 00:27:49.124372 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 41542 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:27:49.127400 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:27:49.142924 systemd-logind[1555]: New session 1 of user core.
Sep 9 00:27:49.144666 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 00:27:49.146353 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 00:27:49.180816 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 00:27:49.184327 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 00:27:49.208838 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:27:49.212273 systemd-logind[1555]: New session c1 of user core.
Sep 9 00:27:49.424682 systemd[1708]: Queued start job for default target default.target.
Sep 9 00:27:49.437661 systemd[1708]: Created slice app.slice - User Application Slice.
Sep 9 00:27:49.437700 systemd[1708]: Reached target paths.target - Paths.
Sep 9 00:27:49.437757 systemd[1708]: Reached target timers.target - Timers.
Sep 9 00:27:49.439773 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 00:27:49.453715 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 00:27:49.453898 systemd[1708]: Reached target sockets.target - Sockets.
Sep 9 00:27:49.453958 systemd[1708]: Reached target basic.target - Basic System.
Sep 9 00:27:49.454018 systemd[1708]: Reached target default.target - Main User Target.
Sep 9 00:27:49.454065 systemd[1708]: Startup finished in 232ms.
Sep 9 00:27:49.454269 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 00:27:49.456090 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 00:27:49.523485 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:41552.service - OpenSSH per-connection server daemon (10.0.0.1:41552).
Sep 9 00:27:49.605875 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 41552 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:27:49.607938 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:27:49.614027 systemd-logind[1555]: New session 2 of user core.
Sep 9 00:27:49.631885 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 00:27:49.688109 sshd[1722]: Connection closed by 10.0.0.1 port 41552
Sep 9 00:27:49.688564 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
Sep 9 00:27:49.703440 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:41552.service: Deactivated successfully.
Sep 9 00:27:49.705807 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 00:27:49.706686 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit.
Sep 9 00:27:49.710144 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:41558.service - OpenSSH per-connection server daemon (10.0.0.1:41558).
Sep 9 00:27:49.710935 systemd-logind[1555]: Removed session 2.
Sep 9 00:27:49.786864 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 41558 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:27:49.789004 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:27:49.795361 systemd-logind[1555]: New session 3 of user core.
Sep 9 00:27:49.813037 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 00:27:49.863970 sshd[1731]: Connection closed by 10.0.0.1 port 41558
Sep 9 00:27:49.864326 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Sep 9 00:27:49.882820 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:41558.service: Deactivated successfully.
Sep 9 00:27:49.885369 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 00:27:49.886399 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit.
Sep 9 00:27:49.890466 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:41564.service - OpenSSH per-connection server daemon (10.0.0.1:41564).
Sep 9 00:27:49.891276 systemd-logind[1555]: Removed session 3.
Sep 9 00:27:49.951017 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 41564 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:27:49.953453 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:27:49.964392 systemd-logind[1555]: New session 4 of user core.
Sep 9 00:27:49.981029 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 00:27:50.040309 sshd[1740]: Connection closed by 10.0.0.1 port 41564
Sep 9 00:27:50.040629 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
Sep 9 00:27:50.056076 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:41564.service: Deactivated successfully.
Sep 9 00:27:50.058051 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 00:27:50.058845 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit.
Sep 9 00:27:50.061836 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:57720.service - OpenSSH per-connection server daemon (10.0.0.1:57720).
Sep 9 00:27:50.062640 systemd-logind[1555]: Removed session 4.
Sep 9 00:27:50.140503 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 57720 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:27:50.142557 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:27:50.151729 systemd-logind[1555]: New session 5 of user core.
Sep 9 00:27:50.161924 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 00:27:50.309861 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 00:27:50.310179 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:27:50.326367 sudo[1750]: pam_unix(sudo:session): session closed for user root
Sep 9 00:27:50.329328 sshd[1749]: Connection closed by 10.0.0.1 port 57720
Sep 9 00:27:50.329997 sshd-session[1746]: pam_unix(sshd:session): session closed for user core
Sep 9 00:27:50.339984 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:57720.service: Deactivated successfully.
Sep 9 00:27:50.342090 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 00:27:50.342963 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit.
Sep 9 00:27:50.346085 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:57734.service - OpenSSH per-connection server daemon (10.0.0.1:57734).
Sep 9 00:27:50.346891 systemd-logind[1555]: Removed session 5.
Sep 9 00:27:50.404000 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 57734 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:27:50.405614 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:27:50.410377 systemd-logind[1555]: New session 6 of user core.
Sep 9 00:27:50.419743 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 00:27:50.477724 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 00:27:50.478106 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:27:50.710998 sudo[1761]: pam_unix(sudo:session): session closed for user root
Sep 9 00:27:50.718977 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 00:27:50.719338 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:27:50.731322 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:27:50.792280 augenrules[1783]: No rules
Sep 9 00:27:50.793470 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:27:50.793850 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:27:50.795539 sudo[1760]: pam_unix(sudo:session): session closed for user root
Sep 9 00:27:50.797515 sshd[1759]: Connection closed by 10.0.0.1 port 57734
Sep 9 00:27:50.797901 sshd-session[1756]: pam_unix(sshd:session): session closed for user core
Sep 9 00:27:50.812922 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:57734.service: Deactivated successfully.
Sep 9 00:27:50.815491 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 00:27:50.816539 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit.
Sep 9 00:27:50.820045 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:57750.service - OpenSSH per-connection server daemon (10.0.0.1:57750).
Sep 9 00:27:50.820903 systemd-logind[1555]: Removed session 6.
Sep 9 00:27:50.885649 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 57750 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:27:50.887449 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:27:50.892404 systemd-logind[1555]: New session 7 of user core.
Sep 9 00:27:50.901783 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 00:27:50.957707 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 00:27:50.958140 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:27:50.959342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:27:50.960939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:27:51.276278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:27:51.288234 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:27:51.483613 kubelet[1818]: E0909 00:27:51.483529    1818 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:27:51.490819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:27:51.491056 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:27:51.492222 systemd[1]: kubelet.service: Consumed 352ms CPU time, 112.1M memory peak.
Sep 9 00:27:51.752892 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 00:27:51.772218 (dockerd)[1832]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 00:27:52.230928 dockerd[1832]: time="2025-09-09T00:27:52.230748764Z" level=info msg="Starting up"
Sep 9 00:27:52.232328 dockerd[1832]: time="2025-09-09T00:27:52.232274625Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 00:27:52.279072 dockerd[1832]: time="2025-09-09T00:27:52.278990206Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 00:27:54.445146 dockerd[1832]: time="2025-09-09T00:27:54.444993672Z" level=info msg="Loading containers: start."
Sep 9 00:27:54.859646 kernel: Initializing XFRM netlink socket
Sep 9 00:27:56.234816 systemd-networkd[1479]: docker0: Link UP
Sep 9 00:27:56.873196 dockerd[1832]: time="2025-09-09T00:27:56.873109238Z" level=info msg="Loading containers: done."
Sep 9 00:27:56.891065 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck81272992-merged.mount: Deactivated successfully.
Sep 9 00:27:57.046212 dockerd[1832]: time="2025-09-09T00:27:57.046098153Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:27:57.046428 dockerd[1832]: time="2025-09-09T00:27:57.046243225Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 00:27:57.046428 dockerd[1832]: time="2025-09-09T00:27:57.046414095Z" level=info msg="Initializing buildkit"
Sep 9 00:27:57.149883 dockerd[1832]: time="2025-09-09T00:27:57.149734030Z" level=info msg="Completed buildkit initialization"
Sep 9 00:27:57.159400 dockerd[1832]: time="2025-09-09T00:27:57.157814210Z" level=info msg="Daemon has completed initialization"
Sep 9 00:27:57.159400 dockerd[1832]: time="2025-09-09T00:27:57.158504134Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:27:57.158855 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 00:27:59.446947 containerd[1585]: time="2025-09-09T00:27:59.446849861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 00:28:00.926616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555646589.mount: Deactivated successfully.
Sep 9 00:28:01.634684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 00:28:01.637211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:28:02.155897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:28:02.175056 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:28:02.386775 kubelet[2074]: E0909 00:28:02.386673 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:28:02.391378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:28:02.391575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:28:02.392074 systemd[1]: kubelet.service: Consumed 416ms CPU time, 111.1M memory peak. Sep 9 00:28:04.118767 containerd[1585]: time="2025-09-09T00:28:04.118661808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:04.119862 containerd[1585]: time="2025-09-09T00:28:04.119804801Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 9 00:28:04.121983 containerd[1585]: time="2025-09-09T00:28:04.121784082Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:04.126181 containerd[1585]: time="2025-09-09T00:28:04.126106747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:04.127800 containerd[1585]: time="2025-09-09T00:28:04.127688442Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id 
\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 4.680747631s" Sep 9 00:28:04.127856 containerd[1585]: time="2025-09-09T00:28:04.127802997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 9 00:28:04.128730 containerd[1585]: time="2025-09-09T00:28:04.128665184Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 00:28:07.294090 containerd[1585]: time="2025-09-09T00:28:07.293960768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:07.295163 containerd[1585]: time="2025-09-09T00:28:07.295097135Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 9 00:28:07.296274 containerd[1585]: time="2025-09-09T00:28:07.296182793Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:07.299179 containerd[1585]: time="2025-09-09T00:28:07.299110671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:07.300310 containerd[1585]: time="2025-09-09T00:28:07.300276524Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 3.17156926s" Sep 9 00:28:07.300310 containerd[1585]: time="2025-09-09T00:28:07.300309807Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 9 00:28:07.300848 containerd[1585]: time="2025-09-09T00:28:07.300794215Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 00:28:10.742533 containerd[1585]: time="2025-09-09T00:28:10.742440014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:10.744535 containerd[1585]: time="2025-09-09T00:28:10.744493352Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 9 00:28:10.747284 containerd[1585]: time="2025-09-09T00:28:10.747087452Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:10.752164 containerd[1585]: time="2025-09-09T00:28:10.752020294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:10.753429 containerd[1585]: time="2025-09-09T00:28:10.753365942Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 3.452526611s" Sep 9 00:28:10.753429 containerd[1585]: 
time="2025-09-09T00:28:10.753410206Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 9 00:28:10.754430 containerd[1585]: time="2025-09-09T00:28:10.754102326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 00:28:12.524109 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 00:28:12.526295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:28:13.023914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount142025087.mount: Deactivated successfully. Sep 9 00:28:13.025754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:28:13.054215 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:28:13.244751 kubelet[2143]: E0909 00:28:13.244462 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:28:13.249491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:28:13.249778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:28:13.250340 systemd[1]: kubelet.service: Consumed 281ms CPU time, 110.7M memory peak. 
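Editor's note: the kubelet crash loop recorded above (restart counters 2 through 4, each exiting with status=1) has a single recurring cause: `/var/lib/kubelet/config.yaml` does not exist yet. That is expected on a node where `kubeadm init` or `kubeadm join` has not completed, since kubeadm writes that file during bootstrap. A minimal sketch of extracting the missing path from such a klog error line (the regex and variable names are illustrative assumptions, not part of any tool in this log):

```python
import re

# One klog error line copied from the journal above, truncated to the
# relevant part for readability.
line = ('E0909 00:28:13.244462 2143 run.go:72] "command failed" '
        'err="failed to load kubelet config file, path: '
        '/var/lib/kubelet/config.yaml, error: ..."')

# Pull out the config-file path the kubelet complained about.
m = re.search(r'path: (\S+?),', line)
missing_path = m.group(1) if m else None
print(missing_path)  # /var/lib/kubelet/config.yaml
```

Once kubeadm has written that file (visible later in this boot, when kubelet restarts with real flags), the failure mode changes from "config file missing" to API-server connection errors.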
Sep 9 00:28:15.241544 containerd[1585]: time="2025-09-09T00:28:15.241433720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:15.270878 containerd[1585]: time="2025-09-09T00:28:15.270750341Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 9 00:28:15.334225 containerd[1585]: time="2025-09-09T00:28:15.334131499Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:15.361795 containerd[1585]: time="2025-09-09T00:28:15.361702626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:15.362821 containerd[1585]: time="2025-09-09T00:28:15.362754974Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 4.608589026s" Sep 9 00:28:15.362821 containerd[1585]: time="2025-09-09T00:28:15.362818134Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 9 00:28:15.363510 containerd[1585]: time="2025-09-09T00:28:15.363476495Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:28:19.139001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466714100.mount: Deactivated successfully. 
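Editor's note: each pull above logs both a byte count ("bytes read") and a wall-clock duration ("in …s"), so an approximate transfer rate can be read off directly. As a worked example using the kube-proxy figures just logged (30897170 bytes in 4.608589026s; note this is bytes transferred during the pull, which differs slightly from the unpacked image size also reported):

```python
# Rough pull throughput for registry.k8s.io/kube-proxy:v1.32.8,
# using the figures from the containerd log entries above.
bytes_read = 30_897_170      # "bytes read" while pulling
seconds = 4.608589026        # "in 4.608589026s"

mib_per_s = bytes_read / seconds / (1024 * 1024)
print(f"{mib_per_s:.2f} MiB/s")
```

The same arithmetic applied to the other pulls in this section gives rates between roughly 2 and 13 MiB/s, consistent with a shared, variable network link rather than a local registry.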
Sep 9 00:28:21.200949 update_engine[1562]: I20250909 00:28:21.200744 1562 update_attempter.cc:509] Updating boot flags... Sep 9 00:28:23.384372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 00:28:23.386409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:28:23.634277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:28:23.656123 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:28:23.706868 kubelet[2233]: E0909 00:28:23.706780 2233 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:28:23.711019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:28:23.711225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:28:23.711644 systemd[1]: kubelet.service: Consumed 252ms CPU time, 110.6M memory peak. 
Sep 9 00:28:24.759122 containerd[1585]: time="2025-09-09T00:28:24.759020789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:24.762639 containerd[1585]: time="2025-09-09T00:28:24.762612792Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 00:28:24.764322 containerd[1585]: time="2025-09-09T00:28:24.764284740Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:24.767512 containerd[1585]: time="2025-09-09T00:28:24.767469544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:24.768727 containerd[1585]: time="2025-09-09T00:28:24.768675831Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 9.405166094s" Sep 9 00:28:24.768727 containerd[1585]: time="2025-09-09T00:28:24.768718072Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 00:28:24.769345 containerd[1585]: time="2025-09-09T00:28:24.769307405Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:28:25.778337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906702639.mount: Deactivated successfully. 
Sep 9 00:28:25.793102 containerd[1585]: time="2025-09-09T00:28:25.793010812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:28:25.796972 containerd[1585]: time="2025-09-09T00:28:25.796867261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:28:25.799078 containerd[1585]: time="2025-09-09T00:28:25.799023561Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:28:25.802841 containerd[1585]: time="2025-09-09T00:28:25.802654165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:28:25.803505 containerd[1585]: time="2025-09-09T00:28:25.803412225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.034068703s" Sep 9 00:28:25.803505 containerd[1585]: time="2025-09-09T00:28:25.803453153Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:28:25.804492 containerd[1585]: time="2025-09-09T00:28:25.803984847Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 00:28:28.173150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2865011784.mount: Deactivated 
successfully. Sep 9 00:28:30.030474 containerd[1585]: time="2025-09-09T00:28:30.030399107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:30.031462 containerd[1585]: time="2025-09-09T00:28:30.031389372Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 9 00:28:30.032885 containerd[1585]: time="2025-09-09T00:28:30.032833924Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:30.037161 containerd[1585]: time="2025-09-09T00:28:30.037105472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:30.038411 containerd[1585]: time="2025-09-09T00:28:30.038337273Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.234315917s" Sep 9 00:28:30.038411 containerd[1585]: time="2025-09-09T00:28:30.038369703Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 9 00:28:32.299651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:28:32.299825 systemd[1]: kubelet.service: Consumed 252ms CPU time, 110.6M memory peak. Sep 9 00:28:32.302153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:28:32.327129 systemd[1]: Reload requested from client PID 2331 ('systemctl') (unit session-7.scope)... 
Sep 9 00:28:32.327144 systemd[1]: Reloading... Sep 9 00:28:32.415654 zram_generator::config[2374]: No configuration found. Sep 9 00:28:32.990990 systemd[1]: Reloading finished in 663 ms. Sep 9 00:28:33.076999 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:28:33.077125 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:28:33.077563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:28:33.077649 systemd[1]: kubelet.service: Consumed 161ms CPU time, 98.2M memory peak. Sep 9 00:28:33.079849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:28:33.261449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:28:33.273111 (kubelet)[2421]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:28:33.331614 kubelet[2421]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:28:33.331614 kubelet[2421]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:28:33.331614 kubelet[2421]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
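Editor's note: from here on, the kubelet starts with a real configuration but logs repeated reflector and lease errors. These all share one symptom, "dial tcp 10.0.0.55:6443: connect: connection refused", which is the normal bootstrap race on a kubeadm control-plane node: the kubelet is up before the API server (a static pod it has not yet started) is listening. A small sketch showing that the many distinct error lines reduce to a single failing endpoint (the example strings are abbreviated copies of entries below, not exact log text):

```python
import re

# A few of the failing requests copied (abbreviated) from the journal below.
errors = [
    'Get "https://10.0.0.55:6443/api/v1/services?...": dial tcp 10.0.0.55:6443: connect: connection refused',
    'Get "https://10.0.0.55:6443/api/v1/nodes?...": dial tcp 10.0.0.55:6443: connect: connection refused',
    'Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?...": dial tcp 10.0.0.55:6443: connect: connection refused',
]

# Every failure dials the same host:port, so there is one root cause:
# nothing is listening on the API server address yet.
targets = {re.search(r'dial tcp ([\d.]+:\d+)', e).group(1) for e in errors}
print(targets)  # {'10.0.0.55:6443'}
```

These errors are transient and resolve on their own once the kube-apiserver static pod comes up; only errors against other addresses, or persisting after the API server starts, would indicate a real problem.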
Sep 9 00:28:33.332085 kubelet[2421]: I0909 00:28:33.331709 2421 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:28:33.951450 kubelet[2421]: I0909 00:28:33.951384 2421 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:28:33.951450 kubelet[2421]: I0909 00:28:33.951421 2421 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:28:33.951817 kubelet[2421]: I0909 00:28:33.951790 2421 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:28:34.114113 kubelet[2421]: I0909 00:28:34.114051 2421 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:28:34.114536 kubelet[2421]: E0909 00:28:34.114500 2421 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:34.123068 kubelet[2421]: I0909 00:28:34.123018 2421 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:28:34.129192 kubelet[2421]: I0909 00:28:34.129145 2421 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:28:34.129545 kubelet[2421]: I0909 00:28:34.129499 2421 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:28:34.129770 kubelet[2421]: I0909 00:28:34.129540 2421 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:28:34.129916 kubelet[2421]: I0909 00:28:34.129778 2421 topology_manager.go:138] "Creating topology manager with none policy" Sep 
9 00:28:34.129916 kubelet[2421]: I0909 00:28:34.129787 2421 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:28:34.129960 kubelet[2421]: I0909 00:28:34.129935 2421 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:28:34.133573 kubelet[2421]: I0909 00:28:34.133550 2421 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:28:34.133769 kubelet[2421]: I0909 00:28:34.133746 2421 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:28:34.133807 kubelet[2421]: I0909 00:28:34.133798 2421 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:28:34.133835 kubelet[2421]: I0909 00:28:34.133819 2421 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:28:34.138629 kubelet[2421]: W0909 00:28:34.138535 2421 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 9 00:28:34.138757 kubelet[2421]: E0909 00:28:34.138678 2421 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:34.139234 kubelet[2421]: W0909 00:28:34.139195 2421 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 9 00:28:34.139292 kubelet[2421]: E0909 00:28:34.139235 2421 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:34.140391 kubelet[2421]: I0909 00:28:34.140367 2421 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 00:28:34.141114 kubelet[2421]: I0909 00:28:34.141058 2421 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:28:34.142011 kubelet[2421]: W0909 00:28:34.141976 2421 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:28:34.151623 kubelet[2421]: I0909 00:28:34.151574 2421 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:28:34.151866 kubelet[2421]: I0909 00:28:34.151663 2421 server.go:1287] "Started kubelet" Sep 9 00:28:34.151866 kubelet[2421]: I0909 00:28:34.151742 2421 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:28:34.152954 kubelet[2421]: I0909 00:28:34.152929 2421 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:28:34.153864 kubelet[2421]: I0909 00:28:34.153821 2421 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:28:34.154084 kubelet[2421]: I0909 00:28:34.154017 2421 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:28:34.154340 kubelet[2421]: I0909 00:28:34.154311 2421 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:28:34.155622 kubelet[2421]: I0909 00:28:34.154739 2421 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:28:34.155622 kubelet[2421]: E0909 00:28:34.155020 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:34.155622 kubelet[2421]: I0909 
00:28:34.155182 2421 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:28:34.155622 kubelet[2421]: I0909 00:28:34.155478 2421 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:28:34.155622 kubelet[2421]: I0909 00:28:34.155560 2421 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:28:34.156718 kubelet[2421]: W0909 00:28:34.155929 2421 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 9 00:28:34.156718 kubelet[2421]: E0909 00:28:34.155998 2421 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:34.156718 kubelet[2421]: E0909 00:28:34.156077 2421 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Sep 9 00:28:34.159710 kubelet[2421]: I0909 00:28:34.159662 2421 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:28:34.161020 kubelet[2421]: I0909 00:28:34.160993 2421 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:28:34.161020 kubelet[2421]: I0909 00:28:34.161014 2421 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:28:34.161196 
kubelet[2421]: E0909 00:28:34.161119 2421 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:28:34.163096 kubelet[2421]: E0909 00:28:34.158896 2421 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186375b85431f3ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:28:34.151617452 +0000 UTC m=+0.873626609,LastTimestamp:2025-09-09 00:28:34.151617452 +0000 UTC m=+0.873626609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:28:34.179028 kubelet[2421]: I0909 00:28:34.178841 2421 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:28:34.179184 kubelet[2421]: I0909 00:28:34.179104 2421 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:28:34.179184 kubelet[2421]: I0909 00:28:34.179114 2421 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:28:34.179184 kubelet[2421]: I0909 00:28:34.179131 2421 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:28:34.181026 kubelet[2421]: I0909 00:28:34.180978 2421 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:28:34.181084 kubelet[2421]: I0909 00:28:34.181046 2421 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:28:34.183438 kubelet[2421]: I0909 00:28:34.181084 2421 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:28:34.183438 kubelet[2421]: I0909 00:28:34.181098 2421 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:28:34.183438 kubelet[2421]: E0909 00:28:34.181178 2421 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:28:34.183438 kubelet[2421]: W0909 00:28:34.181868 2421 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 9 00:28:34.183438 kubelet[2421]: E0909 00:28:34.181920 2421 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:34.255812 kubelet[2421]: E0909 00:28:34.255662 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:34.281887 kubelet[2421]: E0909 00:28:34.281834 2421 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:28:34.356849 kubelet[2421]: E0909 00:28:34.356790 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:34.357320 kubelet[2421]: E0909 00:28:34.357108 2421 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Sep 9 00:28:34.457662 kubelet[2421]: E0909 00:28:34.457552 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:34.482774 kubelet[2421]: E0909 00:28:34.482729 2421 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:28:34.558405 kubelet[2421]: E0909 00:28:34.558237 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:34.658412 kubelet[2421]: E0909 00:28:34.658331 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:34.758231 kubelet[2421]: E0909 00:28:34.758169 2421 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Sep 9 00:28:34.759214 kubelet[2421]: E0909 00:28:34.759162 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:34.860030 kubelet[2421]: E0909 00:28:34.859857 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:34.883150 kubelet[2421]: E0909 00:28:34.883071 2421 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:28:34.960901 kubelet[2421]: E0909 00:28:34.960837 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:35.032039 kubelet[2421]: W0909 00:28:35.031983 2421 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 9 00:28:35.032039 kubelet[2421]: E0909 00:28:35.032044 2421 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:35.061797 kubelet[2421]: E0909 00:28:35.061753 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:35.127863 kubelet[2421]: W0909 00:28:35.127734 2421 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 9 00:28:35.127863 kubelet[2421]: E0909 00:28:35.127778 2421 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:35.149265 kubelet[2421]: W0909 00:28:35.149206 2421 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 9 00:28:35.149265 kubelet[2421]: E0909 00:28:35.149248 2421 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:35.162126 kubelet[2421]: E0909 00:28:35.162049 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:35.162896 kubelet[2421]: I0909 00:28:35.162804 2421 policy_none.go:49] "None policy: Start" Sep 9 00:28:35.162896 kubelet[2421]: I0909 00:28:35.162850 2421 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:28:35.162896 kubelet[2421]: I0909 00:28:35.162869 2421 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:28:35.186136 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:28:35.203933 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:28:35.209146 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:28:35.223874 kubelet[2421]: I0909 00:28:35.223837 2421 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:28:35.224201 kubelet[2421]: I0909 00:28:35.224167 2421 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:28:35.224201 kubelet[2421]: I0909 00:28:35.224188 2421 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:28:35.224776 kubelet[2421]: I0909 00:28:35.224584 2421 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:28:35.226110 kubelet[2421]: E0909 00:28:35.226081 2421 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:28:35.226164 kubelet[2421]: E0909 00:28:35.226151 2421 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:28:35.328517 kubelet[2421]: I0909 00:28:35.328423 2421 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:28:35.328839 kubelet[2421]: E0909 00:28:35.328805 2421 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 9 00:28:35.382165 kubelet[2421]: E0909 00:28:35.381976 2421 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186375b85431f3ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:28:34.151617452 +0000 UTC m=+0.873626609,LastTimestamp:2025-09-09 00:28:34.151617452 +0000 UTC m=+0.873626609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:28:35.531206 kubelet[2421]: I0909 00:28:35.531149 2421 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:28:35.531535 kubelet[2421]: E0909 00:28:35.531486 2421 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 9 00:28:35.559559 kubelet[2421]: E0909 00:28:35.559480 2421 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s" Sep 9 00:28:35.693690 systemd[1]: Created slice kubepods-burstable-podddbdc62b65a7264ca5e4f0b21fb0e747.slice - libcontainer container kubepods-burstable-podddbdc62b65a7264ca5e4f0b21fb0e747.slice. Sep 9 00:28:35.698090 kubelet[2421]: W0909 00:28:35.697962 2421 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 9 00:28:35.698090 kubelet[2421]: E0909 00:28:35.698062 2421 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:35.710465 kubelet[2421]: E0909 00:28:35.710377 2421 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:28:35.714241 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 9 00:28:35.726405 kubelet[2421]: E0909 00:28:35.726360 2421 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:28:35.729357 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
Sep 9 00:28:35.731265 kubelet[2421]: E0909 00:28:35.731242 2421 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:28:35.763827 kubelet[2421]: I0909 00:28:35.763752 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:35.763827 kubelet[2421]: I0909 00:28:35.763809 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:35.763827 kubelet[2421]: I0909 00:28:35.763839 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddbdc62b65a7264ca5e4f0b21fb0e747-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddbdc62b65a7264ca5e4f0b21fb0e747\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:35.764012 kubelet[2421]: I0909 00:28:35.763858 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddbdc62b65a7264ca5e4f0b21fb0e747-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddbdc62b65a7264ca5e4f0b21fb0e747\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:35.764012 kubelet[2421]: I0909 00:28:35.763877 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ddbdc62b65a7264ca5e4f0b21fb0e747-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ddbdc62b65a7264ca5e4f0b21fb0e747\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:35.764012 kubelet[2421]: I0909 00:28:35.763921 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:35.764012 kubelet[2421]: I0909 00:28:35.763957 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:35.764114 kubelet[2421]: I0909 00:28:35.764008 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:35.764114 kubelet[2421]: I0909 00:28:35.764041 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:28:35.933845 kubelet[2421]: I0909 00:28:35.933806 2421 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:28:35.934206 kubelet[2421]: 
E0909 00:28:35.934163 2421 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 9 00:28:36.011900 kubelet[2421]: E0909 00:28:36.011708 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:36.012808 containerd[1585]: time="2025-09-09T00:28:36.012741783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ddbdc62b65a7264ca5e4f0b21fb0e747,Namespace:kube-system,Attempt:0,}" Sep 9 00:28:36.027217 kubelet[2421]: E0909 00:28:36.027159 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:36.027871 containerd[1585]: time="2025-09-09T00:28:36.027812233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 00:28:36.032190 kubelet[2421]: E0909 00:28:36.032131 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:36.032674 containerd[1585]: time="2025-09-09T00:28:36.032623214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 00:28:36.171252 kubelet[2421]: E0909 00:28:36.171196 2421 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": 
dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:36.645560 containerd[1585]: time="2025-09-09T00:28:36.645499123Z" level=info msg="connecting to shim 684e26e0503bc129595d727162309c6e359c39002051024c8f83bedaf4b1efd2" address="unix:///run/containerd/s/5d3797feee0915380837947e46ea79e7d8aecc75f6a64874c2b42ed36a80e23f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:36.688874 systemd[1]: Started cri-containerd-684e26e0503bc129595d727162309c6e359c39002051024c8f83bedaf4b1efd2.scope - libcontainer container 684e26e0503bc129595d727162309c6e359c39002051024c8f83bedaf4b1efd2. Sep 9 00:28:36.728551 containerd[1585]: time="2025-09-09T00:28:36.728482339Z" level=info msg="connecting to shim 660a09f8003e11fe232fcc52ae1b93ac117562a40628173a19648cd443a27ad3" address="unix:///run/containerd/s/1fb203bf5908b33b6e22fd114ca97556c17ff73ac879cd1aa073829aef587c4a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:36.735787 kubelet[2421]: I0909 00:28:36.735755 2421 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:28:36.736242 kubelet[2421]: E0909 00:28:36.736205 2421 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Sep 9 00:28:36.776060 systemd[1]: Started cri-containerd-660a09f8003e11fe232fcc52ae1b93ac117562a40628173a19648cd443a27ad3.scope - libcontainer container 660a09f8003e11fe232fcc52ae1b93ac117562a40628173a19648cd443a27ad3. 
Sep 9 00:28:36.783631 containerd[1585]: time="2025-09-09T00:28:36.783513662Z" level=info msg="connecting to shim 842c2afa3d0b3175c190cab67491e00c77c3902e39d5d5ce41dcf7d9713d22b3" address="unix:///run/containerd/s/d438b6a84bbb9b2a4efd1260d9b5cee082a390a33a86f6e949c4d858c35ab454" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:36.798872 containerd[1585]: time="2025-09-09T00:28:36.798809236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ddbdc62b65a7264ca5e4f0b21fb0e747,Namespace:kube-system,Attempt:0,} returns sandbox id \"684e26e0503bc129595d727162309c6e359c39002051024c8f83bedaf4b1efd2\"" Sep 9 00:28:36.802611 kubelet[2421]: E0909 00:28:36.802551 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:36.805208 containerd[1585]: time="2025-09-09T00:28:36.805166886Z" level=info msg="CreateContainer within sandbox \"684e26e0503bc129595d727162309c6e359c39002051024c8f83bedaf4b1efd2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:28:36.812241 kubelet[2421]: W0909 00:28:36.812166 2421 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Sep 9 00:28:36.812336 kubelet[2421]: E0909 00:28:36.812249 2421 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:28:36.834783 systemd[1]: Started cri-containerd-842c2afa3d0b3175c190cab67491e00c77c3902e39d5d5ce41dcf7d9713d22b3.scope - libcontainer container 
842c2afa3d0b3175c190cab67491e00c77c3902e39d5d5ce41dcf7d9713d22b3. Sep 9 00:28:36.889803 containerd[1585]: time="2025-09-09T00:28:36.889737178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"660a09f8003e11fe232fcc52ae1b93ac117562a40628173a19648cd443a27ad3\"" Sep 9 00:28:36.890811 kubelet[2421]: E0909 00:28:36.890781 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:36.892449 containerd[1585]: time="2025-09-09T00:28:36.892420054Z" level=info msg="CreateContainer within sandbox \"660a09f8003e11fe232fcc52ae1b93ac117562a40628173a19648cd443a27ad3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:28:36.906585 containerd[1585]: time="2025-09-09T00:28:36.906457822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"842c2afa3d0b3175c190cab67491e00c77c3902e39d5d5ce41dcf7d9713d22b3\"" Sep 9 00:28:36.907368 kubelet[2421]: E0909 00:28:36.907325 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:36.911470 containerd[1585]: time="2025-09-09T00:28:36.911424025Z" level=info msg="CreateContainer within sandbox \"842c2afa3d0b3175c190cab67491e00c77c3902e39d5d5ce41dcf7d9713d22b3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:28:36.912957 containerd[1585]: time="2025-09-09T00:28:36.912914298Z" level=info msg="Container 1178c49b1707cb2fd1a4a2c48bc0a884ff799dfe4cab2ff5a57cf1c25c52bd0d: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:28:36.922955 containerd[1585]: 
time="2025-09-09T00:28:36.922899823Z" level=info msg="Container d852e56b7c5d2641105b9859a6c000879c1abf57fc32a5f6a849af2f25683906: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:28:36.933372 containerd[1585]: time="2025-09-09T00:28:36.933318221Z" level=info msg="CreateContainer within sandbox \"684e26e0503bc129595d727162309c6e359c39002051024c8f83bedaf4b1efd2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1178c49b1707cb2fd1a4a2c48bc0a884ff799dfe4cab2ff5a57cf1c25c52bd0d\"" Sep 9 00:28:36.934141 containerd[1585]: time="2025-09-09T00:28:36.934083761Z" level=info msg="Container 031f5cb1931b58b20abb9affc4f47568c60a2d5665209aeefdb108dceb538faf: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:28:36.934208 containerd[1585]: time="2025-09-09T00:28:36.934095994Z" level=info msg="StartContainer for \"1178c49b1707cb2fd1a4a2c48bc0a884ff799dfe4cab2ff5a57cf1c25c52bd0d\"" Sep 9 00:28:36.935393 containerd[1585]: time="2025-09-09T00:28:36.935348871Z" level=info msg="connecting to shim 1178c49b1707cb2fd1a4a2c48bc0a884ff799dfe4cab2ff5a57cf1c25c52bd0d" address="unix:///run/containerd/s/5d3797feee0915380837947e46ea79e7d8aecc75f6a64874c2b42ed36a80e23f" protocol=ttrpc version=3 Sep 9 00:28:36.937817 containerd[1585]: time="2025-09-09T00:28:36.937770477Z" level=info msg="CreateContainer within sandbox \"660a09f8003e11fe232fcc52ae1b93ac117562a40628173a19648cd443a27ad3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d852e56b7c5d2641105b9859a6c000879c1abf57fc32a5f6a849af2f25683906\"" Sep 9 00:28:36.938368 containerd[1585]: time="2025-09-09T00:28:36.938336391Z" level=info msg="StartContainer for \"d852e56b7c5d2641105b9859a6c000879c1abf57fc32a5f6a849af2f25683906\"" Sep 9 00:28:36.939753 containerd[1585]: time="2025-09-09T00:28:36.939729001Z" level=info msg="connecting to shim d852e56b7c5d2641105b9859a6c000879c1abf57fc32a5f6a849af2f25683906" 
address="unix:///run/containerd/s/1fb203bf5908b33b6e22fd114ca97556c17ff73ac879cd1aa073829aef587c4a" protocol=ttrpc version=3 Sep 9 00:28:36.941942 containerd[1585]: time="2025-09-09T00:28:36.941898782Z" level=info msg="CreateContainer within sandbox \"842c2afa3d0b3175c190cab67491e00c77c3902e39d5d5ce41dcf7d9713d22b3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"031f5cb1931b58b20abb9affc4f47568c60a2d5665209aeefdb108dceb538faf\"" Sep 9 00:28:36.942669 containerd[1585]: time="2025-09-09T00:28:36.942303194Z" level=info msg="StartContainer for \"031f5cb1931b58b20abb9affc4f47568c60a2d5665209aeefdb108dceb538faf\"" Sep 9 00:28:36.943260 containerd[1585]: time="2025-09-09T00:28:36.943222644Z" level=info msg="connecting to shim 031f5cb1931b58b20abb9affc4f47568c60a2d5665209aeefdb108dceb538faf" address="unix:///run/containerd/s/d438b6a84bbb9b2a4efd1260d9b5cee082a390a33a86f6e949c4d858c35ab454" protocol=ttrpc version=3 Sep 9 00:28:36.959806 systemd[1]: Started cri-containerd-1178c49b1707cb2fd1a4a2c48bc0a884ff799dfe4cab2ff5a57cf1c25c52bd0d.scope - libcontainer container 1178c49b1707cb2fd1a4a2c48bc0a884ff799dfe4cab2ff5a57cf1c25c52bd0d. Sep 9 00:28:36.977903 systemd[1]: Started cri-containerd-d852e56b7c5d2641105b9859a6c000879c1abf57fc32a5f6a849af2f25683906.scope - libcontainer container d852e56b7c5d2641105b9859a6c000879c1abf57fc32a5f6a849af2f25683906. Sep 9 00:28:36.981494 systemd[1]: Started cri-containerd-031f5cb1931b58b20abb9affc4f47568c60a2d5665209aeefdb108dceb538faf.scope - libcontainer container 031f5cb1931b58b20abb9affc4f47568c60a2d5665209aeefdb108dceb538faf. 
Sep 9 00:28:37.067753 containerd[1585]: time="2025-09-09T00:28:37.067685083Z" level=info msg="StartContainer for \"1178c49b1707cb2fd1a4a2c48bc0a884ff799dfe4cab2ff5a57cf1c25c52bd0d\" returns successfully" Sep 9 00:28:37.069369 containerd[1585]: time="2025-09-09T00:28:37.069329577Z" level=info msg="StartContainer for \"031f5cb1931b58b20abb9affc4f47568c60a2d5665209aeefdb108dceb538faf\" returns successfully" Sep 9 00:28:37.073710 containerd[1585]: time="2025-09-09T00:28:37.073511161Z" level=info msg="StartContainer for \"d852e56b7c5d2641105b9859a6c000879c1abf57fc32a5f6a849af2f25683906\" returns successfully" Sep 9 00:28:37.194713 kubelet[2421]: E0909 00:28:37.194369 2421 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:28:37.194713 kubelet[2421]: E0909 00:28:37.194501 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:37.199335 kubelet[2421]: E0909 00:28:37.199278 2421 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:28:37.199730 kubelet[2421]: E0909 00:28:37.199495 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:37.208669 kubelet[2421]: E0909 00:28:37.208634 2421 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:28:37.208805 kubelet[2421]: E0909 00:28:37.208782 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:38.208232 kubelet[2421]: 
E0909 00:28:38.208183 2421 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:28:38.209227 kubelet[2421]: E0909 00:28:38.209192 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:38.209345 kubelet[2421]: E0909 00:28:38.208930 2421 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:28:38.209490 kubelet[2421]: E0909 00:28:38.209474 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:38.339942 kubelet[2421]: I0909 00:28:38.339882 2421 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:28:38.501532 kubelet[2421]: E0909 00:28:38.501035 2421 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:28:38.703990 kubelet[2421]: I0909 00:28:38.703902 2421 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:28:38.703990 kubelet[2421]: E0909 00:28:38.703967 2421 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:28:38.724479 kubelet[2421]: E0909 00:28:38.724419 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:38.824961 kubelet[2421]: E0909 00:28:38.824804 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:38.925034 kubelet[2421]: E0909 00:28:38.924969 2421 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Sep 9 00:28:39.025969 kubelet[2421]: E0909 00:28:39.025894 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:39.127100 kubelet[2421]: E0909 00:28:39.126933 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:39.228150 kubelet[2421]: E0909 00:28:39.228039 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:39.329805 kubelet[2421]: E0909 00:28:39.328870 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:39.429821 kubelet[2421]: E0909 00:28:39.429744 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:39.530484 kubelet[2421]: E0909 00:28:39.530414 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:39.631441 kubelet[2421]: E0909 00:28:39.631389 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:39.732639 kubelet[2421]: E0909 00:28:39.732408 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:39.833080 kubelet[2421]: E0909 00:28:39.833005 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:39.933654 kubelet[2421]: E0909 00:28:39.933605 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:40.034073 kubelet[2421]: E0909 00:28:40.033914 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:40.134839 kubelet[2421]: E0909 
00:28:40.134737 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:40.235196 kubelet[2421]: E0909 00:28:40.235125 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:40.355944 kubelet[2421]: I0909 00:28:40.355791 2421 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:28:40.368580 kubelet[2421]: I0909 00:28:40.368496 2421 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:40.375311 kubelet[2421]: I0909 00:28:40.375206 2421 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:40.909295 systemd[1]: Reload requested from client PID 2695 ('systemctl') (unit session-7.scope)... Sep 9 00:28:40.909317 systemd[1]: Reloading... Sep 9 00:28:40.981817 kubelet[2421]: I0909 00:28:40.981770 2421 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:41.023663 zram_generator::config[2738]: No configuration found. 
Sep 9 00:28:41.142815 kubelet[2421]: I0909 00:28:41.142737 2421 apiserver.go:52] "Watching apiserver" Sep 9 00:28:41.145077 kubelet[2421]: E0909 00:28:41.145031 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:41.145325 kubelet[2421]: E0909 00:28:41.145285 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:41.156609 kubelet[2421]: I0909 00:28:41.156546 2421 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:28:41.304181 kubelet[2421]: E0909 00:28:41.304053 2421 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:41.304660 kubelet[2421]: E0909 00:28:41.304325 2421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:41.785491 systemd[1]: Reloading finished in 875 ms. Sep 9 00:28:41.816344 kubelet[2421]: I0909 00:28:41.816277 2421 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:28:41.816549 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:28:41.826487 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:28:41.826854 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:28:41.826913 systemd[1]: kubelet.service: Consumed 1.252s CPU time, 132.4M memory peak. Sep 9 00:28:41.829215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:28:42.053508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 00:28:42.070974 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:28:42.147840 kubelet[2783]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:28:42.147840 kubelet[2783]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:28:42.147840 kubelet[2783]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:28:42.148287 kubelet[2783]: I0909 00:28:42.147899 2783 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:28:42.157277 kubelet[2783]: I0909 00:28:42.157205 2783 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:28:42.157277 kubelet[2783]: I0909 00:28:42.157250 2783 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:28:42.157653 kubelet[2783]: I0909 00:28:42.157622 2783 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:28:42.159312 kubelet[2783]: I0909 00:28:42.159277 2783 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 9 00:28:42.162140 kubelet[2783]: I0909 00:28:42.162056 2783 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:28:42.166278 kubelet[2783]: I0909 00:28:42.166243 2783 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:28:42.173043 kubelet[2783]: I0909 00:28:42.172989 2783 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:28:42.173350 kubelet[2783]: I0909 00:28:42.173306 2783 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:28:42.174007 kubelet[2783]: I0909 00:28:42.173434 2783 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPol
icyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:28:42.174007 kubelet[2783]: I0909 00:28:42.173956 2783 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:28:42.174007 kubelet[2783]: I0909 00:28:42.173969 2783 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:28:42.174216 kubelet[2783]: I0909 00:28:42.174038 2783 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:28:42.174538 kubelet[2783]: I0909 00:28:42.174516 2783 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:28:42.174617 kubelet[2783]: I0909 00:28:42.174549 2783 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:28:42.174617 kubelet[2783]: I0909 00:28:42.174576 2783 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:28:42.174617 kubelet[2783]: I0909 00:28:42.174614 2783 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:28:42.175865 kubelet[2783]: I0909 00:28:42.175835 2783 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 00:28:42.176407 kubelet[2783]: I0909 00:28:42.176362 2783 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:28:42.177576 kubelet[2783]: I0909 00:28:42.177536 2783 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:28:42.177576 kubelet[2783]: I0909 00:28:42.177579 2783 server.go:1287] "Started kubelet" Sep 9 00:28:42.182372 kubelet[2783]: I0909 00:28:42.182287 2783 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:28:42.183613 kubelet[2783]: 
I0909 00:28:42.182634 2783 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:28:42.183613 kubelet[2783]: I0909 00:28:42.182692 2783 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:28:42.183839 kubelet[2783]: I0909 00:28:42.183808 2783 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:28:42.187114 kubelet[2783]: I0909 00:28:42.187083 2783 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:28:42.187356 kubelet[2783]: I0909 00:28:42.187321 2783 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:28:42.188961 kubelet[2783]: E0909 00:28:42.188763 2783 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:28:42.188961 kubelet[2783]: I0909 00:28:42.188851 2783 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:28:42.189062 kubelet[2783]: I0909 00:28:42.189028 2783 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:28:42.189371 kubelet[2783]: I0909 00:28:42.189176 2783 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:28:42.190529 kubelet[2783]: I0909 00:28:42.190501 2783 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:28:42.190651 kubelet[2783]: I0909 00:28:42.190624 2783 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:28:42.192325 kubelet[2783]: I0909 00:28:42.192278 2783 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:28:42.193686 kubelet[2783]: E0909 00:28:42.192994 2783 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:28:42.204397 kubelet[2783]: I0909 00:28:42.204330 2783 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:28:42.206113 kubelet[2783]: I0909 00:28:42.206074 2783 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:28:42.206193 kubelet[2783]: I0909 00:28:42.206120 2783 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:28:42.206193 kubelet[2783]: I0909 00:28:42.206147 2783 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:28:42.206193 kubelet[2783]: I0909 00:28:42.206155 2783 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:28:42.206302 kubelet[2783]: E0909 00:28:42.206222 2783 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:28:42.243924 kubelet[2783]: I0909 00:28:42.243876 2783 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:28:42.243924 kubelet[2783]: I0909 00:28:42.243897 2783 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:28:42.243924 kubelet[2783]: I0909 00:28:42.243920 2783 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:28:42.244156 kubelet[2783]: I0909 00:28:42.244094 2783 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:28:42.244156 kubelet[2783]: I0909 00:28:42.244105 2783 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:28:42.244156 kubelet[2783]: I0909 00:28:42.244124 2783 policy_none.go:49] "None policy: Start" Sep 9 00:28:42.244156 kubelet[2783]: I0909 00:28:42.244134 2783 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:28:42.244156 kubelet[2783]: I0909 00:28:42.244144 2783 state_mem.go:35] "Initializing new in-memory state store" Sep 9 
00:28:42.244301 kubelet[2783]: I0909 00:28:42.244238 2783 state_mem.go:75] "Updated machine memory state" Sep 9 00:28:42.248883 kubelet[2783]: I0909 00:28:42.248854 2783 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:28:42.249062 kubelet[2783]: I0909 00:28:42.249041 2783 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:28:42.249277 kubelet[2783]: I0909 00:28:42.249061 2783 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:28:42.249277 kubelet[2783]: I0909 00:28:42.249243 2783 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:28:42.250075 kubelet[2783]: E0909 00:28:42.250015 2783 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:28:42.307396 kubelet[2783]: I0909 00:28:42.307227 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:42.307661 kubelet[2783]: I0909 00:28:42.307227 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:28:42.309067 kubelet[2783]: I0909 00:28:42.308995 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:42.354881 kubelet[2783]: I0909 00:28:42.354826 2783 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:28:42.389713 kubelet[2783]: I0909 00:28:42.389522 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:42.389713 
kubelet[2783]: I0909 00:28:42.389579 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:42.389713 kubelet[2783]: I0909 00:28:42.389633 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:42.389713 kubelet[2783]: I0909 00:28:42.389658 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:42.389713 kubelet[2783]: I0909 00:28:42.389682 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:42.389990 kubelet[2783]: I0909 00:28:42.389718 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 
9 00:28:42.389990 kubelet[2783]: I0909 00:28:42.389740 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddbdc62b65a7264ca5e4f0b21fb0e747-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ddbdc62b65a7264ca5e4f0b21fb0e747\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:42.389990 kubelet[2783]: I0909 00:28:42.389760 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddbdc62b65a7264ca5e4f0b21fb0e747-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddbdc62b65a7264ca5e4f0b21fb0e747\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:42.389990 kubelet[2783]: I0909 00:28:42.389778 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddbdc62b65a7264ca5e4f0b21fb0e747-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddbdc62b65a7264ca5e4f0b21fb0e747\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:42.851076 kubelet[2783]: E0909 00:28:42.850912 2783 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:42.851275 kubelet[2783]: E0909 00:28:42.851201 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:42.852044 kubelet[2783]: E0909 00:28:42.851467 2783 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:28:42.852737 kubelet[2783]: E0909 00:28:42.852687 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:42.854296 kubelet[2783]: E0909 00:28:42.852881 2783 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:28:42.854296 kubelet[2783]: E0909 00:28:42.854170 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:43.165337 kubelet[2783]: I0909 00:28:43.165274 2783 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:28:43.166696 kubelet[2783]: I0909 00:28:43.166312 2783 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:28:43.680421 kubelet[2783]: I0909 00:28:43.678550 2783 apiserver.go:52] "Watching apiserver" Sep 9 00:28:43.682076 kubelet[2783]: I0909 00:28:43.681982 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:28:43.682262 kubelet[2783]: I0909 00:28:43.682207 2783 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:43.693034 kubelet[2783]: E0909 00:28:43.693000 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:43.789583 kubelet[2783]: I0909 00:28:43.789513 2783 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:28:43.827452 kubelet[2783]: E0909 00:28:43.827363 2783 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:28:43.827719 kubelet[2783]: E0909 00:28:43.827363 2783 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" 
already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:28:43.827811 kubelet[2783]: E0909 00:28:43.827776 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:43.828744 kubelet[2783]: E0909 00:28:43.828405 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:43.856630 kubelet[2783]: I0909 00:28:43.856449 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.8564247849999997 podStartE2EDuration="3.856424785s" podCreationTimestamp="2025-09-09 00:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:28:43.855643667 +0000 UTC m=+1.760015255" watchObservedRunningTime="2025-09-09 00:28:43.856424785 +0000 UTC m=+1.760796373" Sep 9 00:28:43.882634 kubelet[2783]: I0909 00:28:43.882094 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.882063438 podStartE2EDuration="3.882063438s" podCreationTimestamp="2025-09-09 00:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:28:43.881975423 +0000 UTC m=+1.786347011" watchObservedRunningTime="2025-09-09 00:28:43.882063438 +0000 UTC m=+1.786435026" Sep 9 00:28:43.903387 sudo[2818]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 00:28:43.904380 sudo[2818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 00:28:44.337778 sudo[2818]: pam_unix(sudo:session): session closed for user root Sep 9 
00:28:44.683725 kubelet[2783]: E0909 00:28:44.683623 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:44.684649 kubelet[2783]: E0909 00:28:44.684554 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:44.798134 kubelet[2783]: I0909 00:28:44.797980 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.797958186 podStartE2EDuration="4.797958186s" podCreationTimestamp="2025-09-09 00:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:28:43.894964218 +0000 UTC m=+1.799335806" watchObservedRunningTime="2025-09-09 00:28:44.797958186 +0000 UTC m=+2.702329774" Sep 9 00:28:45.441644 kubelet[2783]: I0909 00:28:45.441571 2783 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:28:45.442150 containerd[1585]: time="2025-09-09T00:28:45.442107539Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:28:45.442563 kubelet[2783]: I0909 00:28:45.442354 2783 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:28:46.343002 systemd[1]: Created slice kubepods-besteffort-pod93b580d5_dcdb_42b4_9841_f85f5a76f609.slice - libcontainer container kubepods-besteffort-pod93b580d5_dcdb_42b4_9841_f85f5a76f609.slice. Sep 9 00:28:46.390719 systemd[1]: Created slice kubepods-burstable-pod902923d8_9055_4891_9346_e5e9a8cef271.slice - libcontainer container kubepods-burstable-pod902923d8_9055_4891_9346_e5e9a8cef271.slice. 
Sep 9 00:28:46.498931 kubelet[2783]: I0909 00:28:46.498772 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93b580d5-dcdb-42b4-9841-f85f5a76f609-kube-proxy\") pod \"kube-proxy-nnccj\" (UID: \"93b580d5-dcdb-42b4-9841-f85f5a76f609\") " pod="kube-system/kube-proxy-nnccj" Sep 9 00:28:46.498931 kubelet[2783]: I0909 00:28:46.498893 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-hostproc\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.498931 kubelet[2783]: I0909 00:28:46.498923 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnsdf\" (UniqueName: \"kubernetes.io/projected/93b580d5-dcdb-42b4-9841-f85f5a76f609-kube-api-access-cnsdf\") pod \"kube-proxy-nnccj\" (UID: \"93b580d5-dcdb-42b4-9841-f85f5a76f609\") " pod="kube-system/kube-proxy-nnccj" Sep 9 00:28:46.498931 kubelet[2783]: I0909 00:28:46.498948 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-etc-cni-netd\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.499702 kubelet[2783]: I0909 00:28:46.499016 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-host-proc-sys-kernel\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.499702 kubelet[2783]: I0909 00:28:46.499076 2783 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/902923d8-9055-4891-9346-e5e9a8cef271-hubble-tls\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.499702 kubelet[2783]: I0909 00:28:46.499100 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cni-path\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.499702 kubelet[2783]: I0909 00:28:46.499174 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/902923d8-9055-4891-9346-e5e9a8cef271-clustermesh-secrets\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.499702 kubelet[2783]: I0909 00:28:46.499195 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/902923d8-9055-4891-9346-e5e9a8cef271-cilium-config-path\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.499702 kubelet[2783]: I0909 00:28:46.499217 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-host-proc-sys-net\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.500338 kubelet[2783]: I0909 00:28:46.499238 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cilium-run\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.500338 kubelet[2783]: I0909 00:28:46.499264 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cilium-cgroup\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.500338 kubelet[2783]: I0909 00:28:46.499326 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-xtables-lock\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.500338 kubelet[2783]: I0909 00:28:46.499381 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93b580d5-dcdb-42b4-9841-f85f5a76f609-lib-modules\") pod \"kube-proxy-nnccj\" (UID: \"93b580d5-dcdb-42b4-9841-f85f5a76f609\") " pod="kube-system/kube-proxy-nnccj" Sep 9 00:28:46.500338 kubelet[2783]: I0909 00:28:46.499417 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsf26\" (UniqueName: \"kubernetes.io/projected/902923d8-9055-4891-9346-e5e9a8cef271-kube-api-access-hsf26\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.500338 kubelet[2783]: I0909 00:28:46.499440 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93b580d5-dcdb-42b4-9841-f85f5a76f609-xtables-lock\") pod \"kube-proxy-nnccj\" (UID: 
\"93b580d5-dcdb-42b4-9841-f85f5a76f609\") " pod="kube-system/kube-proxy-nnccj" Sep 9 00:28:46.500727 kubelet[2783]: I0909 00:28:46.499480 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-bpf-maps\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.500727 kubelet[2783]: I0909 00:28:46.499508 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-lib-modules\") pod \"cilium-kj9wx\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " pod="kube-system/cilium-kj9wx" Sep 9 00:28:46.514366 systemd[1]: Created slice kubepods-besteffort-pod333c568a_1b02_486b_82d0_8e6f2887b470.slice - libcontainer container kubepods-besteffort-pod333c568a_1b02_486b_82d0_8e6f2887b470.slice. 
Sep 9 00:28:46.602305 kubelet[2783]: I0909 00:28:46.600136 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p7dz\" (UniqueName: \"kubernetes.io/projected/333c568a-1b02-486b-82d0-8e6f2887b470-kube-api-access-4p7dz\") pod \"cilium-operator-6c4d7847fc-s62sr\" (UID: \"333c568a-1b02-486b-82d0-8e6f2887b470\") " pod="kube-system/cilium-operator-6c4d7847fc-s62sr" Sep 9 00:28:46.602305 kubelet[2783]: I0909 00:28:46.600288 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/333c568a-1b02-486b-82d0-8e6f2887b470-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-s62sr\" (UID: \"333c568a-1b02-486b-82d0-8e6f2887b470\") " pod="kube-system/cilium-operator-6c4d7847fc-s62sr" Sep 9 00:28:46.644210 sudo[1796]: pam_unix(sudo:session): session closed for user root Sep 9 00:28:46.650303 sshd[1795]: Connection closed by 10.0.0.1 port 57750 Sep 9 00:28:46.648740 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Sep 9 00:28:46.659756 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:57750.service: Deactivated successfully. Sep 9 00:28:46.667807 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:28:46.668299 systemd[1]: session-7.scope: Consumed 5.483s CPU time, 259.7M memory peak. Sep 9 00:28:46.669873 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:28:46.672313 systemd-logind[1555]: Removed session 7. 
Sep 9 00:28:46.690440 kubelet[2783]: E0909 00:28:46.690362 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:46.692345 containerd[1585]: time="2025-09-09T00:28:46.692297485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nnccj,Uid:93b580d5-dcdb-42b4-9841-f85f5a76f609,Namespace:kube-system,Attempt:0,}" Sep 9 00:28:46.695205 kubelet[2783]: E0909 00:28:46.695163 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:46.695816 containerd[1585]: time="2025-09-09T00:28:46.695769897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kj9wx,Uid:902923d8-9055-4891-9346-e5e9a8cef271,Namespace:kube-system,Attempt:0,}" Sep 9 00:28:46.818014 kubelet[2783]: E0909 00:28:46.817899 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:46.818838 containerd[1585]: time="2025-09-09T00:28:46.818651409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s62sr,Uid:333c568a-1b02-486b-82d0-8e6f2887b470,Namespace:kube-system,Attempt:0,}" Sep 9 00:28:48.226476 containerd[1585]: time="2025-09-09T00:28:48.226416026Z" level=info msg="connecting to shim 0e0e46baa5f58c57a5132da5eb4164605a4f888ec1764814ebb129a09ddb2d43" address="unix:///run/containerd/s/1e2f09b15aff7fb0b2b8ddf8f446179e4f01bab235a0d4c56c9ce3a0e633ac47" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:48.241939 containerd[1585]: time="2025-09-09T00:28:48.241869797Z" level=info msg="connecting to shim 91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6" 
address="unix:///run/containerd/s/7d11750f1398d7ed92696faf14fcfb61872f30c44ebcda56563dce6e1a4c2273" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:48.243274 containerd[1585]: time="2025-09-09T00:28:48.243197158Z" level=info msg="connecting to shim ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a" address="unix:///run/containerd/s/0c3adfbbc57fa6a6ddfb7e7c53910e5fa99c45628dea827da7eee6ce74e343f4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:48.262821 systemd[1]: Started cri-containerd-0e0e46baa5f58c57a5132da5eb4164605a4f888ec1764814ebb129a09ddb2d43.scope - libcontainer container 0e0e46baa5f58c57a5132da5eb4164605a4f888ec1764814ebb129a09ddb2d43. Sep 9 00:28:48.291830 systemd[1]: Started cri-containerd-ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a.scope - libcontainer container ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a. Sep 9 00:28:48.298052 systemd[1]: Started cri-containerd-91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6.scope - libcontainer container 91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6. 
Sep 9 00:28:48.349046 containerd[1585]: time="2025-09-09T00:28:48.348961829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nnccj,Uid:93b580d5-dcdb-42b4-9841-f85f5a76f609,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e0e46baa5f58c57a5132da5eb4164605a4f888ec1764814ebb129a09ddb2d43\""
Sep 9 00:28:48.350468 kubelet[2783]: E0909 00:28:48.350419 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:28:48.353622 containerd[1585]: time="2025-09-09T00:28:48.353504098Z" level=info msg="CreateContainer within sandbox \"0e0e46baa5f58c57a5132da5eb4164605a4f888ec1764814ebb129a09ddb2d43\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 00:28:48.357738 containerd[1585]: time="2025-09-09T00:28:48.357579762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kj9wx,Uid:902923d8-9055-4891-9346-e5e9a8cef271,Namespace:kube-system,Attempt:0,} returns sandbox id \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\""
Sep 9 00:28:48.358305 kubelet[2783]: E0909 00:28:48.358265 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:28:48.359417 containerd[1585]: time="2025-09-09T00:28:48.359109685Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 9 00:28:48.619491 containerd[1585]: time="2025-09-09T00:28:48.619282089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s62sr,Uid:333c568a-1b02-486b-82d0-8e6f2887b470,Namespace:kube-system,Attempt:0,} returns sandbox id \"ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a\""
Sep 9 00:28:48.620608 kubelet[2783]: E0909 00:28:48.620533 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:28:49.244388 containerd[1585]: time="2025-09-09T00:28:49.244294984Z" level=info msg="Container 304a1811b68d4660601091e85e285336c31ff9e569b8331544a0168d42449416: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:28:49.403729 containerd[1585]: time="2025-09-09T00:28:49.403663867Z" level=info msg="CreateContainer within sandbox \"0e0e46baa5f58c57a5132da5eb4164605a4f888ec1764814ebb129a09ddb2d43\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"304a1811b68d4660601091e85e285336c31ff9e569b8331544a0168d42449416\""
Sep 9 00:28:49.404473 containerd[1585]: time="2025-09-09T00:28:49.404417953Z" level=info msg="StartContainer for \"304a1811b68d4660601091e85e285336c31ff9e569b8331544a0168d42449416\""
Sep 9 00:28:49.406229 containerd[1585]: time="2025-09-09T00:28:49.406198305Z" level=info msg="connecting to shim 304a1811b68d4660601091e85e285336c31ff9e569b8331544a0168d42449416" address="unix:///run/containerd/s/1e2f09b15aff7fb0b2b8ddf8f446179e4f01bab235a0d4c56c9ce3a0e633ac47" protocol=ttrpc version=3
Sep 9 00:28:49.433979 systemd[1]: Started cri-containerd-304a1811b68d4660601091e85e285336c31ff9e569b8331544a0168d42449416.scope - libcontainer container 304a1811b68d4660601091e85e285336c31ff9e569b8331544a0168d42449416.
Sep 9 00:28:49.745516 containerd[1585]: time="2025-09-09T00:28:49.745447333Z" level=info msg="StartContainer for \"304a1811b68d4660601091e85e285336c31ff9e569b8331544a0168d42449416\" returns successfully"
Sep 9 00:28:49.782383 kubelet[2783]: E0909 00:28:49.782315 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:28:49.832068 kubelet[2783]: I0909 00:28:49.831942 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nnccj" podStartSLOduration=3.831918327 podStartE2EDuration="3.831918327s" podCreationTimestamp="2025-09-09 00:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:28:49.831811717 +0000 UTC m=+7.736183325" watchObservedRunningTime="2025-09-09 00:28:49.831918327 +0000 UTC m=+7.736289915"
Sep 9 00:28:49.836812 kubelet[2783]: E0909 00:28:49.835653 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:28:50.784625 kubelet[2783]: E0909 00:28:50.784560 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:28:50.785398 kubelet[2783]: E0909 00:28:50.784791 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:28:52.145633 kubelet[2783]: E0909 00:28:52.145511 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:28:52.788686 kubelet[2783]: E0909 00:28:52.788648 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:28:53.102774 kubelet[2783]: E0909 00:28:53.102547 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:03.417501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376543493.mount: Deactivated successfully.
Sep 9 00:29:06.490091 containerd[1585]: time="2025-09-09T00:29:06.489989239Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:29:06.491130 containerd[1585]: time="2025-09-09T00:29:06.491088522Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 9 00:29:06.493195 containerd[1585]: time="2025-09-09T00:29:06.493147574Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:29:06.495919 containerd[1585]: time="2025-09-09T00:29:06.495830637Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.136686537s"
Sep 9 00:29:06.495919 containerd[1585]: time="2025-09-09T00:29:06.495916248Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 9 00:29:06.500306 containerd[1585]: time="2025-09-09T00:29:06.500246330Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 9 00:29:06.501924 containerd[1585]: time="2025-09-09T00:29:06.501871941Z" level=info msg="CreateContainer within sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 00:29:06.516310 containerd[1585]: time="2025-09-09T00:29:06.516235564Z" level=info msg="Container 65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:29:06.520387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937163282.mount: Deactivated successfully.
Sep 9 00:29:06.530143 containerd[1585]: time="2025-09-09T00:29:06.530072982Z" level=info msg="CreateContainer within sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\""
Sep 9 00:29:06.530825 containerd[1585]: time="2025-09-09T00:29:06.530767254Z" level=info msg="StartContainer for \"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\""
Sep 9 00:29:06.532059 containerd[1585]: time="2025-09-09T00:29:06.531788450Z" level=info msg="connecting to shim 65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322" address="unix:///run/containerd/s/7d11750f1398d7ed92696faf14fcfb61872f30c44ebcda56563dce6e1a4c2273" protocol=ttrpc version=3
Sep 9 00:29:06.568893 systemd[1]: Started cri-containerd-65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322.scope - libcontainer container 65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322.
Sep 9 00:29:06.608716 containerd[1585]: time="2025-09-09T00:29:06.608649894Z" level=info msg="StartContainer for \"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\" returns successfully"
Sep 9 00:29:06.623213 systemd[1]: cri-containerd-65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322.scope: Deactivated successfully.
Sep 9 00:29:06.625004 containerd[1585]: time="2025-09-09T00:29:06.624962896Z" level=info msg="received exit event container_id:\"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\" id:\"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\" pid:3206 exited_at:{seconds:1757377746 nanos:624240180}"
Sep 9 00:29:06.625084 containerd[1585]: time="2025-09-09T00:29:06.625063935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\" id:\"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\" pid:3206 exited_at:{seconds:1757377746 nanos:624240180}"
Sep 9 00:29:06.652327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322-rootfs.mount: Deactivated successfully.
Sep 9 00:29:07.110499 kubelet[2783]: E0909 00:29:07.110423 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:08.115867 kubelet[2783]: E0909 00:29:08.114966 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:08.120622 containerd[1585]: time="2025-09-09T00:29:08.119635840Z" level=info msg="CreateContainer within sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 00:29:08.151017 containerd[1585]: time="2025-09-09T00:29:08.150950079Z" level=info msg="Container a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:29:08.155195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003343116.mount: Deactivated successfully.
Sep 9 00:29:08.161104 containerd[1585]: time="2025-09-09T00:29:08.161041959Z" level=info msg="CreateContainer within sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\""
Sep 9 00:29:08.161724 containerd[1585]: time="2025-09-09T00:29:08.161692539Z" level=info msg="StartContainer for \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\""
Sep 9 00:29:08.162584 containerd[1585]: time="2025-09-09T00:29:08.162558904Z" level=info msg="connecting to shim a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67" address="unix:///run/containerd/s/7d11750f1398d7ed92696faf14fcfb61872f30c44ebcda56563dce6e1a4c2273" protocol=ttrpc version=3
Sep 9 00:29:08.185990 systemd[1]: Started cri-containerd-a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67.scope - libcontainer container a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67.
Sep 9 00:29:08.223480 containerd[1585]: time="2025-09-09T00:29:08.223424215Z" level=info msg="StartContainer for \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\" returns successfully"
Sep 9 00:29:08.241270 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:29:08.241613 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:29:08.242019 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:29:08.244748 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:29:08.247478 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 00:29:08.248102 systemd[1]: cri-containerd-a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67.scope: Deactivated successfully.
Sep 9 00:29:08.249852 containerd[1585]: time="2025-09-09T00:29:08.249682390Z" level=info msg="received exit event container_id:\"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\" id:\"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\" pid:3249 exited_at:{seconds:1757377748 nanos:249276808}"
Sep 9 00:29:08.249941 containerd[1585]: time="2025-09-09T00:29:08.249860053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\" id:\"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\" pid:3249 exited_at:{seconds:1757377748 nanos:249276808}"
Sep 9 00:29:08.288027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:29:09.110296 containerd[1585]: time="2025-09-09T00:29:09.110231298Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:29:09.111452 containerd[1585]: time="2025-09-09T00:29:09.111406942Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 9 00:29:09.112708 containerd[1585]: time="2025-09-09T00:29:09.112652007Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:29:09.114122 containerd[1585]: time="2025-09-09T00:29:09.114050400Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.613763924s"
Sep 9 00:29:09.114122 containerd[1585]: time="2025-09-09T00:29:09.114095164Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 9 00:29:09.116659 containerd[1585]: time="2025-09-09T00:29:09.116584895Z" level=info msg="CreateContainer within sandbox \"ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 00:29:09.120244 kubelet[2783]: E0909 00:29:09.119855 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:09.122666 containerd[1585]: time="2025-09-09T00:29:09.122619715Z" level=info msg="CreateContainer within sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 00:29:09.128746 containerd[1585]: time="2025-09-09T00:29:09.128690472Z" level=info msg="Container 29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:29:09.140792 containerd[1585]: time="2025-09-09T00:29:09.140732259Z" level=info msg="Container d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:29:09.150861 containerd[1585]: time="2025-09-09T00:29:09.150796496Z" level=info msg="CreateContainer within sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\""
Sep 9 00:29:09.151607 containerd[1585]: time="2025-09-09T00:29:09.151563095Z" level=info msg="CreateContainer within sandbox \"ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\""
Sep 9 00:29:09.151999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67-rootfs.mount: Deactivated successfully.
Sep 9 00:29:09.152411 containerd[1585]: time="2025-09-09T00:29:09.152347716Z" level=info msg="StartContainer for \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\""
Sep 9 00:29:09.152653 containerd[1585]: time="2025-09-09T00:29:09.152629826Z" level=info msg="StartContainer for \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\""
Sep 9 00:29:09.154000 containerd[1585]: time="2025-09-09T00:29:09.153946606Z" level=info msg="connecting to shim d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7" address="unix:///run/containerd/s/7d11750f1398d7ed92696faf14fcfb61872f30c44ebcda56563dce6e1a4c2273" protocol=ttrpc version=3
Sep 9 00:29:09.155311 containerd[1585]: time="2025-09-09T00:29:09.155281049Z" level=info msg="connecting to shim 29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8" address="unix:///run/containerd/s/0c3adfbbc57fa6a6ddfb7e7c53910e5fa99c45628dea827da7eee6ce74e343f4" protocol=ttrpc version=3
Sep 9 00:29:09.183726 systemd[1]: Started cri-containerd-29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8.scope - libcontainer container 29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8.
Sep 9 00:29:09.187013 systemd[1]: Started cri-containerd-d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7.scope - libcontainer container d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7.
Sep 9 00:29:09.222771 containerd[1585]: time="2025-09-09T00:29:09.222659542Z" level=info msg="StartContainer for \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" returns successfully"
Sep 9 00:29:09.234584 systemd[1]: cri-containerd-d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7.scope: Deactivated successfully.
Sep 9 00:29:09.236571 containerd[1585]: time="2025-09-09T00:29:09.236532345Z" level=info msg="received exit event container_id:\"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\" id:\"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\" pid:3331 exited_at:{seconds:1757377749 nanos:235963127}"
Sep 9 00:29:09.236972 containerd[1585]: time="2025-09-09T00:29:09.236839631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\" id:\"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\" pid:3331 exited_at:{seconds:1757377749 nanos:235963127}"
Sep 9 00:29:09.238603 containerd[1585]: time="2025-09-09T00:29:09.238551301Z" level=info msg="StartContainer for \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\" returns successfully"
Sep 9 00:29:09.262504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7-rootfs.mount: Deactivated successfully.
Sep 9 00:29:10.122454 kubelet[2783]: E0909 00:29:10.122395 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:10.125701 kubelet[2783]: E0909 00:29:10.125678 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:10.127343 containerd[1585]: time="2025-09-09T00:29:10.127306794Z" level=info msg="CreateContainer within sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:29:10.400430 containerd[1585]: time="2025-09-09T00:29:10.400383601Z" level=info msg="Container f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:29:10.403114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464955863.mount: Deactivated successfully.
Sep 9 00:29:10.463159 containerd[1585]: time="2025-09-09T00:29:10.463009898Z" level=info msg="CreateContainer within sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\""
Sep 9 00:29:10.465345 containerd[1585]: time="2025-09-09T00:29:10.465180449Z" level=info msg="StartContainer for \"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\""
Sep 9 00:29:10.472108 containerd[1585]: time="2025-09-09T00:29:10.471812990Z" level=info msg="connecting to shim f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5" address="unix:///run/containerd/s/7d11750f1398d7ed92696faf14fcfb61872f30c44ebcda56563dce6e1a4c2273" protocol=ttrpc version=3
Sep 9 00:29:10.491817 kubelet[2783]: I0909 00:29:10.491729 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-s62sr" podStartSLOduration=3.998231902 podStartE2EDuration="24.491705072s" podCreationTimestamp="2025-09-09 00:28:46 +0000 UTC" firstStartedPulling="2025-09-09 00:28:48.621198146 +0000 UTC m=+6.525569734" lastFinishedPulling="2025-09-09 00:29:09.114671316 +0000 UTC m=+27.019042904" observedRunningTime="2025-09-09 00:29:10.406328913 +0000 UTC m=+28.310700491" watchObservedRunningTime="2025-09-09 00:29:10.491705072 +0000 UTC m=+28.396076660"
Sep 9 00:29:10.532887 systemd[1]: Started cri-containerd-f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5.scope - libcontainer container f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5.
Sep 9 00:29:10.592416 systemd[1]: cri-containerd-f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5.scope: Deactivated successfully.
Sep 9 00:29:10.593634 containerd[1585]: time="2025-09-09T00:29:10.593164949Z" level=info msg="received exit event container_id:\"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\" id:\"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\" pid:3386 exited_at:{seconds:1757377750 nanos:592748226}"
Sep 9 00:29:10.594460 containerd[1585]: time="2025-09-09T00:29:10.594436714Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\" id:\"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\" pid:3386 exited_at:{seconds:1757377750 nanos:592748226}"
Sep 9 00:29:10.604673 containerd[1585]: time="2025-09-09T00:29:10.604618853Z" level=info msg="StartContainer for \"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\" returns successfully"
Sep 9 00:29:10.619356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5-rootfs.mount: Deactivated successfully.
Sep 9 00:29:11.131614 kubelet[2783]: E0909 00:29:11.130721 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:11.131614 kubelet[2783]: E0909 00:29:11.130766 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:11.132759 containerd[1585]: time="2025-09-09T00:29:11.132714056Z" level=info msg="CreateContainer within sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:29:11.146791 containerd[1585]: time="2025-09-09T00:29:11.146731775Z" level=info msg="Container 08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:29:11.156654 containerd[1585]: time="2025-09-09T00:29:11.156580971Z" level=info msg="CreateContainer within sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\""
Sep 9 00:29:11.157128 containerd[1585]: time="2025-09-09T00:29:11.157091908Z" level=info msg="StartContainer for \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\""
Sep 9 00:29:11.158120 containerd[1585]: time="2025-09-09T00:29:11.158088252Z" level=info msg="connecting to shim 08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44" address="unix:///run/containerd/s/7d11750f1398d7ed92696faf14fcfb61872f30c44ebcda56563dce6e1a4c2273" protocol=ttrpc version=3
Sep 9 00:29:11.180184 systemd[1]: Started cri-containerd-08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44.scope - libcontainer container 08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44.
Sep 9 00:29:11.255462 containerd[1585]: time="2025-09-09T00:29:11.255402938Z" level=info msg="StartContainer for \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" returns successfully"
Sep 9 00:29:11.349251 containerd[1585]: time="2025-09-09T00:29:11.349200399Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" id:\"a971d2e251814009eed0e6b58ff3840e1e27278de8a064fa55b97036bda5dfdd\" pid:3452 exited_at:{seconds:1757377751 nanos:348756362}"
Sep 9 00:29:11.385060 kubelet[2783]: I0909 00:29:11.385029 2783 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 9 00:29:11.400681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584465696.mount: Deactivated successfully.
Sep 9 00:29:11.428807 systemd[1]: Created slice kubepods-burstable-pod9974f10f_fb7a_4310_ac7b_4a0695b02c50.slice - libcontainer container kubepods-burstable-pod9974f10f_fb7a_4310_ac7b_4a0695b02c50.slice.
Sep 9 00:29:11.436854 systemd[1]: Created slice kubepods-burstable-pod5b61abd1_ab07_4dd6_9e48_71f9198c9ca2.slice - libcontainer container kubepods-burstable-pod5b61abd1_ab07_4dd6_9e48_71f9198c9ca2.slice.
Sep 9 00:29:11.467418 kubelet[2783]: I0909 00:29:11.467355 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p69gf\" (UniqueName: \"kubernetes.io/projected/9974f10f-fb7a-4310-ac7b-4a0695b02c50-kube-api-access-p69gf\") pod \"coredns-668d6bf9bc-b247g\" (UID: \"9974f10f-fb7a-4310-ac7b-4a0695b02c50\") " pod="kube-system/coredns-668d6bf9bc-b247g"
Sep 9 00:29:11.467418 kubelet[2783]: I0909 00:29:11.467399 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b61abd1-ab07-4dd6-9e48-71f9198c9ca2-config-volume\") pod \"coredns-668d6bf9bc-5dn44\" (UID: \"5b61abd1-ab07-4dd6-9e48-71f9198c9ca2\") " pod="kube-system/coredns-668d6bf9bc-5dn44"
Sep 9 00:29:11.467418 kubelet[2783]: I0909 00:29:11.467423 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9974f10f-fb7a-4310-ac7b-4a0695b02c50-config-volume\") pod \"coredns-668d6bf9bc-b247g\" (UID: \"9974f10f-fb7a-4310-ac7b-4a0695b02c50\") " pod="kube-system/coredns-668d6bf9bc-b247g"
Sep 9 00:29:11.467682 kubelet[2783]: I0909 00:29:11.467443 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nd7v\" (UniqueName: \"kubernetes.io/projected/5b61abd1-ab07-4dd6-9e48-71f9198c9ca2-kube-api-access-9nd7v\") pod \"coredns-668d6bf9bc-5dn44\" (UID: \"5b61abd1-ab07-4dd6-9e48-71f9198c9ca2\") " pod="kube-system/coredns-668d6bf9bc-5dn44"
Sep 9 00:29:11.734550 kubelet[2783]: E0909 00:29:11.734421 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:11.735870 containerd[1585]: time="2025-09-09T00:29:11.735759616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b247g,Uid:9974f10f-fb7a-4310-ac7b-4a0695b02c50,Namespace:kube-system,Attempt:0,}"
Sep 9 00:29:11.740307 kubelet[2783]: E0909 00:29:11.740243 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:11.740961 containerd[1585]: time="2025-09-09T00:29:11.740913021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5dn44,Uid:5b61abd1-ab07-4dd6-9e48-71f9198c9ca2,Namespace:kube-system,Attempt:0,}"
Sep 9 00:29:12.157200 kubelet[2783]: E0909 00:29:12.157098 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:12.416164 kubelet[2783]: I0909 00:29:12.416008 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kj9wx" podStartSLOduration=8.274945339 podStartE2EDuration="26.415982799s" podCreationTimestamp="2025-09-09 00:28:46 +0000 UTC" firstStartedPulling="2025-09-09 00:28:48.358757023 +0000 UTC m=+6.263128621" lastFinishedPulling="2025-09-09 00:29:06.499794493 +0000 UTC m=+24.404166081" observedRunningTime="2025-09-09 00:29:12.415910329 +0000 UTC m=+30.320281928" watchObservedRunningTime="2025-09-09 00:29:12.415982799 +0000 UTC m=+30.320354387"
Sep 9 00:29:13.159614 kubelet[2783]: E0909 00:29:13.159555 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:13.771034 systemd-networkd[1479]: cilium_host: Link UP
Sep 9 00:29:13.771202 systemd-networkd[1479]: cilium_net: Link UP
Sep 9 00:29:13.771403 systemd-networkd[1479]: cilium_host: Gained carrier
Sep 9 00:29:13.771613 systemd-networkd[1479]: cilium_net: Gained carrier
Sep 9 00:29:13.880880 systemd-networkd[1479]: cilium_vxlan: Link UP
Sep 9 00:29:13.881070 systemd-networkd[1479]: cilium_vxlan: Gained carrier
Sep 9 00:29:14.047887 systemd-networkd[1479]: cilium_host: Gained IPv6LL
Sep 9 00:29:14.159817 systemd-networkd[1479]: cilium_net: Gained IPv6LL
Sep 9 00:29:14.162706 kubelet[2783]: E0909 00:29:14.162545 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:14.168654 kernel: NET: Registered PF_ALG protocol family
Sep 9 00:29:14.486360 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:32974.service - OpenSSH per-connection server daemon (10.0.0.1:32974).
Sep 9 00:29:14.569982 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 32974 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:29:14.572275 sshd-session[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:29:14.579724 systemd-logind[1555]: New session 8 of user core.
Sep 9 00:29:14.590904 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 00:29:14.744264 sshd[3769]: Connection closed by 10.0.0.1 port 32974
Sep 9 00:29:14.745815 sshd-session[3695]: pam_unix(sshd:session): session closed for user core
Sep 9 00:29:14.751494 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:32974.service: Deactivated successfully.
Sep 9 00:29:14.754025 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 00:29:14.755088 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit.
Sep 9 00:29:14.756913 systemd-logind[1555]: Removed session 8.
Sep 9 00:29:15.054396 systemd-networkd[1479]: lxc_health: Link UP Sep 9 00:29:15.054773 systemd-networkd[1479]: lxc_health: Gained carrier Sep 9 00:29:15.332728 kernel: eth0: renamed from tmp2a24f Sep 9 00:29:15.332581 systemd-networkd[1479]: lxca22322c620e9: Link UP Sep 9 00:29:15.336109 kernel: eth0: renamed from tmp77c77 Sep 9 00:29:15.335833 systemd-networkd[1479]: lxc9b2293703fbb: Link UP Sep 9 00:29:15.338039 systemd-networkd[1479]: lxc9b2293703fbb: Gained carrier Sep 9 00:29:15.338794 systemd-networkd[1479]: lxca22322c620e9: Gained carrier Sep 9 00:29:15.456930 systemd-networkd[1479]: cilium_vxlan: Gained IPv6LL Sep 9 00:29:16.672187 systemd-networkd[1479]: lxc_health: Gained IPv6LL Sep 9 00:29:16.672700 systemd-networkd[1479]: lxc9b2293703fbb: Gained IPv6LL Sep 9 00:29:16.698201 kubelet[2783]: E0909 00:29:16.698072 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:16.800019 systemd-networkd[1479]: lxca22322c620e9: Gained IPv6LL Sep 9 00:29:17.170720 kubelet[2783]: E0909 00:29:17.170643 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:18.173715 kubelet[2783]: E0909 00:29:18.173660 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:19.781387 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:32982.service - OpenSSH per-connection server daemon (10.0.0.1:32982). 
Sep 9 00:29:19.908654 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 32982 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:29:19.911152 sshd-session[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:29:19.919209 systemd-logind[1555]: New session 9 of user core. Sep 9 00:29:19.933899 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:29:20.110119 sshd[3951]: Connection closed by 10.0.0.1 port 32982 Sep 9 00:29:20.111736 sshd-session[3948]: pam_unix(sshd:session): session closed for user core Sep 9 00:29:20.119439 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:32982.service: Deactivated successfully. Sep 9 00:29:20.122561 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:29:20.124549 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:29:20.126798 systemd-logind[1555]: Removed session 9. Sep 9 00:29:21.476782 containerd[1585]: time="2025-09-09T00:29:21.476696156Z" level=info msg="connecting to shim 77c7742223872f3c4f0f7e77fe4b682acdaca1e9b0dcad68ade4e681959a82de" address="unix:///run/containerd/s/077163c8c4c7d497cdfd79075bb49d8e2f4421c59f6c35e59a22f761515e5af8" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:29:21.477554 containerd[1585]: time="2025-09-09T00:29:21.477525206Z" level=info msg="connecting to shim 2a24f7a4c056dec896d9f388457fc4e7dfe613ed3e8f4e41a470f33c861c3ba0" address="unix:///run/containerd/s/2a9ffb217be8d15f3850fa976e1113f138832cb1950735cc37f8b07b62ddedf6" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:29:21.527776 systemd[1]: Started cri-containerd-2a24f7a4c056dec896d9f388457fc4e7dfe613ed3e8f4e41a470f33c861c3ba0.scope - libcontainer container 2a24f7a4c056dec896d9f388457fc4e7dfe613ed3e8f4e41a470f33c861c3ba0. 
Sep 9 00:29:21.529799 systemd[1]: Started cri-containerd-77c7742223872f3c4f0f7e77fe4b682acdaca1e9b0dcad68ade4e681959a82de.scope - libcontainer container 77c7742223872f3c4f0f7e77fe4b682acdaca1e9b0dcad68ade4e681959a82de. Sep 9 00:29:21.546126 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:29:21.548950 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:29:21.581484 containerd[1585]: time="2025-09-09T00:29:21.581409957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b247g,Uid:9974f10f-fb7a-4310-ac7b-4a0695b02c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a24f7a4c056dec896d9f388457fc4e7dfe613ed3e8f4e41a470f33c861c3ba0\"" Sep 9 00:29:21.582211 kubelet[2783]: E0909 00:29:21.582177 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:21.587013 containerd[1585]: time="2025-09-09T00:29:21.586971305Z" level=info msg="CreateContainer within sandbox \"2a24f7a4c056dec896d9f388457fc4e7dfe613ed3e8f4e41a470f33c861c3ba0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:29:21.604296 containerd[1585]: time="2025-09-09T00:29:21.604242090Z" level=info msg="Container 9bbbe469cb91cf29c81a2555f9624818a1cb91342ce2db41a7a2f54985eb10a0: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:29:21.617928 containerd[1585]: time="2025-09-09T00:29:21.617846874Z" level=info msg="CreateContainer within sandbox \"2a24f7a4c056dec896d9f388457fc4e7dfe613ed3e8f4e41a470f33c861c3ba0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bbbe469cb91cf29c81a2555f9624818a1cb91342ce2db41a7a2f54985eb10a0\"" Sep 9 00:29:21.618142 containerd[1585]: time="2025-09-09T00:29:21.618091363Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-5dn44,Uid:5b61abd1-ab07-4dd6-9e48-71f9198c9ca2,Namespace:kube-system,Attempt:0,} returns sandbox id \"77c7742223872f3c4f0f7e77fe4b682acdaca1e9b0dcad68ade4e681959a82de\"" Sep 9 00:29:21.618584 containerd[1585]: time="2025-09-09T00:29:21.618557738Z" level=info msg="StartContainer for \"9bbbe469cb91cf29c81a2555f9624818a1cb91342ce2db41a7a2f54985eb10a0\"" Sep 9 00:29:21.619169 kubelet[2783]: E0909 00:29:21.619061 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:21.620857 containerd[1585]: time="2025-09-09T00:29:21.620804607Z" level=info msg="connecting to shim 9bbbe469cb91cf29c81a2555f9624818a1cb91342ce2db41a7a2f54985eb10a0" address="unix:///run/containerd/s/2a9ffb217be8d15f3850fa976e1113f138832cb1950735cc37f8b07b62ddedf6" protocol=ttrpc version=3 Sep 9 00:29:21.625982 containerd[1585]: time="2025-09-09T00:29:21.625916802Z" level=info msg="CreateContainer within sandbox \"77c7742223872f3c4f0f7e77fe4b682acdaca1e9b0dcad68ade4e681959a82de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:29:21.644699 containerd[1585]: time="2025-09-09T00:29:21.644639122Z" level=info msg="Container 15a4821185ecf371ad0255b5bdbdaec357467ca7220cd5cef79608534f5084ab: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:29:21.655933 containerd[1585]: time="2025-09-09T00:29:21.655837611Z" level=info msg="CreateContainer within sandbox \"77c7742223872f3c4f0f7e77fe4b682acdaca1e9b0dcad68ade4e681959a82de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15a4821185ecf371ad0255b5bdbdaec357467ca7220cd5cef79608534f5084ab\"" Sep 9 00:29:21.655933 systemd[1]: Started cri-containerd-9bbbe469cb91cf29c81a2555f9624818a1cb91342ce2db41a7a2f54985eb10a0.scope - libcontainer container 9bbbe469cb91cf29c81a2555f9624818a1cb91342ce2db41a7a2f54985eb10a0. 
Sep 9 00:29:21.657614 containerd[1585]: time="2025-09-09T00:29:21.657397914Z" level=info msg="StartContainer for \"15a4821185ecf371ad0255b5bdbdaec357467ca7220cd5cef79608534f5084ab\"" Sep 9 00:29:21.660282 containerd[1585]: time="2025-09-09T00:29:21.660248421Z" level=info msg="connecting to shim 15a4821185ecf371ad0255b5bdbdaec357467ca7220cd5cef79608534f5084ab" address="unix:///run/containerd/s/077163c8c4c7d497cdfd79075bb49d8e2f4421c59f6c35e59a22f761515e5af8" protocol=ttrpc version=3 Sep 9 00:29:21.688314 systemd[1]: Started cri-containerd-15a4821185ecf371ad0255b5bdbdaec357467ca7220cd5cef79608534f5084ab.scope - libcontainer container 15a4821185ecf371ad0255b5bdbdaec357467ca7220cd5cef79608534f5084ab. Sep 9 00:29:21.715801 containerd[1585]: time="2025-09-09T00:29:21.715756246Z" level=info msg="StartContainer for \"9bbbe469cb91cf29c81a2555f9624818a1cb91342ce2db41a7a2f54985eb10a0\" returns successfully" Sep 9 00:29:21.739125 containerd[1585]: time="2025-09-09T00:29:21.738834369Z" level=info msg="StartContainer for \"15a4821185ecf371ad0255b5bdbdaec357467ca7220cd5cef79608534f5084ab\" returns successfully" Sep 9 00:29:22.201206 kubelet[2783]: E0909 00:29:22.200824 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:22.205450 kubelet[2783]: E0909 00:29:22.205392 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:22.223062 kubelet[2783]: I0909 00:29:22.222968 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-b247g" podStartSLOduration=36.222940215 podStartE2EDuration="36.222940215s" podCreationTimestamp="2025-09-09 00:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-09 00:29:22.222882996 +0000 UTC m=+40.127254594" watchObservedRunningTime="2025-09-09 00:29:22.222940215 +0000 UTC m=+40.127311803" Sep 9 00:29:22.260163 kubelet[2783]: I0909 00:29:22.260072 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5dn44" podStartSLOduration=36.260050431 podStartE2EDuration="36.260050431s" podCreationTimestamp="2025-09-09 00:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:29:22.257422363 +0000 UTC m=+40.161793952" watchObservedRunningTime="2025-09-09 00:29:22.260050431 +0000 UTC m=+40.164422019" Sep 9 00:29:23.207245 kubelet[2783]: E0909 00:29:23.207178 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:23.207245 kubelet[2783]: E0909 00:29:23.207178 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:24.224534 kubelet[2783]: E0909 00:29:24.224431 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:24.224534 kubelet[2783]: E0909 00:29:24.224583 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:25.130375 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:48414.service - OpenSSH per-connection server daemon (10.0.0.1:48414). 
Sep 9 00:29:25.215899 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 48414 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:29:25.218335 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:29:25.228417 systemd-logind[1555]: New session 10 of user core. Sep 9 00:29:25.235920 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:29:25.508499 sshd[4142]: Connection closed by 10.0.0.1 port 48414 Sep 9 00:29:25.508922 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Sep 9 00:29:25.513698 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:48414.service: Deactivated successfully. Sep 9 00:29:25.515843 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:29:25.516773 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:29:25.518122 systemd-logind[1555]: Removed session 10. Sep 9 00:29:30.530615 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:35772.service - OpenSSH per-connection server daemon (10.0.0.1:35772). Sep 9 00:29:30.591662 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 35772 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:29:30.593527 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:29:30.598765 systemd-logind[1555]: New session 11 of user core. Sep 9 00:29:30.604722 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:29:30.729160 sshd[4160]: Connection closed by 10.0.0.1 port 35772 Sep 9 00:29:30.729570 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Sep 9 00:29:30.735176 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:35772.service: Deactivated successfully. Sep 9 00:29:30.737488 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:29:30.738547 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. 
Sep 9 00:29:30.742892 systemd-logind[1555]: Removed session 11. Sep 9 00:29:35.748334 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:35782.service - OpenSSH per-connection server daemon (10.0.0.1:35782). Sep 9 00:29:35.816139 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 35782 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:29:35.818423 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:29:35.829412 systemd-logind[1555]: New session 12 of user core. Sep 9 00:29:35.843020 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:29:36.038017 sshd[4178]: Connection closed by 10.0.0.1 port 35782 Sep 9 00:29:36.038361 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Sep 9 00:29:36.045661 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:35782.service: Deactivated successfully. Sep 9 00:29:36.049098 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:29:36.050260 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:29:36.052119 systemd-logind[1555]: Removed session 12. Sep 9 00:29:41.057934 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:35638.service - OpenSSH per-connection server daemon (10.0.0.1:35638). Sep 9 00:29:41.126653 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 35638 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:29:41.128863 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:29:41.139020 systemd-logind[1555]: New session 13 of user core. Sep 9 00:29:41.153938 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:29:41.384486 sshd[4195]: Connection closed by 10.0.0.1 port 35638 Sep 9 00:29:41.385179 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Sep 9 00:29:41.405527 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:35638.service: Deactivated successfully. 
Sep 9 00:29:41.408464 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:29:41.409721 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:29:41.413306 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:35642.service - OpenSSH per-connection server daemon (10.0.0.1:35642). Sep 9 00:29:41.414202 systemd-logind[1555]: Removed session 13. Sep 9 00:29:41.481061 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 35642 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:29:41.482934 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:29:41.488962 systemd-logind[1555]: New session 14 of user core. Sep 9 00:29:41.494789 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:29:42.291630 sshd[4213]: Connection closed by 10.0.0.1 port 35642 Sep 9 00:29:42.292186 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Sep 9 00:29:42.308735 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:35642.service: Deactivated successfully. Sep 9 00:29:42.310654 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:29:42.311625 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:29:42.314385 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:35648.service - OpenSSH per-connection server daemon (10.0.0.1:35648). Sep 9 00:29:42.315078 systemd-logind[1555]: Removed session 14. Sep 9 00:29:42.383440 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 35648 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:29:42.385261 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:29:42.390425 systemd-logind[1555]: New session 15 of user core. Sep 9 00:29:42.404721 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 9 00:29:42.880610 sshd[4230]: Connection closed by 10.0.0.1 port 35648 Sep 9 00:29:42.880980 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Sep 9 00:29:42.886607 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:35648.service: Deactivated successfully. Sep 9 00:29:42.889680 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:29:42.890725 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:29:42.892697 systemd-logind[1555]: Removed session 15. Sep 9 00:29:47.900225 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:35664.service - OpenSSH per-connection server daemon (10.0.0.1:35664). Sep 9 00:29:47.963618 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 35664 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:29:47.965687 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:29:47.971513 systemd-logind[1555]: New session 16 of user core. Sep 9 00:29:47.983880 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:29:48.116256 sshd[4247]: Connection closed by 10.0.0.1 port 35664 Sep 9 00:29:48.116727 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Sep 9 00:29:48.123258 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:35664.service: Deactivated successfully. Sep 9 00:29:48.125801 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:29:48.127061 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:29:48.128746 systemd-logind[1555]: Removed session 16. Sep 9 00:29:53.133022 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:33146.service - OpenSSH per-connection server daemon (10.0.0.1:33146). 
Sep 9 00:29:53.193485 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 33146 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:29:53.196138 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:29:53.201362 systemd-logind[1555]: New session 17 of user core. Sep 9 00:29:53.212865 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:29:53.563769 sshd[4267]: Connection closed by 10.0.0.1 port 33146 Sep 9 00:29:53.564154 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Sep 9 00:29:53.568534 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:33146.service: Deactivated successfully. Sep 9 00:29:53.570781 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:29:53.571686 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:29:53.573151 systemd-logind[1555]: Removed session 17. Sep 9 00:29:54.207501 kubelet[2783]: E0909 00:29:54.207455 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:29:58.581192 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:33158.service - OpenSSH per-connection server daemon (10.0.0.1:33158). Sep 9 00:29:58.657730 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 33158 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:29:58.660073 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:29:58.665996 systemd-logind[1555]: New session 18 of user core. Sep 9 00:29:58.679963 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 9 00:29:58.805546 sshd[4283]: Connection closed by 10.0.0.1 port 33158 Sep 9 00:29:58.805985 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Sep 9 00:29:58.811766 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:33158.service: Deactivated successfully. Sep 9 00:29:58.813882 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:29:58.814938 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:29:58.816519 systemd-logind[1555]: Removed session 18. Sep 9 00:29:59.207438 kubelet[2783]: E0909 00:29:59.207372 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:30:00.208014 kubelet[2783]: E0909 00:30:00.207960 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:30:03.829375 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:38704.service - OpenSSH per-connection server daemon (10.0.0.1:38704). Sep 9 00:30:03.885187 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 38704 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:03.886765 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:03.892885 systemd-logind[1555]: New session 19 of user core. Sep 9 00:30:03.898731 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:30:04.018781 sshd[4300]: Connection closed by 10.0.0.1 port 38704 Sep 9 00:30:04.019199 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:04.023923 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:38704.service: Deactivated successfully. Sep 9 00:30:04.026192 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:30:04.029020 systemd-logind[1555]: Session 19 logged out. 
Waiting for processes to exit. Sep 9 00:30:04.030178 systemd-logind[1555]: Removed session 19. Sep 9 00:30:09.045006 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:38706.service - OpenSSH per-connection server daemon (10.0.0.1:38706). Sep 9 00:30:09.107916 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 38706 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:09.110781 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:09.117991 systemd-logind[1555]: New session 20 of user core. Sep 9 00:30:09.132911 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:30:09.253826 sshd[4316]: Connection closed by 10.0.0.1 port 38706 Sep 9 00:30:09.254233 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:09.259334 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:38706.service: Deactivated successfully. Sep 9 00:30:09.261733 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:30:09.263054 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:30:09.264405 systemd-logind[1555]: Removed session 20. Sep 9 00:30:14.270830 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:60546.service - OpenSSH per-connection server daemon (10.0.0.1:60546). Sep 9 00:30:14.339026 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 60546 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:14.340878 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:14.346543 systemd-logind[1555]: New session 21 of user core. Sep 9 00:30:14.358942 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 9 00:30:14.480134 sshd[4332]: Connection closed by 10.0.0.1 port 60546 Sep 9 00:30:14.480559 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:14.485858 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:60546.service: Deactivated successfully. Sep 9 00:30:14.488379 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:30:14.489297 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:30:14.491197 systemd-logind[1555]: Removed session 21. Sep 9 00:30:19.496873 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:60552.service - OpenSSH per-connection server daemon (10.0.0.1:60552). Sep 9 00:30:19.556489 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 60552 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:19.557914 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:19.562741 systemd-logind[1555]: New session 22 of user core. Sep 9 00:30:19.577840 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:30:19.691966 sshd[4348]: Connection closed by 10.0.0.1 port 60552 Sep 9 00:30:19.692344 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:19.697821 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:60552.service: Deactivated successfully. Sep 9 00:30:19.700307 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:30:19.701475 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:30:19.702929 systemd-logind[1555]: Removed session 22. 
Sep 9 00:30:22.207855 kubelet[2783]: E0909 00:30:22.207801 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:30:22.208421 kubelet[2783]: E0909 00:30:22.207801 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:30:23.207305 kubelet[2783]: E0909 00:30:23.207219 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:30:24.708833 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:44118.service - OpenSSH per-connection server daemon (10.0.0.1:44118). Sep 9 00:30:24.765700 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 44118 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:24.767640 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:24.772414 systemd-logind[1555]: New session 23 of user core. Sep 9 00:30:24.785757 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 00:30:24.907407 sshd[4367]: Connection closed by 10.0.0.1 port 44118 Sep 9 00:30:24.907794 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:24.912072 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:44118.service: Deactivated successfully. Sep 9 00:30:24.914332 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:30:24.915320 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:30:24.916975 systemd-logind[1555]: Removed session 23. Sep 9 00:30:29.920839 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:55002.service - OpenSSH per-connection server daemon (10.0.0.1:55002). 
Sep 9 00:30:29.987778 sshd[4381]: Accepted publickey for core from 10.0.0.1 port 55002 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:29.989883 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:29.995441 systemd-logind[1555]: New session 24 of user core. Sep 9 00:30:30.004883 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 00:30:30.126482 sshd[4384]: Connection closed by 10.0.0.1 port 55002 Sep 9 00:30:30.126906 sshd-session[4381]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:30.132757 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:55002.service: Deactivated successfully. Sep 9 00:30:30.135744 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:30:30.136712 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:30:30.138640 systemd-logind[1555]: Removed session 24. Sep 9 00:30:35.140633 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:55014.service - OpenSSH per-connection server daemon (10.0.0.1:55014). Sep 9 00:30:35.205530 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 55014 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:35.207226 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:35.211823 systemd-logind[1555]: New session 25 of user core. Sep 9 00:30:35.225705 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 00:30:35.338455 sshd[4400]: Connection closed by 10.0.0.1 port 55014 Sep 9 00:30:35.338860 sshd-session[4397]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:35.343088 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:55014.service: Deactivated successfully. Sep 9 00:30:35.345390 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:30:35.346365 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit. 
Sep 9 00:30:35.347807 systemd-logind[1555]: Removed session 25. Sep 9 00:30:38.207287 kubelet[2783]: E0909 00:30:38.207225 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:30:40.354663 systemd[1]: Started sshd@26-10.0.0.55:22-10.0.0.1:44742.service - OpenSSH per-connection server daemon (10.0.0.1:44742). Sep 9 00:30:40.429687 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 44742 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:40.431568 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:40.436954 systemd-logind[1555]: New session 26 of user core. Sep 9 00:30:40.446799 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 00:30:40.558263 sshd[4417]: Connection closed by 10.0.0.1 port 44742 Sep 9 00:30:40.558727 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:40.572785 systemd[1]: sshd@26-10.0.0.55:22-10.0.0.1:44742.service: Deactivated successfully. Sep 9 00:30:40.575003 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 00:30:40.575828 systemd-logind[1555]: Session 26 logged out. Waiting for processes to exit. Sep 9 00:30:40.579044 systemd[1]: Started sshd@27-10.0.0.55:22-10.0.0.1:44748.service - OpenSSH per-connection server daemon (10.0.0.1:44748). Sep 9 00:30:40.579761 systemd-logind[1555]: Removed session 26. Sep 9 00:30:40.647950 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 44748 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:40.649912 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:40.655291 systemd-logind[1555]: New session 27 of user core. Sep 9 00:30:40.665846 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 9 00:30:41.156038 sshd[4433]: Connection closed by 10.0.0.1 port 44748 Sep 9 00:30:41.156755 sshd-session[4430]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:41.167679 systemd[1]: sshd@27-10.0.0.55:22-10.0.0.1:44748.service: Deactivated successfully. Sep 9 00:30:41.169913 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 00:30:41.171035 systemd-logind[1555]: Session 27 logged out. Waiting for processes to exit. Sep 9 00:30:41.174557 systemd[1]: Started sshd@28-10.0.0.55:22-10.0.0.1:44754.service - OpenSSH per-connection server daemon (10.0.0.1:44754). Sep 9 00:30:41.175611 systemd-logind[1555]: Removed session 27. Sep 9 00:30:41.241005 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 44754 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:41.243218 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:41.248621 systemd-logind[1555]: New session 28 of user core. Sep 9 00:30:41.263845 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 00:30:41.816835 sshd[4447]: Connection closed by 10.0.0.1 port 44754 Sep 9 00:30:41.818575 sshd-session[4444]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:41.827979 systemd[1]: sshd@28-10.0.0.55:22-10.0.0.1:44754.service: Deactivated successfully. Sep 9 00:30:41.830427 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 00:30:41.831407 systemd-logind[1555]: Session 28 logged out. Waiting for processes to exit. Sep 9 00:30:41.835026 systemd[1]: Started sshd@29-10.0.0.55:22-10.0.0.1:44764.service - OpenSSH per-connection server daemon (10.0.0.1:44764). Sep 9 00:30:41.835727 systemd-logind[1555]: Removed session 28. 
Sep 9 00:30:41.891523 sshd[4470]: Accepted publickey for core from 10.0.0.1 port 44764 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:41.893483 sshd-session[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:41.898826 systemd-logind[1555]: New session 29 of user core. Sep 9 00:30:41.907786 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 9 00:30:42.168008 sshd[4473]: Connection closed by 10.0.0.1 port 44764 Sep 9 00:30:42.168465 sshd-session[4470]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:42.181656 systemd[1]: sshd@29-10.0.0.55:22-10.0.0.1:44764.service: Deactivated successfully. Sep 9 00:30:42.184007 systemd[1]: session-29.scope: Deactivated successfully. Sep 9 00:30:42.185055 systemd-logind[1555]: Session 29 logged out. Waiting for processes to exit. Sep 9 00:30:42.188108 systemd[1]: Started sshd@30-10.0.0.55:22-10.0.0.1:44778.service - OpenSSH per-connection server daemon (10.0.0.1:44778). Sep 9 00:30:42.189001 systemd-logind[1555]: Removed session 29. Sep 9 00:30:42.251015 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 44778 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:42.252924 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:42.258161 systemd-logind[1555]: New session 30 of user core. Sep 9 00:30:42.268877 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 9 00:30:42.390775 sshd[4489]: Connection closed by 10.0.0.1 port 44778 Sep 9 00:30:42.391226 sshd-session[4484]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:42.397223 systemd[1]: sshd@30-10.0.0.55:22-10.0.0.1:44778.service: Deactivated successfully. Sep 9 00:30:42.399807 systemd[1]: session-30.scope: Deactivated successfully. Sep 9 00:30:42.401025 systemd-logind[1555]: Session 30 logged out. Waiting for processes to exit. 
Sep 9 00:30:42.402882 systemd-logind[1555]: Removed session 30. Sep 9 00:30:44.207567 kubelet[2783]: E0909 00:30:44.207498 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:30:47.406884 systemd[1]: Started sshd@31-10.0.0.55:22-10.0.0.1:44792.service - OpenSSH per-connection server daemon (10.0.0.1:44792). Sep 9 00:30:47.462021 sshd[4502]: Accepted publickey for core from 10.0.0.1 port 44792 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:47.463717 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:47.469680 systemd-logind[1555]: New session 31 of user core. Sep 9 00:30:47.483921 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 9 00:30:47.606415 sshd[4505]: Connection closed by 10.0.0.1 port 44792 Sep 9 00:30:47.606797 sshd-session[4502]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:47.611187 systemd[1]: sshd@31-10.0.0.55:22-10.0.0.1:44792.service: Deactivated successfully. Sep 9 00:30:47.613498 systemd[1]: session-31.scope: Deactivated successfully. Sep 9 00:30:47.614764 systemd-logind[1555]: Session 31 logged out. Waiting for processes to exit. Sep 9 00:30:47.616242 systemd-logind[1555]: Removed session 31. Sep 9 00:30:52.620932 systemd[1]: Started sshd@32-10.0.0.55:22-10.0.0.1:45120.service - OpenSSH per-connection server daemon (10.0.0.1:45120). Sep 9 00:30:52.691942 sshd[4522]: Accepted publickey for core from 10.0.0.1 port 45120 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:52.694210 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:52.699250 systemd-logind[1555]: New session 32 of user core. Sep 9 00:30:52.706824 systemd[1]: Started session-32.scope - Session 32 of User core. 
Sep 9 00:30:52.842157 sshd[4525]: Connection closed by 10.0.0.1 port 45120 Sep 9 00:30:52.842731 sshd-session[4522]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:52.848989 systemd[1]: sshd@32-10.0.0.55:22-10.0.0.1:45120.service: Deactivated successfully. Sep 9 00:30:52.851310 systemd[1]: session-32.scope: Deactivated successfully. Sep 9 00:30:52.852187 systemd-logind[1555]: Session 32 logged out. Waiting for processes to exit. Sep 9 00:30:52.853833 systemd-logind[1555]: Removed session 32. Sep 9 00:30:57.860676 systemd[1]: Started sshd@33-10.0.0.55:22-10.0.0.1:45136.service - OpenSSH per-connection server daemon (10.0.0.1:45136). Sep 9 00:30:57.929851 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 45136 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:30:57.931825 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:57.936860 systemd-logind[1555]: New session 33 of user core. Sep 9 00:30:57.946890 systemd[1]: Started session-33.scope - Session 33 of User core. Sep 9 00:30:58.076542 sshd[4542]: Connection closed by 10.0.0.1 port 45136 Sep 9 00:30:58.076956 sshd-session[4539]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:58.081138 systemd[1]: sshd@33-10.0.0.55:22-10.0.0.1:45136.service: Deactivated successfully. Sep 9 00:30:58.083882 systemd[1]: session-33.scope: Deactivated successfully. Sep 9 00:30:58.084905 systemd-logind[1555]: Session 33 logged out. Waiting for processes to exit. Sep 9 00:30:58.086423 systemd-logind[1555]: Removed session 33. Sep 9 00:30:58.207779 kubelet[2783]: E0909 00:30:58.207730 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:03.090127 systemd[1]: Started sshd@34-10.0.0.55:22-10.0.0.1:50402.service - OpenSSH per-connection server daemon (10.0.0.1:50402). 
Sep 9 00:31:03.155513 sshd[4555]: Accepted publickey for core from 10.0.0.1 port 50402 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:31:03.157578 sshd-session[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:03.162745 systemd-logind[1555]: New session 34 of user core. Sep 9 00:31:03.174871 systemd[1]: Started session-34.scope - Session 34 of User core. Sep 9 00:31:03.290295 sshd[4558]: Connection closed by 10.0.0.1 port 50402 Sep 9 00:31:03.290902 sshd-session[4555]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:03.303964 systemd[1]: sshd@34-10.0.0.55:22-10.0.0.1:50402.service: Deactivated successfully. Sep 9 00:31:03.306215 systemd[1]: session-34.scope: Deactivated successfully. Sep 9 00:31:03.307309 systemd-logind[1555]: Session 34 logged out. Waiting for processes to exit. Sep 9 00:31:03.311005 systemd[1]: Started sshd@35-10.0.0.55:22-10.0.0.1:50416.service - OpenSSH per-connection server daemon (10.0.0.1:50416). Sep 9 00:31:03.311961 systemd-logind[1555]: Removed session 34. Sep 9 00:31:03.377571 sshd[4572]: Accepted publickey for core from 10.0.0.1 port 50416 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:31:03.379186 sshd-session[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:03.384091 systemd-logind[1555]: New session 35 of user core. Sep 9 00:31:03.394745 systemd[1]: Started session-35.scope - Session 35 of User core. 
Sep 9 00:31:04.877998 containerd[1585]: time="2025-09-09T00:31:04.868558251Z" level=info msg="StopContainer for \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" with timeout 30 (s)" Sep 9 00:31:04.890455 containerd[1585]: time="2025-09-09T00:31:04.890405232Z" level=info msg="Stop container \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" with signal terminated" Sep 9 00:31:04.903489 systemd[1]: cri-containerd-29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8.scope: Deactivated successfully. Sep 9 00:31:04.905849 containerd[1585]: time="2025-09-09T00:31:04.905806768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" id:\"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" pid:3324 exited_at:{seconds:1757377864 nanos:905090210}" Sep 9 00:31:04.905934 containerd[1585]: time="2025-09-09T00:31:04.905886349Z" level=info msg="received exit event container_id:\"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" id:\"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" pid:3324 exited_at:{seconds:1757377864 nanos:905090210}" Sep 9 00:31:04.915096 containerd[1585]: time="2025-09-09T00:31:04.915011428Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:31:04.916188 containerd[1585]: time="2025-09-09T00:31:04.916151695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" id:\"9d0575713ba6cc634418a2e5226af0d0120159847ef8a88a960b9e19d5d064f4\" pid:4596 exited_at:{seconds:1757377864 nanos:915823306}" Sep 9 00:31:04.920738 containerd[1585]: time="2025-09-09T00:31:04.920685531Z" level=info 
msg="StopContainer for \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" with timeout 2 (s)" Sep 9 00:31:04.921166 containerd[1585]: time="2025-09-09T00:31:04.921131771Z" level=info msg="Stop container \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" with signal terminated" Sep 9 00:31:04.930341 systemd-networkd[1479]: lxc_health: Link DOWN Sep 9 00:31:04.930354 systemd-networkd[1479]: lxc_health: Lost carrier Sep 9 00:31:04.939562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8-rootfs.mount: Deactivated successfully. Sep 9 00:31:04.948418 containerd[1585]: time="2025-09-09T00:31:04.948360654Z" level=info msg="received exit event container_id:\"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" id:\"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" pid:3423 exited_at:{seconds:1757377864 nanos:947978525}" Sep 9 00:31:04.948424 systemd[1]: cri-containerd-08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44.scope: Deactivated successfully. Sep 9 00:31:04.948947 systemd[1]: cri-containerd-08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44.scope: Consumed 9.507s CPU time, 125.7M memory peak, 1M read from disk, 13.3M written to disk. 
Sep 9 00:31:04.949175 containerd[1585]: time="2025-09-09T00:31:04.949094025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" id:\"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" pid:3423 exited_at:{seconds:1757377864 nanos:947978525}" Sep 9 00:31:04.959829 containerd[1585]: time="2025-09-09T00:31:04.959581069Z" level=info msg="StopContainer for \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" returns successfully" Sep 9 00:31:04.962334 containerd[1585]: time="2025-09-09T00:31:04.962288367Z" level=info msg="StopPodSandbox for \"ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a\"" Sep 9 00:31:04.968252 containerd[1585]: time="2025-09-09T00:31:04.968185799Z" level=info msg="Container to stop \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:31:04.974645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44-rootfs.mount: Deactivated successfully. Sep 9 00:31:04.977193 systemd[1]: cri-containerd-ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a.scope: Deactivated successfully. Sep 9 00:31:04.977770 systemd[1]: cri-containerd-ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a.scope: Consumed 46ms CPU time, 5.2M memory peak, 3.2M read from disk. 
Sep 9 00:31:04.984179 containerd[1585]: time="2025-09-09T00:31:04.984140086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a\" id:\"ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a\" pid:2970 exit_status:137 exited_at:{seconds:1757377864 nanos:983845091}" Sep 9 00:31:05.008485 containerd[1585]: time="2025-09-09T00:31:05.008428664Z" level=info msg="StopContainer for \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" returns successfully" Sep 9 00:31:05.009139 containerd[1585]: time="2025-09-09T00:31:05.009112913Z" level=info msg="StopPodSandbox for \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\"" Sep 9 00:31:05.009188 containerd[1585]: time="2025-09-09T00:31:05.009179317Z" level=info msg="Container to stop \"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:31:05.009245 containerd[1585]: time="2025-09-09T00:31:05.009190659Z" level=info msg="Container to stop \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:31:05.009245 containerd[1585]: time="2025-09-09T00:31:05.009201759Z" level=info msg="Container to stop \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:31:05.009245 containerd[1585]: time="2025-09-09T00:31:05.009209714Z" level=info msg="Container to stop \"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:31:05.009245 containerd[1585]: time="2025-09-09T00:31:05.009217279Z" level=info msg="Container to stop \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Sep 9 00:31:05.017423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a-rootfs.mount: Deactivated successfully. Sep 9 00:31:05.018856 systemd[1]: cri-containerd-91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6.scope: Deactivated successfully. Sep 9 00:31:05.026148 containerd[1585]: time="2025-09-09T00:31:05.026064968Z" level=info msg="shim disconnected" id=ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a namespace=k8s.io Sep 9 00:31:05.026148 containerd[1585]: time="2025-09-09T00:31:05.026114150Z" level=warning msg="cleaning up after shim disconnected" id=ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a namespace=k8s.io Sep 9 00:31:05.044082 containerd[1585]: time="2025-09-09T00:31:05.026127184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:31:05.059473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6-rootfs.mount: Deactivated successfully. 
Sep 9 00:31:05.061892 containerd[1585]: time="2025-09-09T00:31:05.061848315Z" level=info msg="shim disconnected" id=91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6 namespace=k8s.io Sep 9 00:31:05.061892 containerd[1585]: time="2025-09-09T00:31:05.061888390Z" level=warning msg="cleaning up after shim disconnected" id=91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6 namespace=k8s.io Sep 9 00:31:05.062048 containerd[1585]: time="2025-09-09T00:31:05.061904009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:31:05.070368 containerd[1585]: time="2025-09-09T00:31:05.070287984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" id:\"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" pid:2972 exit_status:137 exited_at:{seconds:1757377865 nanos:19155940}" Sep 9 00:31:05.072572 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a-shm.mount: Deactivated successfully. 
Sep 9 00:31:05.077921 containerd[1585]: time="2025-09-09T00:31:05.077870069Z" level=info msg="received exit event sandbox_id:\"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" exit_status:137 exited_at:{seconds:1757377865 nanos:19155940}" Sep 9 00:31:05.079265 containerd[1585]: time="2025-09-09T00:31:05.078791153Z" level=info msg="received exit event sandbox_id:\"ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a\" exit_status:137 exited_at:{seconds:1757377864 nanos:983845091}" Sep 9 00:31:05.083570 containerd[1585]: time="2025-09-09T00:31:05.083496211Z" level=info msg="TearDown network for sandbox \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" successfully" Sep 9 00:31:05.083570 containerd[1585]: time="2025-09-09T00:31:05.083548640Z" level=info msg="StopPodSandbox for \"91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6\" returns successfully" Sep 9 00:31:05.087892 containerd[1585]: time="2025-09-09T00:31:05.087850287Z" level=info msg="TearDown network for sandbox \"ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a\" successfully" Sep 9 00:31:05.087892 containerd[1585]: time="2025-09-09T00:31:05.087886877Z" level=info msg="StopPodSandbox for \"ade8ca9cdb67788fc1137262670a333ff763a7101c10aabb452fbf3ed6ee219a\" returns successfully" Sep 9 00:31:05.217485 kubelet[2783]: I0909 00:31:05.217403 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-hostproc\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.217485 kubelet[2783]: I0909 00:31:05.217469 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-xtables-lock\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: 
\"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.217485 kubelet[2783]: I0909 00:31:05.217493 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-lib-modules\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218117 kubelet[2783]: I0909 00:31:05.217522 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p7dz\" (UniqueName: \"kubernetes.io/projected/333c568a-1b02-486b-82d0-8e6f2887b470-kube-api-access-4p7dz\") pod \"333c568a-1b02-486b-82d0-8e6f2887b470\" (UID: \"333c568a-1b02-486b-82d0-8e6f2887b470\") " Sep 9 00:31:05.218117 kubelet[2783]: I0909 00:31:05.217554 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cni-path\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218117 kubelet[2783]: I0909 00:31:05.217576 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/902923d8-9055-4891-9346-e5e9a8cef271-hubble-tls\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218117 kubelet[2783]: I0909 00:31:05.217627 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-etc-cni-netd\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218117 kubelet[2783]: I0909 00:31:05.217655 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/902923d8-9055-4891-9346-e5e9a8cef271-cilium-config-path\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218117 kubelet[2783]: I0909 00:31:05.217674 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cilium-run\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218280 kubelet[2783]: I0909 00:31:05.217650 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-hostproc" (OuterVolumeSpecName: "hostproc") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:31:05.218280 kubelet[2783]: I0909 00:31:05.217690 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cilium-cgroup\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218280 kubelet[2783]: I0909 00:31:05.217744 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:31:05.218280 kubelet[2783]: I0909 00:31:05.217780 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cni-path" (OuterVolumeSpecName: "cni-path") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:31:05.218280 kubelet[2783]: I0909 00:31:05.217785 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-bpf-maps\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218406 kubelet[2783]: I0909 00:31:05.217813 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/333c568a-1b02-486b-82d0-8e6f2887b470-cilium-config-path\") pod \"333c568a-1b02-486b-82d0-8e6f2887b470\" (UID: \"333c568a-1b02-486b-82d0-8e6f2887b470\") " Sep 9 00:31:05.218406 kubelet[2783]: I0909 00:31:05.217834 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-host-proc-sys-net\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218406 kubelet[2783]: I0909 00:31:05.217857 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/902923d8-9055-4891-9346-e5e9a8cef271-clustermesh-secrets\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218406 kubelet[2783]: I0909 00:31:05.217873 2783 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hsf26\" (UniqueName: \"kubernetes.io/projected/902923d8-9055-4891-9346-e5e9a8cef271-kube-api-access-hsf26\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218406 kubelet[2783]: I0909 00:31:05.217888 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-host-proc-sys-kernel\") pod \"902923d8-9055-4891-9346-e5e9a8cef271\" (UID: \"902923d8-9055-4891-9346-e5e9a8cef271\") " Sep 9 00:31:05.218406 kubelet[2783]: I0909 00:31:05.217938 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.218663 kubelet[2783]: I0909 00:31:05.217948 2783 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.218663 kubelet[2783]: I0909 00:31:05.217957 2783 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.218663 kubelet[2783]: I0909 00:31:05.217980 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:31:05.218663 kubelet[2783]: I0909 00:31:05.218000 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:31:05.222628 kubelet[2783]: I0909 00:31:05.222181 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333c568a-1b02-486b-82d0-8e6f2887b470-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "333c568a-1b02-486b-82d0-8e6f2887b470" (UID: "333c568a-1b02-486b-82d0-8e6f2887b470"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:31:05.222628 kubelet[2783]: I0909 00:31:05.222291 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902923d8-9055-4891-9346-e5e9a8cef271-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:31:05.222628 kubelet[2783]: I0909 00:31:05.222348 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:31:05.222628 kubelet[2783]: I0909 00:31:05.222375 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:31:05.222628 kubelet[2783]: I0909 00:31:05.222398 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:31:05.223347 kubelet[2783]: I0909 00:31:05.223323 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902923d8-9055-4891-9346-e5e9a8cef271-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:31:05.223452 kubelet[2783]: I0909 00:31:05.223438 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:31:05.223528 kubelet[2783]: I0909 00:31:05.223514 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:31:05.224020 kubelet[2783]: I0909 00:31:05.223962 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/333c568a-1b02-486b-82d0-8e6f2887b470-kube-api-access-4p7dz" (OuterVolumeSpecName: "kube-api-access-4p7dz") pod "333c568a-1b02-486b-82d0-8e6f2887b470" (UID: "333c568a-1b02-486b-82d0-8e6f2887b470"). InnerVolumeSpecName "kube-api-access-4p7dz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:31:05.225986 kubelet[2783]: I0909 00:31:05.225939 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902923d8-9055-4891-9346-e5e9a8cef271-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:31:05.226206 kubelet[2783]: I0909 00:31:05.226177 2783 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902923d8-9055-4891-9346-e5e9a8cef271-kube-api-access-hsf26" (OuterVolumeSpecName: "kube-api-access-hsf26") pod "902923d8-9055-4891-9346-e5e9a8cef271" (UID: "902923d8-9055-4891-9346-e5e9a8cef271"). InnerVolumeSpecName "kube-api-access-hsf26". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:31:05.318680 kubelet[2783]: I0909 00:31:05.318544 2783 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/902923d8-9055-4891-9346-e5e9a8cef271-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.318680 kubelet[2783]: I0909 00:31:05.318638 2783 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.318680 kubelet[2783]: I0909 00:31:05.318652 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/333c568a-1b02-486b-82d0-8e6f2887b470-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.318680 kubelet[2783]: I0909 00:31:05.318671 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/902923d8-9055-4891-9346-e5e9a8cef271-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.318680 kubelet[2783]: I0909 00:31:05.318681 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.318680 kubelet[2783]: I0909 00:31:05.318689 2783 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.318680 kubelet[2783]: I0909 00:31:05.318696 2783 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.318680 kubelet[2783]: I0909 00:31:05.318703 2783 
reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/902923d8-9055-4891-9346-e5e9a8cef271-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.319055 kubelet[2783]: I0909 00:31:05.318711 2783 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hsf26\" (UniqueName: \"kubernetes.io/projected/902923d8-9055-4891-9346-e5e9a8cef271-kube-api-access-hsf26\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.319055 kubelet[2783]: I0909 00:31:05.318720 2783 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.319055 kubelet[2783]: I0909 00:31:05.318729 2783 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.319055 kubelet[2783]: I0909 00:31:05.318738 2783 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/902923d8-9055-4891-9346-e5e9a8cef271-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.319055 kubelet[2783]: I0909 00:31:05.318745 2783 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4p7dz\" (UniqueName: \"kubernetes.io/projected/333c568a-1b02-486b-82d0-8e6f2887b470-kube-api-access-4p7dz\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:05.476876 kubelet[2783]: I0909 00:31:05.476664 2783 scope.go:117] "RemoveContainer" containerID="29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8" Sep 9 00:31:05.479392 containerd[1585]: time="2025-09-09T00:31:05.479353643Z" level=info msg="RemoveContainer for \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\"" Sep 9 00:31:05.485403 containerd[1585]: 
time="2025-09-09T00:31:05.485354040Z" level=info msg="RemoveContainer for \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" returns successfully" Sep 9 00:31:05.489255 systemd[1]: Removed slice kubepods-besteffort-pod333c568a_1b02_486b_82d0_8e6f2887b470.slice - libcontainer container kubepods-besteffort-pod333c568a_1b02_486b_82d0_8e6f2887b470.slice. Sep 9 00:31:05.489381 systemd[1]: kubepods-besteffort-pod333c568a_1b02_486b_82d0_8e6f2887b470.slice: Consumed 495ms CPU time, 28.5M memory peak, 3.2M read from disk, 4K written to disk. Sep 9 00:31:05.490664 kubelet[2783]: I0909 00:31:05.490194 2783 scope.go:117] "RemoveContainer" containerID="29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8" Sep 9 00:31:05.490608 systemd[1]: Removed slice kubepods-burstable-pod902923d8_9055_4891_9346_e5e9a8cef271.slice - libcontainer container kubepods-burstable-pod902923d8_9055_4891_9346_e5e9a8cef271.slice. Sep 9 00:31:05.490695 systemd[1]: kubepods-burstable-pod902923d8_9055_4891_9346_e5e9a8cef271.slice: Consumed 9.637s CPU time, 126M memory peak, 1.1M read from disk, 13.3M written to disk. 
Sep 9 00:31:05.498213 containerd[1585]: time="2025-09-09T00:31:05.492219065Z" level=error msg="ContainerStatus for \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\": not found" Sep 9 00:31:05.499628 kubelet[2783]: E0909 00:31:05.499574 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\": not found" containerID="29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8" Sep 9 00:31:05.499800 kubelet[2783]: I0909 00:31:05.499644 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8"} err="failed to get container status \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"29d99ccb10a346af10bd70a6fb0221917440a0cb0111172b3ac1937956fad4a8\": not found" Sep 9 00:31:05.499800 kubelet[2783]: I0909 00:31:05.499790 2783 scope.go:117] "RemoveContainer" containerID="08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44" Sep 9 00:31:05.502178 containerd[1585]: time="2025-09-09T00:31:05.502146996Z" level=info msg="RemoveContainer for \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\"" Sep 9 00:31:05.507889 containerd[1585]: time="2025-09-09T00:31:05.507841417Z" level=info msg="RemoveContainer for \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" returns successfully" Sep 9 00:31:05.508184 kubelet[2783]: I0909 00:31:05.508138 2783 scope.go:117] "RemoveContainer" containerID="f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5" Sep 9 00:31:05.511279 
containerd[1585]: time="2025-09-09T00:31:05.511235977Z" level=info msg="RemoveContainer for \"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\"" Sep 9 00:31:05.517068 containerd[1585]: time="2025-09-09T00:31:05.517030847Z" level=info msg="RemoveContainer for \"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\" returns successfully" Sep 9 00:31:05.517261 kubelet[2783]: I0909 00:31:05.517230 2783 scope.go:117] "RemoveContainer" containerID="d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7" Sep 9 00:31:05.522648 containerd[1585]: time="2025-09-09T00:31:05.522322329Z" level=info msg="RemoveContainer for \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\"" Sep 9 00:31:05.527560 containerd[1585]: time="2025-09-09T00:31:05.527528711Z" level=info msg="RemoveContainer for \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\" returns successfully" Sep 9 00:31:05.527743 kubelet[2783]: I0909 00:31:05.527719 2783 scope.go:117] "RemoveContainer" containerID="a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67" Sep 9 00:31:05.536094 containerd[1585]: time="2025-09-09T00:31:05.536056255Z" level=info msg="RemoveContainer for \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\"" Sep 9 00:31:05.540111 containerd[1585]: time="2025-09-09T00:31:05.540087725Z" level=info msg="RemoveContainer for \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\" returns successfully" Sep 9 00:31:05.540358 kubelet[2783]: I0909 00:31:05.540330 2783 scope.go:117] "RemoveContainer" containerID="65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322" Sep 9 00:31:05.542134 containerd[1585]: time="2025-09-09T00:31:05.541774551Z" level=info msg="RemoveContainer for \"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\"" Sep 9 00:31:05.545274 containerd[1585]: time="2025-09-09T00:31:05.545234003Z" level=info msg="RemoveContainer for 
\"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\" returns successfully" Sep 9 00:31:05.545441 kubelet[2783]: I0909 00:31:05.545385 2783 scope.go:117] "RemoveContainer" containerID="08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44" Sep 9 00:31:05.545622 containerd[1585]: time="2025-09-09T00:31:05.545553565Z" level=error msg="ContainerStatus for \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\": not found" Sep 9 00:31:05.545718 kubelet[2783]: E0909 00:31:05.545681 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\": not found" containerID="08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44" Sep 9 00:31:05.545809 kubelet[2783]: I0909 00:31:05.545742 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44"} err="failed to get container status \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\": rpc error: code = NotFound desc = an error occurred when try to find container \"08aa20bc6a7c64501c87623ced6d8fc13344c0e627b294e5749fb12791546b44\": not found" Sep 9 00:31:05.545809 kubelet[2783]: I0909 00:31:05.545772 2783 scope.go:117] "RemoveContainer" containerID="f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5" Sep 9 00:31:05.545948 containerd[1585]: time="2025-09-09T00:31:05.545907432Z" level=error msg="ContainerStatus for \"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\": not found" Sep 9 00:31:05.546132 kubelet[2783]: E0909 00:31:05.546083 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\": not found" containerID="f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5" Sep 9 00:31:05.546200 kubelet[2783]: I0909 00:31:05.546133 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5"} err="failed to get container status \"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f71540ef56559d023a5d0adea7146bfa2bab5e7f068aebb500510d9cdc3010b5\": not found" Sep 9 00:31:05.546200 kubelet[2783]: I0909 00:31:05.546176 2783 scope.go:117] "RemoveContainer" containerID="d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7" Sep 9 00:31:05.546488 containerd[1585]: time="2025-09-09T00:31:05.546418364Z" level=error msg="ContainerStatus for \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\": not found" Sep 9 00:31:05.546568 kubelet[2783]: E0909 00:31:05.546540 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\": not found" containerID="d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7" Sep 9 00:31:05.546636 kubelet[2783]: I0909 00:31:05.546572 2783 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7"} err="failed to get container status \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d271d86b94b040c61c74e71213a301edc6da186614340822cf9ef28751bb5ee7\": not found" Sep 9 00:31:05.546636 kubelet[2783]: I0909 00:31:05.546620 2783 scope.go:117] "RemoveContainer" containerID="a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67" Sep 9 00:31:05.546817 containerd[1585]: time="2025-09-09T00:31:05.546785083Z" level=error msg="ContainerStatus for \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\": not found" Sep 9 00:31:05.546937 kubelet[2783]: E0909 00:31:05.546910 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\": not found" containerID="a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67" Sep 9 00:31:05.546992 kubelet[2783]: I0909 00:31:05.546939 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67"} err="failed to get container status \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8800dcfd65f86d96fb3c61ba1d494c2807ed03bde8bc9fbcd4625602a24ac67\": not found" Sep 9 00:31:05.546992 kubelet[2783]: I0909 00:31:05.546955 2783 scope.go:117] "RemoveContainer" containerID="65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322" Sep 9 00:31:05.547176 containerd[1585]: 
time="2025-09-09T00:31:05.547120475Z" level=error msg="ContainerStatus for \"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\": not found" Sep 9 00:31:05.547323 kubelet[2783]: E0909 00:31:05.547282 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\": not found" containerID="65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322" Sep 9 00:31:05.547323 kubelet[2783]: I0909 00:31:05.547312 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322"} err="failed to get container status \"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\": rpc error: code = NotFound desc = an error occurred when try to find container \"65786cee575cdf0b138cb70501ae48bd0481e171983d546ec38fe00fe3ee2322\": not found" Sep 9 00:31:05.937923 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91ffb8224006bfd62a6e1c2f733e0cedb82d846a6986a867b5507b8cb165fbb6-shm.mount: Deactivated successfully. Sep 9 00:31:05.938060 systemd[1]: var-lib-kubelet-pods-333c568a\x2d1b02\x2d486b\x2d82d0\x2d8e6f2887b470-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4p7dz.mount: Deactivated successfully. Sep 9 00:31:05.938224 systemd[1]: var-lib-kubelet-pods-902923d8\x2d9055\x2d4891\x2d9346\x2de5e9a8cef271-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhsf26.mount: Deactivated successfully. Sep 9 00:31:05.938385 systemd[1]: var-lib-kubelet-pods-902923d8\x2d9055\x2d4891\x2d9346\x2de5e9a8cef271-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 9 00:31:05.938481 systemd[1]: var-lib-kubelet-pods-902923d8\x2d9055\x2d4891\x2d9346\x2de5e9a8cef271-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:31:06.207881 kubelet[2783]: E0909 00:31:06.207721 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:06.210378 kubelet[2783]: I0909 00:31:06.210315 2783 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="333c568a-1b02-486b-82d0-8e6f2887b470" path="/var/lib/kubelet/pods/333c568a-1b02-486b-82d0-8e6f2887b470/volumes" Sep 9 00:31:06.211050 kubelet[2783]: I0909 00:31:06.210985 2783 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902923d8-9055-4891-9346-e5e9a8cef271" path="/var/lib/kubelet/pods/902923d8-9055-4891-9346-e5e9a8cef271/volumes" Sep 9 00:31:06.827340 sshd[4575]: Connection closed by 10.0.0.1 port 50416 Sep 9 00:31:06.828274 sshd-session[4572]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:06.839276 systemd[1]: sshd@35-10.0.0.55:22-10.0.0.1:50416.service: Deactivated successfully. Sep 9 00:31:06.841680 systemd[1]: session-35.scope: Deactivated successfully. Sep 9 00:31:06.842632 systemd-logind[1555]: Session 35 logged out. Waiting for processes to exit. Sep 9 00:31:06.845544 systemd[1]: Started sshd@36-10.0.0.55:22-10.0.0.1:50424.service - OpenSSH per-connection server daemon (10.0.0.1:50424). Sep 9 00:31:06.846193 systemd-logind[1555]: Removed session 35. Sep 9 00:31:06.915141 sshd[4723]: Accepted publickey for core from 10.0.0.1 port 50424 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:31:06.917123 sshd-session[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:06.922088 systemd-logind[1555]: New session 36 of user core. Sep 9 00:31:06.936851 systemd[1]: Started session-36.scope - Session 36 of User core. 
Sep 9 00:31:07.290279 kubelet[2783]: E0909 00:31:07.290223 2783 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:31:07.635686 sshd[4726]: Connection closed by 10.0.0.1 port 50424 Sep 9 00:31:07.636067 sshd-session[4723]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:07.648775 systemd[1]: sshd@36-10.0.0.55:22-10.0.0.1:50424.service: Deactivated successfully. Sep 9 00:31:07.655302 systemd[1]: session-36.scope: Deactivated successfully. Sep 9 00:31:07.658352 systemd-logind[1555]: Session 36 logged out. Waiting for processes to exit. Sep 9 00:31:07.661930 systemd[1]: Started sshd@37-10.0.0.55:22-10.0.0.1:50432.service - OpenSSH per-connection server daemon (10.0.0.1:50432). Sep 9 00:31:07.667350 systemd-logind[1555]: Removed session 36. Sep 9 00:31:07.671689 kubelet[2783]: I0909 00:31:07.671644 2783 memory_manager.go:355] "RemoveStaleState removing state" podUID="902923d8-9055-4891-9346-e5e9a8cef271" containerName="cilium-agent" Sep 9 00:31:07.671689 kubelet[2783]: I0909 00:31:07.671676 2783 memory_manager.go:355] "RemoveStaleState removing state" podUID="333c568a-1b02-486b-82d0-8e6f2887b470" containerName="cilium-operator" Sep 9 00:31:07.699763 systemd[1]: Created slice kubepods-burstable-pod0875b9ec_97b3_461a_98dd_472604a5ae14.slice - libcontainer container kubepods-burstable-pod0875b9ec_97b3_461a_98dd_472604a5ae14.slice. 
Sep 9 00:31:07.732286 kubelet[2783]: I0909 00:31:07.732212 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqlvq\" (UniqueName: \"kubernetes.io/projected/0875b9ec-97b3-461a-98dd-472604a5ae14-kube-api-access-qqlvq\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732286 kubelet[2783]: I0909 00:31:07.732269 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0875b9ec-97b3-461a-98dd-472604a5ae14-bpf-maps\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732286 kubelet[2783]: I0909 00:31:07.732288 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0875b9ec-97b3-461a-98dd-472604a5ae14-etc-cni-netd\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732286 kubelet[2783]: I0909 00:31:07.732301 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0875b9ec-97b3-461a-98dd-472604a5ae14-xtables-lock\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732520 kubelet[2783]: I0909 00:31:07.732316 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0875b9ec-97b3-461a-98dd-472604a5ae14-cilium-config-path\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732520 kubelet[2783]: I0909 00:31:07.732330 2783 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0875b9ec-97b3-461a-98dd-472604a5ae14-hubble-tls\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732520 kubelet[2783]: I0909 00:31:07.732348 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0875b9ec-97b3-461a-98dd-472604a5ae14-hostproc\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732520 kubelet[2783]: I0909 00:31:07.732361 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0875b9ec-97b3-461a-98dd-472604a5ae14-cilium-cgroup\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732520 kubelet[2783]: I0909 00:31:07.732374 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0875b9ec-97b3-461a-98dd-472604a5ae14-lib-modules\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732520 kubelet[2783]: I0909 00:31:07.732387 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0875b9ec-97b3-461a-98dd-472604a5ae14-host-proc-sys-kernel\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732741 kubelet[2783]: I0909 00:31:07.732401 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0875b9ec-97b3-461a-98dd-472604a5ae14-host-proc-sys-net\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732741 kubelet[2783]: I0909 00:31:07.732420 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0875b9ec-97b3-461a-98dd-472604a5ae14-cilium-run\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732741 kubelet[2783]: I0909 00:31:07.732435 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0875b9ec-97b3-461a-98dd-472604a5ae14-clustermesh-secrets\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732741 kubelet[2783]: I0909 00:31:07.732449 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0875b9ec-97b3-461a-98dd-472604a5ae14-cni-path\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.732741 kubelet[2783]: I0909 00:31:07.732470 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0875b9ec-97b3-461a-98dd-472604a5ae14-cilium-ipsec-secrets\") pod \"cilium-bpwft\" (UID: \"0875b9ec-97b3-461a-98dd-472604a5ae14\") " pod="kube-system/cilium-bpwft" Sep 9 00:31:07.737384 sshd[4738]: Accepted publickey for core from 10.0.0.1 port 50432 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:31:07.739180 sshd-session[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:07.744288 systemd-logind[1555]: New 
session 37 of user core. Sep 9 00:31:07.756752 systemd[1]: Started session-37.scope - Session 37 of User core. Sep 9 00:31:07.809028 sshd[4741]: Connection closed by 10.0.0.1 port 50432 Sep 9 00:31:07.809459 sshd-session[4738]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:07.818522 systemd[1]: sshd@37-10.0.0.55:22-10.0.0.1:50432.service: Deactivated successfully. Sep 9 00:31:07.820407 systemd[1]: session-37.scope: Deactivated successfully. Sep 9 00:31:07.821271 systemd-logind[1555]: Session 37 logged out. Waiting for processes to exit. Sep 9 00:31:07.824208 systemd[1]: Started sshd@38-10.0.0.55:22-10.0.0.1:50448.service - OpenSSH per-connection server daemon (10.0.0.1:50448). Sep 9 00:31:07.824847 systemd-logind[1555]: Removed session 37. Sep 9 00:31:07.890256 sshd[4749]: Accepted publickey for core from 10.0.0.1 port 50448 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:31:07.892049 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:07.897814 systemd-logind[1555]: New session 38 of user core. Sep 9 00:31:07.904704 systemd[1]: Started session-38.scope - Session 38 of User core. 
Sep 9 00:31:08.004107 kubelet[2783]: E0909 00:31:08.004043 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:08.004748 containerd[1585]: time="2025-09-09T00:31:08.004706335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bpwft,Uid:0875b9ec-97b3-461a-98dd-472604a5ae14,Namespace:kube-system,Attempt:0,}" Sep 9 00:31:08.030490 containerd[1585]: time="2025-09-09T00:31:08.030420623Z" level=info msg="connecting to shim 066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd" address="unix:///run/containerd/s/223de88e96cc88648170dba41001167cef08600c611932e5b6f329a874a67207" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:08.067859 systemd[1]: Started cri-containerd-066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd.scope - libcontainer container 066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd. 
Sep 9 00:31:08.100306 containerd[1585]: time="2025-09-09T00:31:08.100254497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bpwft,Uid:0875b9ec-97b3-461a-98dd-472604a5ae14,Namespace:kube-system,Attempt:0,} returns sandbox id \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\"" Sep 9 00:31:08.101219 kubelet[2783]: E0909 00:31:08.101194 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:08.103711 containerd[1585]: time="2025-09-09T00:31:08.103661731Z" level=info msg="CreateContainer within sandbox \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:31:08.114816 containerd[1585]: time="2025-09-09T00:31:08.114733384Z" level=info msg="Container 37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:08.135871 containerd[1585]: time="2025-09-09T00:31:08.135813167Z" level=info msg="CreateContainer within sandbox \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e\"" Sep 9 00:31:08.136306 containerd[1585]: time="2025-09-09T00:31:08.136278163Z" level=info msg="StartContainer for \"37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e\"" Sep 9 00:31:08.137355 containerd[1585]: time="2025-09-09T00:31:08.137330605Z" level=info msg="connecting to shim 37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e" address="unix:///run/containerd/s/223de88e96cc88648170dba41001167cef08600c611932e5b6f329a874a67207" protocol=ttrpc version=3 Sep 9 00:31:08.161782 systemd[1]: Started cri-containerd-37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e.scope - libcontainer container 
37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e. Sep 9 00:31:08.203774 containerd[1585]: time="2025-09-09T00:31:08.203725745Z" level=info msg="StartContainer for \"37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e\" returns successfully" Sep 9 00:31:08.214331 systemd[1]: cri-containerd-37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e.scope: Deactivated successfully. Sep 9 00:31:08.217070 containerd[1585]: time="2025-09-09T00:31:08.217012879Z" level=info msg="received exit event container_id:\"37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e\" id:\"37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e\" pid:4824 exited_at:{seconds:1757377868 nanos:216742520}" Sep 9 00:31:08.217153 containerd[1585]: time="2025-09-09T00:31:08.217134448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e\" id:\"37d0b570bd6009c39b7f48394e7c1d6b2009ae523c556e1f6663005059cd6b8e\" pid:4824 exited_at:{seconds:1757377868 nanos:216742520}" Sep 9 00:31:08.492544 kubelet[2783]: E0909 00:31:08.492369 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:08.494025 containerd[1585]: time="2025-09-09T00:31:08.493983535Z" level=info msg="CreateContainer within sandbox \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:31:08.502725 containerd[1585]: time="2025-09-09T00:31:08.502671631Z" level=info msg="Container 04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:08.510417 containerd[1585]: time="2025-09-09T00:31:08.510377217Z" level=info msg="CreateContainer within sandbox \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\" 
for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a\""
Sep 9 00:31:08.511017 containerd[1585]: time="2025-09-09T00:31:08.510950626Z" level=info msg="StartContainer for \"04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a\""
Sep 9 00:31:08.512003 containerd[1585]: time="2025-09-09T00:31:08.511968713Z" level=info msg="connecting to shim 04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a" address="unix:///run/containerd/s/223de88e96cc88648170dba41001167cef08600c611932e5b6f329a874a67207" protocol=ttrpc version=3
Sep 9 00:31:08.539066 systemd[1]: Started cri-containerd-04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a.scope - libcontainer container 04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a.
Sep 9 00:31:08.574664 containerd[1585]: time="2025-09-09T00:31:08.574614394Z" level=info msg="StartContainer for \"04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a\" returns successfully"
Sep 9 00:31:08.582199 systemd[1]: cri-containerd-04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a.scope: Deactivated successfully.
Sep 9 00:31:08.582882 containerd[1585]: time="2025-09-09T00:31:08.582849037Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a\" id:\"04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a\" pid:4868 exited_at:{seconds:1757377868 nanos:582335080}"
Sep 9 00:31:08.582955 containerd[1585]: time="2025-09-09T00:31:08.582888130Z" level=info msg="received exit event container_id:\"04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a\" id:\"04df1857cdd5fc470979c3e8b07bdd0364ed6c0963f311ed6dfed7eca3b5cc4a\" pid:4868 exited_at:{seconds:1757377868 nanos:582335080}"
Sep 9 00:31:09.496954 kubelet[2783]: E0909 00:31:09.496899 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:31:09.503834 containerd[1585]: time="2025-09-09T00:31:09.503763178Z" level=info msg="CreateContainer within sandbox \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 00:31:09.540180 containerd[1585]: time="2025-09-09T00:31:09.540110522Z" level=info msg="Container 2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:31:09.552165 containerd[1585]: time="2025-09-09T00:31:09.552098699Z" level=info msg="CreateContainer within sandbox \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a\""
Sep 9 00:31:09.552918 containerd[1585]: time="2025-09-09T00:31:09.552866895Z" level=info msg="StartContainer for \"2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a\""
Sep 9 00:31:09.554792 containerd[1585]: time="2025-09-09T00:31:09.554764217Z" level=info msg="connecting to shim 2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a" address="unix:///run/containerd/s/223de88e96cc88648170dba41001167cef08600c611932e5b6f329a874a67207" protocol=ttrpc version=3
Sep 9 00:31:09.582844 systemd[1]: Started cri-containerd-2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a.scope - libcontainer container 2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a.
Sep 9 00:31:09.631604 containerd[1585]: time="2025-09-09T00:31:09.631531130Z" level=info msg="StartContainer for \"2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a\" returns successfully"
Sep 9 00:31:09.631714 systemd[1]: cri-containerd-2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a.scope: Deactivated successfully.
Sep 9 00:31:09.633099 containerd[1585]: time="2025-09-09T00:31:09.633041984Z" level=info msg="received exit event container_id:\"2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a\" id:\"2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a\" pid:4912 exited_at:{seconds:1757377869 nanos:632472652}"
Sep 9 00:31:09.634654 containerd[1585]: time="2025-09-09T00:31:09.633761689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a\" id:\"2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a\" pid:4912 exited_at:{seconds:1757377869 nanos:632472652}"
Sep 9 00:31:09.660686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dc88e02cdbee03e19adc825ca955db404f0654af6a6b08451dca43c3226910a-rootfs.mount: Deactivated successfully.
Sep 9 00:31:10.502683 kubelet[2783]: E0909 00:31:10.502617 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:31:10.505207 containerd[1585]: time="2025-09-09T00:31:10.505146857Z" level=info msg="CreateContainer within sandbox \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:31:10.515439 containerd[1585]: time="2025-09-09T00:31:10.515388667Z" level=info msg="Container 9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:31:10.524864 containerd[1585]: time="2025-09-09T00:31:10.524815382Z" level=info msg="CreateContainer within sandbox \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c\""
Sep 9 00:31:10.525401 containerd[1585]: time="2025-09-09T00:31:10.525358114Z" level=info msg="StartContainer for \"9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c\""
Sep 9 00:31:10.526258 containerd[1585]: time="2025-09-09T00:31:10.526211411Z" level=info msg="connecting to shim 9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c" address="unix:///run/containerd/s/223de88e96cc88648170dba41001167cef08600c611932e5b6f329a874a67207" protocol=ttrpc version=3
Sep 9 00:31:10.551843 systemd[1]: Started cri-containerd-9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c.scope - libcontainer container 9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c.
Sep 9 00:31:10.584358 systemd[1]: cri-containerd-9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c.scope: Deactivated successfully.
Sep 9 00:31:10.585607 containerd[1585]: time="2025-09-09T00:31:10.585552828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c\" id:\"9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c\" pid:4951 exited_at:{seconds:1757377870 nanos:584510726}"
Sep 9 00:31:10.587676 containerd[1585]: time="2025-09-09T00:31:10.587618768Z" level=info msg="received exit event container_id:\"9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c\" id:\"9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c\" pid:4951 exited_at:{seconds:1757377870 nanos:584510726}"
Sep 9 00:31:10.590003 containerd[1585]: time="2025-09-09T00:31:10.589864945Z" level=info msg="StartContainer for \"9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c\" returns successfully"
Sep 9 00:31:10.611187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f451c5d34b25d1b75860b662bfe08e66de17d4f7bb395309d8503cd6dfa5c7c-rootfs.mount: Deactivated successfully.
Sep 9 00:31:11.508144 kubelet[2783]: E0909 00:31:11.508084 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:31:11.510585 containerd[1585]: time="2025-09-09T00:31:11.510538846Z" level=info msg="CreateContainer within sandbox \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:31:11.524562 containerd[1585]: time="2025-09-09T00:31:11.524506739Z" level=info msg="Container e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:31:11.534833 containerd[1585]: time="2025-09-09T00:31:11.534778505Z" level=info msg="CreateContainer within sandbox \"066f11a25e4c43268a071ee323ef57a749867a3c153db9406eb1fa6540a9f5cd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed\""
Sep 9 00:31:11.535396 containerd[1585]: time="2025-09-09T00:31:11.535360760Z" level=info msg="StartContainer for \"e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed\""
Sep 9 00:31:11.537637 containerd[1585]: time="2025-09-09T00:31:11.537003694Z" level=info msg="connecting to shim e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed" address="unix:///run/containerd/s/223de88e96cc88648170dba41001167cef08600c611932e5b6f329a874a67207" protocol=ttrpc version=3
Sep 9 00:31:11.560783 systemd[1]: Started cri-containerd-e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed.scope - libcontainer container e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed.
Sep 9 00:31:11.600508 containerd[1585]: time="2025-09-09T00:31:11.600430139Z" level=info msg="StartContainer for \"e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed\" returns successfully"
Sep 9 00:31:11.676160 containerd[1585]: time="2025-09-09T00:31:11.676099000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed\" id:\"2ded626113278c84dd89c56b3a8c3276bbb9f39bc23c08ab21e8c47701d56978\" pid:5020 exited_at:{seconds:1757377871 nanos:675492487}"
Sep 9 00:31:12.074632 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 9 00:31:12.515112 kubelet[2783]: E0909 00:31:12.515052 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:31:12.615383 kubelet[2783]: I0909 00:31:12.615299 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bpwft" podStartSLOduration=5.61526835 podStartE2EDuration="5.61526835s" podCreationTimestamp="2025-09-09 00:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:31:12.614990697 +0000 UTC m=+150.519362315" watchObservedRunningTime="2025-09-09 00:31:12.61526835 +0000 UTC m=+150.519639938"
Sep 9 00:31:14.005906 kubelet[2783]: E0909 00:31:14.005794 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:31:14.545629 containerd[1585]: time="2025-09-09T00:31:14.545559551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed\" id:\"1052f62420408cd1fca1a3165c7cd30996ca04a19235dc1fafb270737dee812e\" pid:5344 exit_status:1 exited_at:{seconds:1757377874 nanos:544955453}"
Sep 9 00:31:15.243794 systemd-networkd[1479]: lxc_health: Link UP
Sep 9 00:31:15.246139 systemd-networkd[1479]: lxc_health: Gained carrier
Sep 9 00:31:16.006421 kubelet[2783]: E0909 00:31:16.006280 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:31:16.523704 kubelet[2783]: E0909 00:31:16.523668 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:31:16.543897 systemd-networkd[1479]: lxc_health: Gained IPv6LL
Sep 9 00:31:16.662656 containerd[1585]: time="2025-09-09T00:31:16.662513479Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed\" id:\"f9e5252e8b5db2e437ee9a8200105f3f53875c420bd23e2a6d148c3d4c1bd1c3\" pid:5555 exited_at:{seconds:1757377876 nanos:661910955}"
Sep 9 00:31:17.525431 kubelet[2783]: E0909 00:31:17.525376 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:31:18.788170 containerd[1585]: time="2025-09-09T00:31:18.787884817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed\" id:\"559d530f47fe6d40d5579e348d03381cd24919f1de5248580ff3125e1a473415\" pid:5584 exited_at:{seconds:1757377878 nanos:787376610}"
Sep 9 00:31:20.207999 kubelet[2783]: E0909 00:31:20.207915 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:31:20.902700 containerd[1585]: time="2025-09-09T00:31:20.902647631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e02bc26aa45bf4992f42529c10a13d6803157f3d52d2d6ab4e2a521d0dd58eed\" id:\"ddb147dcce50fc1a0206f179c002647925d39115e0efffc1f0057d7863690ed5\" pid:5616 exited_at:{seconds:1757377880 nanos:902235929}"
Sep 9 00:31:20.909208 sshd[4758]: Connection closed by 10.0.0.1 port 50448
Sep 9 00:31:20.909645 sshd-session[4749]: pam_unix(sshd:session): session closed for user core
Sep 9 00:31:20.915861 systemd[1]: sshd@38-10.0.0.55:22-10.0.0.1:50448.service: Deactivated successfully.
Sep 9 00:31:20.918721 systemd[1]: session-38.scope: Deactivated successfully.
Sep 9 00:31:20.919800 systemd-logind[1555]: Session 38 logged out. Waiting for processes to exit.
Sep 9 00:31:20.921424 systemd-logind[1555]: Removed session 38.