May 16 00:14:37.890681 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:16:42 -00 2025
May 16 00:14:37.890703 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffa0077ec5e89092631d817251b58c64c9261c447bd6e8bcef43c52d5e74873e
May 16 00:14:37.890715 kernel: BIOS-provided physical RAM map:
May 16 00:14:37.890722 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 00:14:37.890728 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 00:14:37.890735 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 00:14:37.890742 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 00:14:37.890749 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 00:14:37.890755 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 00:14:37.890762 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 00:14:37.890768 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 16 00:14:37.890777 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 00:14:37.890784 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 00:14:37.890790 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 00:14:37.890798 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 00:14:37.890805 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 00:14:37.890814 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 16 00:14:37.890821 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 16 00:14:37.890828 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 16 00:14:37.890835 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 16 00:14:37.890842 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 00:14:37.890849 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 00:14:37.890856 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 00:14:37.890863 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 00:14:37.890870 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 00:14:37.890877 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 00:14:37.890884 kernel: NX (Execute Disable) protection: active
May 16 00:14:37.890893 kernel: APIC: Static calls initialized
May 16 00:14:37.890900 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 16 00:14:37.890908 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 16 00:14:37.890914 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 16 00:14:37.890921 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 16 00:14:37.890928 kernel: extended physical RAM map:
May 16 00:14:37.890935 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 00:14:37.890942 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 00:14:37.890949 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 00:14:37.890956 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 00:14:37.890963 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 00:14:37.890970 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 00:14:37.890980 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 00:14:37.890991 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 16 00:14:37.890998 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 16 00:14:37.891005 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 16 00:14:37.891012 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 16 00:14:37.891020 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 16 00:14:37.891029 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 00:14:37.891037 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 00:14:37.891044 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 00:14:37.891051 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 00:14:37.891058 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 00:14:37.891066 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 16 00:14:37.891073 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 16 00:14:37.891080 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 16 00:14:37.891087 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 16 00:14:37.891097 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 00:14:37.891105 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 00:14:37.891112 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 00:14:37.891119 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 00:14:37.891126 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 00:14:37.891134 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 00:14:37.891141 kernel: efi: EFI v2.7 by EDK II
May 16 00:14:37.891148 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 16 00:14:37.891155 kernel: random: crng init done
May 16 00:14:37.891173 kernel: efi: Remove mem141: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 16 00:14:37.891181 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 16 00:14:37.891196 kernel: secureboot: Secure boot disabled
May 16 00:14:37.891206 kernel: SMBIOS 2.8 present.
May 16 00:14:37.891227 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 16 00:14:37.891235 kernel: Hypervisor detected: KVM
May 16 00:14:37.891242 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 00:14:37.891250 kernel: kvm-clock: using sched offset of 2740956851 cycles
May 16 00:14:37.891257 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 00:14:37.891265 kernel: tsc: Detected 2794.748 MHz processor
May 16 00:14:37.891273 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 00:14:37.891280 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 00:14:37.891288 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 16 00:14:37.891298 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 16 00:14:37.891306 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 00:14:37.891313 kernel: Using GB pages for direct mapping
May 16 00:14:37.891320 kernel: ACPI: Early table checksum verification disabled
May 16 00:14:37.891328 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 16 00:14:37.891336 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 16 00:14:37.891343 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:14:37.891351 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:14:37.891358 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 16 00:14:37.891368 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:14:37.891376 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:14:37.891400 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:14:37.891408 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:14:37.891416 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 16 00:14:37.891423 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 16 00:14:37.891430 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 16 00:14:37.891438 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 16 00:14:37.891445 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 16 00:14:37.891455 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 16 00:14:37.891462 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 16 00:14:37.891470 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 16 00:14:37.891477 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 16 00:14:37.891484 kernel: No NUMA configuration found
May 16 00:14:37.891492 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 16 00:14:37.891499 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 16 00:14:37.891507 kernel: Zone ranges:
May 16 00:14:37.891514 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 00:14:37.891524 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 16 00:14:37.891531 kernel: Normal empty
May 16 00:14:37.891539 kernel: Movable zone start for each node
May 16 00:14:37.891546 kernel: Early memory node ranges
May 16 00:14:37.891553 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 16 00:14:37.891561 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 16 00:14:37.891568 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 16 00:14:37.891575 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 16 00:14:37.891583 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 16 00:14:37.891590 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 16 00:14:37.891599 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 16 00:14:37.891607 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 16 00:14:37.891614 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 16 00:14:37.891621 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 00:14:37.891629 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 16 00:14:37.891644 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 16 00:14:37.891654 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 00:14:37.891661 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 16 00:14:37.891677 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 16 00:14:37.891685 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 16 00:14:37.891692 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 16 00:14:37.891700 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 16 00:14:37.891710 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 00:14:37.891718 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 00:14:37.891726 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 00:14:37.891734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 00:14:37.891743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 00:14:37.891751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 00:14:37.891759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 00:14:37.891767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 00:14:37.891775 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 00:14:37.891783 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 16 00:14:37.891790 kernel: TSC deadline timer available
May 16 00:14:37.891798 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 16 00:14:37.891806 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 00:14:37.891813 kernel: kvm-guest: KVM setup pv remote TLB flush
May 16 00:14:37.891823 kernel: kvm-guest: setup PV sched yield
May 16 00:14:37.891831 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 16 00:14:37.891839 kernel: Booting paravirtualized kernel on KVM
May 16 00:14:37.891847 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 00:14:37.891854 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 16 00:14:37.891862 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 16 00:14:37.891870 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 16 00:14:37.891877 kernel: pcpu-alloc: [0] 0 1 2 3
May 16 00:14:37.891885 kernel: kvm-guest: PV spinlocks enabled
May 16 00:14:37.891895 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 16 00:14:37.891904 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffa0077ec5e89092631d817251b58c64c9261c447bd6e8bcef43c52d5e74873e
May 16 00:14:37.891912 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:14:37.891920 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 00:14:37.891928 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:14:37.891936 kernel: Fallback order for Node 0: 0
May 16 00:14:37.891943 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 16 00:14:37.891951 kernel: Policy zone: DMA32
May 16 00:14:37.891961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:14:37.891969 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 177824K reserved, 0K cma-reserved)
May 16 00:14:37.891977 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 00:14:37.891985 kernel: ftrace: allocating 37922 entries in 149 pages
May 16 00:14:37.891992 kernel: ftrace: allocated 149 pages with 4 groups
May 16 00:14:37.892000 kernel: Dynamic Preempt: voluntary
May 16 00:14:37.892008 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:14:37.892016 kernel: rcu: RCU event tracing is enabled.
May 16 00:14:37.892024 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 00:14:37.892034 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:14:37.892042 kernel: Rude variant of Tasks RCU enabled.
May 16 00:14:37.892050 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:14:37.892058 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:14:37.892065 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 00:14:37.892073 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 16 00:14:37.892081 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 00:14:37.892088 kernel: Console: colour dummy device 80x25
May 16 00:14:37.892096 kernel: printk: console [ttyS0] enabled
May 16 00:14:37.892106 kernel: ACPI: Core revision 20230628
May 16 00:14:37.892114 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 16 00:14:37.892122 kernel: APIC: Switch to symmetric I/O mode setup
May 16 00:14:37.892130 kernel: x2apic enabled
May 16 00:14:37.892137 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 00:14:37.892146 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 16 00:14:37.892154 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 16 00:14:37.892161 kernel: kvm-guest: setup PV IPIs
May 16 00:14:37.892169 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 00:14:37.892179 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 16 00:14:37.892187 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 16 00:14:37.892194 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 16 00:14:37.892202 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 16 00:14:37.892210 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 16 00:14:37.892218 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 00:14:37.892225 kernel: Spectre V2 : Mitigation: Retpolines
May 16 00:14:37.892233 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 16 00:14:37.892241 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 16 00:14:37.892251 kernel: RETBleed: Mitigation: untrained return thunk
May 16 00:14:37.892258 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 00:14:37.892266 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 00:14:37.892274 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 16 00:14:37.892282 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 16 00:14:37.892290 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 16 00:14:37.892302 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 00:14:37.892310 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 00:14:37.892324 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 00:14:37.892332 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 00:14:37.892343 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 00:14:37.892358 kernel: Freeing SMP alternatives memory: 32K
May 16 00:14:37.892375 kernel: pid_max: default: 32768 minimum: 301
May 16 00:14:37.892429 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 16 00:14:37.892443 kernel: landlock: Up and running.
May 16 00:14:37.892460 kernel: SELinux: Initializing.
May 16 00:14:37.892475 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:14:37.892495 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:14:37.892509 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 16 00:14:37.892523 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:14:37.892531 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:14:37.892539 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:14:37.892546 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 16 00:14:37.892554 kernel: ... version:                0
May 16 00:14:37.892562 kernel: ... bit width:              48
May 16 00:14:37.892569 kernel: ... generic registers:      6
May 16 00:14:37.892580 kernel: ... value mask:             0000ffffffffffff
May 16 00:14:37.892588 kernel: ... max period:             00007fffffffffff
May 16 00:14:37.892595 kernel: ... fixed-purpose events:   0
May 16 00:14:37.892603 kernel: ... event mask:             000000000000003f
May 16 00:14:37.892610 kernel: signal: max sigframe size: 1776
May 16 00:14:37.892618 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:14:37.892626 kernel: rcu: Max phase no-delay instances is 400.
May 16 00:14:37.892634 kernel: smp: Bringing up secondary CPUs ...
May 16 00:14:37.892641 kernel: smpboot: x86: Booting SMP configuration:
May 16 00:14:37.892651 kernel: .... node #0, CPUs: #1 #2 #3
May 16 00:14:37.892659 kernel: smp: Brought up 1 node, 4 CPUs
May 16 00:14:37.892675 kernel: smpboot: Max logical packages: 1
May 16 00:14:37.892683 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 16 00:14:37.892690 kernel: devtmpfs: initialized
May 16 00:14:37.892698 kernel: x86/mm: Memory block size: 128MB
May 16 00:14:37.892706 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 16 00:14:37.892714 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 16 00:14:37.892722 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 16 00:14:37.892729 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 16 00:14:37.892747 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 16 00:14:37.892757 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 16 00:14:37.892764 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:14:37.892772 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 00:14:37.892780 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:14:37.892788 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:14:37.892796 kernel: audit: initializing netlink subsys (disabled)
May 16 00:14:37.892804 kernel: audit: type=2000 audit(1747354477.735:1): state=initialized audit_enabled=0 res=1
May 16 00:14:37.892814 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:14:37.892822 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 00:14:37.892829 kernel: cpuidle: using governor menu
May 16 00:14:37.892837 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:14:37.892844 kernel: dca service started, version 1.12.1
May 16 00:14:37.892852 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 16 00:14:37.892860 kernel: PCI: Using configuration type 1 for base access
May 16 00:14:37.892868 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 00:14:37.892876 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:14:37.892886 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 00:14:37.892894 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:14:37.892901 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 00:14:37.892909 kernel: ACPI: Added _OSI(Module Device)
May 16 00:14:37.892917 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:14:37.892924 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:14:37.892932 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:14:37.892940 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:14:37.892948 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 16 00:14:37.892957 kernel: ACPI: Interpreter enabled
May 16 00:14:37.892965 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 00:14:37.892973 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 00:14:37.892980 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 00:14:37.892988 kernel: PCI: Using E820 reservations for host bridge windows
May 16 00:14:37.892996 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 16 00:14:37.893003 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 00:14:37.893191 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:14:37.893330 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 16 00:14:37.893502 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 16 00:14:37.893513 kernel: PCI host bridge to bus 0000:00
May 16 00:14:37.893641 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 00:14:37.893778 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 00:14:37.893895 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 00:14:37.894007 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 16 00:14:37.894132 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 16 00:14:37.894246 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 16 00:14:37.894359 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 00:14:37.894528 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 16 00:14:37.894673 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 16 00:14:37.894810 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 16 00:14:37.895006 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 16 00:14:37.895136 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 16 00:14:37.895262 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 16 00:14:37.895404 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 00:14:37.895553 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 16 00:14:37.895689 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 16 00:14:37.895848 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 16 00:14:37.895986 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 16 00:14:37.896121 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 16 00:14:37.896250 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 16 00:14:37.896377 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 16 00:14:37.896534 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 16 00:14:37.896681 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 16 00:14:37.896809 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 16 00:14:37.896941 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 16 00:14:37.897069 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 16 00:14:37.897198 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 16 00:14:37.897331 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 16 00:14:37.897521 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 16 00:14:37.897657 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 16 00:14:37.897793 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 16 00:14:37.897923 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 16 00:14:37.898061 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 16 00:14:37.898193 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 16 00:14:37.898204 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 00:14:37.898212 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 00:14:37.898220 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 00:14:37.898227 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 00:14:37.898239 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 16 00:14:37.898247 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 16 00:14:37.898255 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 16 00:14:37.898262 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 16 00:14:37.898270 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 16 00:14:37.898278 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 16 00:14:37.898286 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 16 00:14:37.898293 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 16 00:14:37.898301 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 16 00:14:37.898311 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 16 00:14:37.898319 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 16 00:14:37.898327 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 16 00:14:37.898334 kernel: iommu: Default domain type: Translated
May 16 00:14:37.898342 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 00:14:37.898349 kernel: efivars: Registered efivars operations
May 16 00:14:37.898357 kernel: PCI: Using ACPI for IRQ routing
May 16 00:14:37.898365 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 00:14:37.898373 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 16 00:14:37.898381 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 16 00:14:37.898416 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 16 00:14:37.898424 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 16 00:14:37.898432 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 16 00:14:37.898440 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 16 00:14:37.898447 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 16 00:14:37.898455 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 16 00:14:37.898583 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 16 00:14:37.898718 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 16 00:14:37.898848 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 00:14:37.898859 kernel: vgaarb: loaded
May 16 00:14:37.898867 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 16 00:14:37.898875 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 16 00:14:37.898883 kernel: clocksource: Switched to clocksource kvm-clock
May 16 00:14:37.898891 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:14:37.898899 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:14:37.898906 kernel: pnp: PnP ACPI init
May 16 00:14:37.899050 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 16 00:14:37.899066 kernel: pnp: PnP ACPI: found 6 devices
May 16 00:14:37.899074 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 00:14:37.899082 kernel: NET: Registered PF_INET protocol family
May 16 00:14:37.899090 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 00:14:37.899115 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 00:14:37.899127 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:14:37.899135 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:14:37.899143 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 00:14:37.899153 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 00:14:37.899162 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:14:37.899170 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:14:37.899178 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:14:37.899186 kernel: NET: Registered PF_XDP protocol family
May 16 00:14:37.899316 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 16 00:14:37.899496 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 16 00:14:37.899613 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 00:14:37.899742 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 00:14:37.899855 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 00:14:37.899968 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 16 00:14:37.900079 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 16 00:14:37.900191 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 16 00:14:37.900202 kernel: PCI: CLS 0 bytes, default 64
May 16 00:14:37.900210 kernel: Initialise system trusted keyrings
May 16 00:14:37.900218 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 00:14:37.900231 kernel: Key type asymmetric registered
May 16 00:14:37.900239 kernel: Asymmetric key parser 'x509' registered
May 16 00:14:37.900247 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 16 00:14:37.900255 kernel: io scheduler mq-deadline registered
May 16 00:14:37.900263 kernel: io scheduler kyber registered
May 16 00:14:37.900271 kernel: io scheduler bfq registered
May 16 00:14:37.900279 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 00:14:37.900287 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 16 00:14:37.900295 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 16 00:14:37.900307 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 16 00:14:37.900317 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 00:14:37.900326 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 00:14:37.900334 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 00:14:37.900342 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 00:14:37.900350 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 00:14:37.900361 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 16 00:14:37.900508 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 00:14:37.900627 kernel: rtc_cmos 00:04: registered as rtc0
May 16 00:14:37.900759 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T00:14:37 UTC (1747354477)
May 16 00:14:37.900879 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 16 00:14:37.900890 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 16 00:14:37.900898 kernel: efifb: probing for efifb
May 16 00:14:37.900906 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 16 00:14:37.900918 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 16 00:14:37.900926 kernel: efifb: scrolling: redraw
May 16 00:14:37.900935 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 16 00:14:37.900943 kernel: Console: switching to colour frame buffer device 160x50
May 16 00:14:37.900951 kernel: fb0: EFI VGA frame buffer device
May 16 00:14:37.900959 kernel: pstore: Using crash dump compression: deflate
May 16 00:14:37.900968 kernel: pstore: Registered efi_pstore as persistent store backend
May 16 00:14:37.900976 kernel: NET: Registered PF_INET6 protocol family
May 16 00:14:37.900984 kernel: Segment Routing with IPv6
May 16 00:14:37.900995 kernel: In-situ OAM (IOAM) with IPv6
May 16 00:14:37.901003 kernel: NET: Registered PF_PACKET protocol family
May 16 00:14:37.901011 kernel: Key type dns_resolver registered
May 16 00:14:37.901021 kernel: IPI shorthand broadcast: enabled
May 16 00:14:37.901030 kernel: sched_clock: Marking stable (616003063, 151925584)->(781163710, -13235063)
May 16 00:14:37.901038 kernel: registered taskstats version 1
May 16 00:14:37.901046 kernel: Loading compiled-in X.509 certificates
May 16 00:14:37.901054 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 22e80ca6ad28c00533ea5eb0843f23994a6e2a11'
May 16 00:14:37.901062 kernel: Key type .fscrypt registered
May 16 00:14:37.901073 kernel: Key type fscrypt-provisioning registered
May 16 00:14:37.901081 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 00:14:37.901089 kernel: ima: Allocated hash algorithm: sha1 May 16 00:14:37.901097 kernel: ima: No architecture policies found May 16 00:14:37.901105 kernel: clk: Disabling unused clocks May 16 00:14:37.901113 kernel: Freeing unused kernel image (initmem) memory: 43484K May 16 00:14:37.901121 kernel: Write protecting the kernel read-only data: 38912k May 16 00:14:37.901129 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 16 00:14:37.901137 kernel: Run /init as init process May 16 00:14:37.901148 kernel: with arguments: May 16 00:14:37.901156 kernel: /init May 16 00:14:37.901164 kernel: with environment: May 16 00:14:37.901172 kernel: HOME=/ May 16 00:14:37.901180 kernel: TERM=linux May 16 00:14:37.901188 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:14:37.901197 systemd[1]: Successfully made /usr/ read-only. May 16 00:14:37.901208 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 00:14:37.901220 systemd[1]: Detected virtualization kvm. May 16 00:14:37.901229 systemd[1]: Detected architecture x86-64. May 16 00:14:37.901238 systemd[1]: Running in initrd. May 16 00:14:37.901246 systemd[1]: No hostname configured, using default hostname. May 16 00:14:37.901255 systemd[1]: Hostname set to . May 16 00:14:37.901264 systemd[1]: Initializing machine ID from VM UUID. May 16 00:14:37.901272 systemd[1]: Queued start job for default target initrd.target. May 16 00:14:37.901281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:14:37.901293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 16 00:14:37.901303 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 00:14:37.901312 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:14:37.901321 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 00:14:37.901330 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 00:14:37.901340 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 00:14:37.901352 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 00:14:37.901361 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:14:37.901370 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:14:37.901378 systemd[1]: Reached target paths.target - Path Units. May 16 00:14:37.901399 systemd[1]: Reached target slices.target - Slice Units. May 16 00:14:37.901408 systemd[1]: Reached target swap.target - Swaps. May 16 00:14:37.901416 systemd[1]: Reached target timers.target - Timer Units. May 16 00:14:37.901425 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:14:37.901434 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:14:37.901445 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 00:14:37.901454 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 16 00:14:37.901463 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:14:37.901471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:14:37.901480 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 16 00:14:37.901489 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:14:37.901497 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 00:14:37.901506 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:14:37.901515 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 00:14:37.901526 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:14:37.901535 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:14:37.901543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:14:37.901552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:14:37.901560 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 00:14:37.901569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:14:37.901581 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:14:37.901590 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:14:37.901619 systemd-journald[193]: Collecting audit messages is disabled. May 16 00:14:37.901642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:14:37.901651 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:14:37.901660 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:14:37.901678 systemd-journald[193]: Journal started May 16 00:14:37.901696 systemd-journald[193]: Runtime Journal (/run/log/journal/a10bb9433e0f4a5fad8956b68ab7aec1) is 6M, max 48.2M, 42.2M free. May 16 00:14:37.901735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 16 00:14:37.893726 systemd-modules-load[194]: Inserted module 'overlay' May 16 00:14:37.906372 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:14:37.909248 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:14:37.918411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:14:37.921680 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:14:37.924562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:14:37.928531 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:14:37.929683 systemd-modules-load[194]: Inserted module 'br_netfilter' May 16 00:14:37.930643 kernel: Bridge firewalling registered May 16 00:14:37.944554 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 00:14:37.945842 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:14:37.949430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:14:37.957286 dracut-cmdline[223]: dracut-dracut-053 May 16 00:14:37.960331 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffa0077ec5e89092631d817251b58c64c9261c447bd6e8bcef43c52d5e74873e May 16 00:14:37.967699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:14:37.974552 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 16 00:14:38.010635 systemd-resolved[249]: Positive Trust Anchors: May 16 00:14:38.010650 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:14:38.010688 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:14:38.021292 systemd-resolved[249]: Defaulting to hostname 'linux'. May 16 00:14:38.023161 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:14:38.023282 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:14:38.047418 kernel: SCSI subsystem initialized May 16 00:14:38.056410 kernel: Loading iSCSI transport class v2.0-870. May 16 00:14:38.066412 kernel: iscsi: registered transport (tcp) May 16 00:14:38.087575 kernel: iscsi: registered transport (qla4xxx) May 16 00:14:38.087616 kernel: QLogic iSCSI HBA Driver May 16 00:14:38.139570 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 00:14:38.153519 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 00:14:38.178097 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 16 00:14:38.178131 kernel: device-mapper: uevent: version 1.0.3 May 16 00:14:38.178142 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 00:14:38.220410 kernel: raid6: avx2x4 gen() 25161 MB/s May 16 00:14:38.237408 kernel: raid6: avx2x2 gen() 27720 MB/s May 16 00:14:38.254767 kernel: raid6: avx2x1 gen() 23641 MB/s May 16 00:14:38.254782 kernel: raid6: using algorithm avx2x2 gen() 27720 MB/s May 16 00:14:38.272507 kernel: raid6: .... xor() 16966 MB/s, rmw enabled May 16 00:14:38.272522 kernel: raid6: using avx2x2 recovery algorithm May 16 00:14:38.294412 kernel: xor: automatically using best checksumming function avx May 16 00:14:38.446412 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 00:14:38.459921 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 00:14:38.478571 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:14:38.495495 systemd-udevd[414]: Using default interface naming scheme 'v255'. May 16 00:14:38.501994 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:14:38.514532 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 00:14:38.527477 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation May 16 00:14:38.558803 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:14:38.566530 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:14:38.631694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:14:38.639541 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 00:14:38.651339 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 00:14:38.654364 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 16 00:14:38.657176 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:14:38.659602 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:14:38.663404 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 16 00:14:38.667478 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 00:14:38.675542 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:14:38.675557 kernel: GPT:9289727 != 19775487 May 16 00:14:38.675569 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:14:38.675586 kernel: cryptd: max_cpu_qlen set to 1000 May 16 00:14:38.675597 kernel: GPT:9289727 != 19775487 May 16 00:14:38.675607 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:14:38.675617 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:14:38.667515 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 00:14:38.679424 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 00:14:38.696629 kernel: AVX2 version of gcm_enc/dec engaged. May 16 00:14:38.696679 kernel: AES CTR mode by8 optimization enabled May 16 00:14:38.696691 kernel: libata version 3.00 loaded. May 16 00:14:38.703401 kernel: ahci 0000:00:1f.2: version 3.0 May 16 00:14:38.704771 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 16 00:14:38.704793 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 16 00:14:38.706410 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 16 00:14:38.708563 kernel: scsi host0: ahci May 16 00:14:38.708821 kernel: scsi host1: ahci May 16 00:14:38.709013 kernel: scsi host2: ahci May 16 00:14:38.712541 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:14:38.713716 kernel: scsi host3: ahci May 16 00:14:38.712781 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 00:14:38.725599 kernel: scsi host4: ahci May 16 00:14:38.725782 kernel: scsi host5: ahci May 16 00:14:38.726044 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 16 00:14:38.726057 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 16 00:14:38.726067 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 16 00:14:38.726077 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 16 00:14:38.726091 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 16 00:14:38.726102 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 16 00:14:38.725707 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:14:38.728517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:14:38.730605 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473) May 16 00:14:38.730757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:14:38.732436 kernel: BTRFS: device fsid 7e35ecc6-4b22-44da-ae37-cf2eabf14492 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (469) May 16 00:14:38.735449 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:14:38.747570 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:14:38.762461 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:14:38.782254 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 00:14:38.793275 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 00:14:38.811289 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 16 00:14:38.821042 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 00:14:38.823599 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 00:14:38.839496 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 00:14:38.841846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:14:38.841901 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:14:38.845435 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:14:38.848256 disk-uuid[553]: Primary Header is updated. May 16 00:14:38.848256 disk-uuid[553]: Secondary Entries is updated. May 16 00:14:38.848256 disk-uuid[553]: Secondary Header is updated. May 16 00:14:38.850035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:14:38.853713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:14:38.856406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:14:38.867909 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:14:38.875513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:14:38.899394 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 00:14:39.033470 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 16 00:14:39.033509 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 16 00:14:39.033521 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 16 00:14:39.034414 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 16 00:14:39.034474 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 16 00:14:39.035408 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 16 00:14:39.036413 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 16 00:14:39.036429 kernel: ata3.00: applying bridge limits May 16 00:14:39.037415 kernel: ata3.00: configured for UDMA/100 May 16 00:14:39.039404 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 16 00:14:39.081994 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 16 00:14:39.082209 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 00:14:39.094446 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 16 00:14:39.859146 disk-uuid[555]: The operation has completed successfully. May 16 00:14:39.860527 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:14:39.896895 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:14:39.897060 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 00:14:39.945501 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 00:14:39.948942 sh[596]: Success May 16 00:14:39.962409 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 16 00:14:39.994243 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 00:14:40.015805 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 00:14:40.019183 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 16 00:14:40.033021 kernel: BTRFS info (device dm-0): first mount of filesystem 7e35ecc6-4b22-44da-ae37-cf2eabf14492 May 16 00:14:40.033047 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 00:14:40.033058 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 00:14:40.034034 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 00:14:40.034768 kernel: BTRFS info (device dm-0): using free space tree May 16 00:14:40.039337 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 00:14:40.041554 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 00:14:40.050518 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 00:14:40.052594 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 00:14:40.069455 kernel: BTRFS info (device vda6): first mount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1 May 16 00:14:40.069490 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:14:40.069501 kernel: BTRFS info (device vda6): using free space tree May 16 00:14:40.072418 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:14:40.076407 kernel: BTRFS info (device vda6): last unmount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1 May 16 00:14:40.082574 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 00:14:40.089647 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 16 00:14:40.136014 ignition[683]: Ignition 2.20.0 May 16 00:14:40.136025 ignition[683]: Stage: fetch-offline May 16 00:14:40.136061 ignition[683]: no configs at "/usr/lib/ignition/base.d" May 16 00:14:40.136071 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:14:40.136162 ignition[683]: parsed url from cmdline: "" May 16 00:14:40.136166 ignition[683]: no config URL provided May 16 00:14:40.136171 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:14:40.136180 ignition[683]: no config at "/usr/lib/ignition/user.ign" May 16 00:14:40.136205 ignition[683]: op(1): [started] loading QEMU firmware config module May 16 00:14:40.136210 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 00:14:40.147077 ignition[683]: op(1): [finished] loading QEMU firmware config module May 16 00:14:40.172935 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:14:40.187314 ignition[683]: parsing config with SHA512: fec27fc98df72205d096f2ac5f0bcf8a5e3e4d067406a993d637179e96913ac91effa8706f7bb2416380b1adbbfbc3ed840f5bd627261457d903007201fc3541 May 16 00:14:40.188549 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:14:40.194369 unknown[683]: fetched base config from "system" May 16 00:14:40.194381 unknown[683]: fetched user config from "qemu" May 16 00:14:40.196266 ignition[683]: fetch-offline: fetch-offline passed May 16 00:14:40.197127 ignition[683]: Ignition finished successfully May 16 00:14:40.199567 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 16 00:14:40.219208 systemd-networkd[781]: lo: Link UP May 16 00:14:40.219218 systemd-networkd[781]: lo: Gained carrier May 16 00:14:40.220911 systemd-networkd[781]: Enumeration completed May 16 00:14:40.221265 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:14:40.221270 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:14:40.221987 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:14:40.222251 systemd-networkd[781]: eth0: Link UP May 16 00:14:40.222255 systemd-networkd[781]: eth0: Gained carrier May 16 00:14:40.222261 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:14:40.225309 systemd[1]: Reached target network.target - Network. May 16 00:14:40.232237 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:14:40.241523 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 00:14:40.250436 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:14:40.256783 ignition[785]: Ignition 2.20.0 May 16 00:14:40.256793 ignition[785]: Stage: kargs May 16 00:14:40.256948 ignition[785]: no configs at "/usr/lib/ignition/base.d" May 16 00:14:40.256959 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:14:40.258971 ignition[785]: kargs: kargs passed May 16 00:14:40.259014 ignition[785]: Ignition finished successfully May 16 00:14:40.263885 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 00:14:40.277546 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 16 00:14:40.291007 ignition[795]: Ignition 2.20.0 May 16 00:14:40.291019 ignition[795]: Stage: disks May 16 00:14:40.291178 ignition[795]: no configs at "/usr/lib/ignition/base.d" May 16 00:14:40.291190 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:14:40.292165 ignition[795]: disks: disks passed May 16 00:14:40.292204 ignition[795]: Ignition finished successfully May 16 00:14:40.295557 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 00:14:40.297238 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 00:14:40.299126 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 00:14:40.299185 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:14:40.299534 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:14:40.299851 systemd[1]: Reached target basic.target - Basic System. May 16 00:14:40.316501 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 00:14:40.328980 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 16 00:14:40.334950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 00:14:40.347473 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 00:14:40.430478 kernel: EXT4-fs (vda9): mounted filesystem 14ea3086-9247-48be-9c0b-44ef9d324f10 r/w with ordered data mode. Quota mode: none. May 16 00:14:40.430694 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 00:14:40.432792 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 00:14:40.443444 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:14:40.445996 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 00:14:40.448604 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 16 00:14:40.455967 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814) May 16 00:14:40.455990 kernel: BTRFS info (device vda6): first mount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1 May 16 00:14:40.456001 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:14:40.456012 kernel: BTRFS info (device vda6): using free space tree May 16 00:14:40.456023 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:14:40.448652 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:14:40.448675 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:14:40.468513 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 00:14:40.473136 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 00:14:40.474039 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 00:14:40.508068 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:14:40.513001 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory May 16 00:14:40.517780 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:14:40.521668 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:14:40.601279 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 00:14:40.610483 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 00:14:40.612026 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 00:14:40.618423 kernel: BTRFS info (device vda6): last unmount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1 May 16 00:14:40.633787 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 16 00:14:40.635709 ignition[926]: INFO : Ignition 2.20.0 May 16 00:14:40.635709 ignition[926]: INFO : Stage: mount May 16 00:14:40.635709 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:14:40.635709 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:14:40.639292 ignition[926]: INFO : mount: mount passed May 16 00:14:40.639292 ignition[926]: INFO : Ignition finished successfully May 16 00:14:40.640792 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 00:14:40.651467 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 00:14:41.032474 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 00:14:41.045598 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:14:41.052410 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) May 16 00:14:41.054454 kernel: BTRFS info (device vda6): first mount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1 May 16 00:14:41.054496 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:14:41.054508 kernel: BTRFS info (device vda6): using free space tree May 16 00:14:41.057407 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:14:41.058776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 00:14:41.076466 ignition[957]: INFO : Ignition 2.20.0 May 16 00:14:41.076466 ignition[957]: INFO : Stage: files May 16 00:14:41.078218 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:14:41.078218 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:14:41.078218 ignition[957]: DEBUG : files: compiled without relabeling support, skipping May 16 00:14:41.081792 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:14:41.081792 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:14:41.086018 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:14:41.087474 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:14:41.088910 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:14:41.087896 unknown[957]: wrote ssh authorized keys file for user: core May 16 00:14:41.091444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 16 00:14:41.091444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 16 00:14:41.128624 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:14:41.402084 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 16 00:14:41.402084 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 00:14:41.406700 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 16 00:14:41.456508 systemd-networkd[781]: eth0: Gained IPv6LL May 16 00:14:41.737415 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 00:14:41.840327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:14:41.842195 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 16 00:14:42.447317 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 00:14:42.839610 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:14:42.839610 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 00:14:42.843375 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:14:42.843375 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:14:42.843375 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 16 00:14:42.843375 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 16 00:14:42.843375 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 00:14:42.843375 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:14:42.843375 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 16 00:14:42.843375 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:14:42.869809 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:14:42.873835 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:14:42.875779 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:14:42.875779 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 16 00:14:42.875779 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:14:42.875779 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:14:42.875779 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:14:42.875779 ignition[957]: INFO : files: files passed May 16 00:14:42.875779 ignition[957]: INFO : Ignition finished successfully May 16 00:14:42.888479 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 00:14:42.897624 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 00:14:42.900912 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 00:14:42.902713 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:14:42.902837 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 00:14:42.927788 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory May 16 00:14:42.932264 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:14:42.932264 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 00:14:42.936350 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:14:42.940114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:14:42.943580 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 00:14:42.955700 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 00:14:42.980617 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:14:42.981902 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 00:14:42.985831 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 00:14:42.988315 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 00:14:42.990863 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 00:14:42.993712 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 00:14:43.011431 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:14:43.025648 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 00:14:43.039121 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 00:14:43.041623 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:14:43.044072 systemd[1]: Stopped target timers.target - Timer Units. 
May 16 00:14:43.046010 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:14:43.047080 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:14:43.049699 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 00:14:43.051845 systemd[1]: Stopped target basic.target - Basic System. May 16 00:14:43.053771 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 00:14:43.056018 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:14:43.058447 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 00:14:43.060736 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 00:14:43.062865 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:14:43.065519 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 00:14:43.068076 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 00:14:43.070608 systemd[1]: Stopped target swap.target - Swaps. May 16 00:14:43.072609 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:14:43.073880 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 00:14:43.076691 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 00:14:43.079410 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:14:43.082271 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 00:14:43.083422 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:14:43.086610 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:14:43.087919 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 00:14:43.090331 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 16 00:14:43.091429 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:14:43.093922 systemd[1]: Stopped target paths.target - Path Units. May 16 00:14:43.095697 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:14:43.099457 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:14:43.102191 systemd[1]: Stopped target slices.target - Slice Units. May 16 00:14:43.104037 systemd[1]: Stopped target sockets.target - Socket Units. May 16 00:14:43.105931 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:14:43.106821 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:14:43.109091 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:14:43.110037 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:14:43.112137 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:14:43.113335 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:14:43.116164 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:14:43.117212 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 00:14:43.132538 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 00:14:43.134428 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:14:43.135482 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:14:43.138824 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 00:14:43.139751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 16 00:14:43.146763 ignition[1012]: INFO : Ignition 2.20.0 May 16 00:14:43.146763 ignition[1012]: INFO : Stage: umount May 16 00:14:43.146763 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:14:43.146763 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:14:43.146763 ignition[1012]: INFO : umount: umount passed May 16 00:14:43.146763 ignition[1012]: INFO : Ignition finished successfully May 16 00:14:43.139889 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:14:43.141549 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:14:43.141662 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:14:43.145887 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:14:43.145999 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 00:14:43.148113 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:14:43.148217 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 00:14:43.150729 systemd[1]: Stopped target network.target - Network. May 16 00:14:43.153425 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:14:43.153477 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 00:14:43.156011 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:14:43.156060 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 00:14:43.158076 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:14:43.158123 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 00:14:43.159091 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 00:14:43.159135 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 00:14:43.159533 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
May 16 00:14:43.160069 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 00:14:43.177661 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:14:43.185500 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 00:14:43.189400 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 00:14:43.189704 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:14:43.189827 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 00:14:43.193755 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 00:14:43.194995 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:14:43.195052 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 00:14:43.209464 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 00:14:43.210445 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:14:43.210500 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:14:43.211930 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:14:43.211978 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:14:43.213221 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:14:43.213268 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 00:14:43.215527 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 00:14:43.215577 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:14:43.220140 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:14:43.223906 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 16 00:14:43.224010 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:14:43.224066 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 00:14:43.232775 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:14:43.232898 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 00:14:43.243181 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:14:43.243363 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:14:43.244491 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:14:43.244548 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 00:14:43.246864 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:14:43.246913 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:14:43.247140 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:14:43.247192 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 00:14:43.248018 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:14:43.248065 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 00:14:43.255143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:14:43.255199 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:14:43.265635 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 00:14:43.267778 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:14:43.267859 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:14:43.271311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 16 00:14:43.271370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:14:43.275335 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 00:14:43.275422 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 00:14:43.275790 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:14:43.275899 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 00:14:43.464762 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:14:43.464910 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 00:14:43.466123 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 00:14:43.468637 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:14:43.468692 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 00:14:43.481506 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 00:14:43.490118 systemd[1]: Switching root. May 16 00:14:43.526897 systemd-journald[193]: Journal stopped May 16 00:14:44.907031 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
May 16 00:14:44.907103 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:14:44.907117 kernel: SELinux: policy capability open_perms=1 May 16 00:14:44.907132 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:14:44.907143 kernel: SELinux: policy capability always_check_network=0 May 16 00:14:44.907154 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:14:44.907166 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:14:44.907178 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:14:44.907189 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:14:44.907206 kernel: audit: type=1403 audit(1747354484.114:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:14:44.907219 systemd[1]: Successfully loaded SELinux policy in 44.527ms. May 16 00:14:44.907257 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.478ms. May 16 00:14:44.907273 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 00:14:44.907285 systemd[1]: Detected virtualization kvm. May 16 00:14:44.907298 systemd[1]: Detected architecture x86-64. May 16 00:14:44.907310 systemd[1]: Detected first boot. May 16 00:14:44.907322 systemd[1]: Initializing machine ID from VM UUID. May 16 00:14:44.907334 zram_generator::config[1059]: No configuration found. 
May 16 00:14:44.907353 kernel: Guest personality initialized and is inactive May 16 00:14:44.907366 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 16 00:14:44.907379 kernel: Initialized host personality May 16 00:14:44.907490 kernel: NET: Registered PF_VSOCK protocol family May 16 00:14:44.907503 systemd[1]: Populated /etc with preset unit settings. May 16 00:14:44.907518 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 16 00:14:44.907537 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 00:14:44.907549 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 00:14:44.907561 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 00:14:44.907573 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 00:14:44.907585 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 00:14:44.907601 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 00:14:44.907613 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 00:14:44.907626 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 00:14:44.907638 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 00:14:44.907651 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 00:14:44.907663 systemd[1]: Created slice user.slice - User and Session Slice. May 16 00:14:44.907675 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:14:44.907688 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:14:44.907700 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 16 00:14:44.907715 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 00:14:44.907727 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 00:14:44.907741 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:14:44.907754 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 00:14:44.907766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:14:44.907778 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 00:14:44.907790 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 00:14:44.907805 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 00:14:44.907817 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 00:14:44.907830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:14:44.907842 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:14:44.907854 systemd[1]: Reached target slices.target - Slice Units. May 16 00:14:44.907866 systemd[1]: Reached target swap.target - Swaps. May 16 00:14:44.907878 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 00:14:44.907890 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 00:14:44.907902 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 00:14:44.907917 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:14:44.907929 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:14:44.907941 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 16 00:14:44.907953 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 00:14:44.907965 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 00:14:44.907977 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 00:14:44.907989 systemd[1]: Mounting media.mount - External Media Directory... May 16 00:14:44.908003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:14:44.908015 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 00:14:44.908030 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 00:14:44.908042 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 00:14:44.908055 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:14:44.908067 systemd[1]: Reached target machines.target - Containers. May 16 00:14:44.908079 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 00:14:44.908092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:14:44.908104 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:14:44.908116 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 00:14:44.908130 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:14:44.908143 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:14:44.908155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:14:44.908167 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 16 00:14:44.908179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:14:44.908192 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:14:44.908205 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 00:14:44.908216 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 00:14:44.908228 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 00:14:44.908243 systemd[1]: Stopped systemd-fsck-usr.service. May 16 00:14:44.908256 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 00:14:44.908269 kernel: fuse: init (API version 7.39) May 16 00:14:44.908280 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:14:44.908292 kernel: loop: module loaded May 16 00:14:44.908304 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:14:44.908316 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 00:14:44.908328 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 00:14:44.908344 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 00:14:44.908356 kernel: ACPI: bus type drm_connector registered May 16 00:14:44.908397 systemd-journald[1130]: Collecting audit messages is disabled. May 16 00:14:44.908421 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:14:44.908437 systemd[1]: verity-setup.service: Deactivated successfully. 
May 16 00:14:44.908449 systemd-journald[1130]: Journal started May 16 00:14:44.908471 systemd-journald[1130]: Runtime Journal (/run/log/journal/a10bb9433e0f4a5fad8956b68ab7aec1) is 6M, max 48.2M, 42.2M free. May 16 00:14:44.669014 systemd[1]: Queued start job for default target multi-user.target. May 16 00:14:44.686652 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 16 00:14:44.687176 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 00:14:44.909943 systemd[1]: Stopped verity-setup.service. May 16 00:14:44.912419 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:14:44.917619 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:14:44.918444 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 00:14:44.919681 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 00:14:44.920921 systemd[1]: Mounted media.mount - External Media Directory. May 16 00:14:44.922052 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 00:14:44.923315 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 00:14:44.924575 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 00:14:44.925898 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 00:14:44.927436 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:14:44.929084 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:14:44.929311 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 00:14:44.930981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:14:44.931209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 16 00:14:44.932679 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:14:44.932897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:14:44.934431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:14:44.934654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:14:44.936305 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:14:44.936549 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 00:14:44.938144 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:14:44.938355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:14:44.939842 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:14:44.941439 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 00:14:44.943150 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 00:14:44.944760 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 16 00:14:44.960208 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 00:14:44.969505 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 00:14:44.972036 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 00:14:44.973275 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:14:44.973366 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:14:44.975602 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 16 00:14:44.978183 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
May 16 00:14:44.980749 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 00:14:44.982710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:14:44.985959 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 00:14:44.989106 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 00:14:44.991285 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:14:44.992721 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 00:14:44.994051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:14:45.002306 systemd-journald[1130]: Time spent on flushing to /var/log/journal/a10bb9433e0f4a5fad8956b68ab7aec1 is 16.119ms for 1057 entries.
May 16 00:14:45.002306 systemd-journald[1130]: System Journal (/var/log/journal/a10bb9433e0f4a5fad8956b68ab7aec1) is 8M, max 195.6M, 187.6M free.
May 16 00:14:45.029186 systemd-journald[1130]: Received client request to flush runtime journal.
May 16 00:14:44.999451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 00:14:45.004124 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 00:14:45.007752 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 00:14:45.012731 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 00:14:45.014716 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 00:14:45.016613 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 00:14:45.018381 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 00:14:45.020166 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 00:14:45.025642 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 00:14:45.044690 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 00:14:45.051833 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 16 00:14:45.056481 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 00:14:45.059309 kernel: loop0: detected capacity change from 0 to 224512
May 16 00:14:45.060964 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 00:14:45.070173 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 00:14:45.072264 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 16 00:14:45.079437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 00:14:45.082669 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 00:14:45.092567 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 00:14:45.107431 kernel: loop1: detected capacity change from 0 to 147912
May 16 00:14:45.118745 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
May 16 00:14:45.118765 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
May 16 00:14:45.125620 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 00:14:45.157417 kernel: loop2: detected capacity change from 0 to 138176
May 16 00:14:45.188475 kernel: loop3: detected capacity change from 0 to 224512
May 16 00:14:45.198417 kernel: loop4: detected capacity change from 0 to 147912
May 16 00:14:45.214411 kernel: loop5: detected capacity change from 0 to 138176
May 16 00:14:45.227522 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 00:14:45.228853 (sd-merge)[1203]: Merged extensions into '/usr'.
May 16 00:14:45.233087 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 00:14:45.233100 systemd[1]: Reloading...
May 16 00:14:45.298408 zram_generator::config[1233]: No configuration found.
May 16 00:14:45.302488 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 00:14:45.423156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:14:45.498483 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 00:14:45.499196 systemd[1]: Reloading finished in 265 ms.
May 16 00:14:45.520284 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 00:14:45.522117 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 00:14:45.536016 systemd[1]: Starting ensure-sysext.service...
May 16 00:14:45.538139 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 00:14:45.550912 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
May 16 00:14:45.550927 systemd[1]: Reloading...
May 16 00:14:45.568028 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 00:14:45.568312 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 00:14:45.569264 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 00:14:45.569571 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
May 16 00:14:45.569655 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
May 16 00:14:45.574007 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
May 16 00:14:45.574020 systemd-tmpfiles[1269]: Skipping /boot
May 16 00:14:45.592480 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
May 16 00:14:45.592663 systemd-tmpfiles[1269]: Skipping /boot
May 16 00:14:45.608595 zram_generator::config[1298]: No configuration found.
May 16 00:14:45.749582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:14:45.829039 systemd[1]: Reloading finished in 277 ms.
May 16 00:14:45.845454 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 00:14:45.874257 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 00:14:45.894721 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:14:45.897427 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 00:14:45.899801 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 00:14:45.905705 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 00:14:45.908444 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 00:14:45.911407 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 00:14:45.915790 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:14:45.915961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:14:45.917243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:14:45.920099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:14:45.923619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:14:45.926624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:14:45.926785 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 00:14:45.931308 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 00:14:45.932358 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:14:45.933745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:14:45.933968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:14:45.935969 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:14:45.936184 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:14:45.940559 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 00:14:45.942376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:14:45.942630 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:14:45.950025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:14:45.950573 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:14:45.959734 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 00:14:45.962863 systemd-udevd[1347]: Using default interface naming scheme 'v255'.
May 16 00:14:45.963686 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:14:45.963898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:14:45.974679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:14:45.977716 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:14:45.982685 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:14:45.983781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:14:45.983897 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 00:14:45.983997 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:14:45.985354 augenrules[1373]: No rules
May 16 00:14:45.985126 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 00:14:45.988140 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:14:45.988472 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:14:45.991002 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 00:14:45.994714 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 00:14:45.996885 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 00:14:45.998686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:14:45.998921 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:14:46.000542 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 00:14:46.002603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:14:46.002823 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:14:46.005926 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:14:46.006136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:14:46.025150 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:14:46.031950 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:14:46.033669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:14:46.038328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:14:46.041988 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 00:14:46.044592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:14:46.049802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:14:46.050995 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:14:46.051034 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 00:14:46.061431 augenrules[1406]: /sbin/augenrules: No change
May 16 00:14:46.054614 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 00:14:46.055700 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:14:46.055725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:14:46.058174 systemd[1]: Finished ensure-sysext.service.
May 16 00:14:46.059468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:14:46.059679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:14:46.061362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:14:46.061755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:14:46.065167 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:14:46.065409 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 00:14:46.066959 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:14:46.067187 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:14:46.073568 augenrules[1433]: No rules
May 16 00:14:46.075453 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:14:46.075731 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:14:46.081147 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:14:46.081646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:14:46.084466 systemd-resolved[1343]: Positive Trust Anchors:
May 16 00:14:46.084488 systemd-resolved[1343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:14:46.084524 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 00:14:46.089939 systemd-resolved[1343]: Defaulting to hostname 'linux'.
May 16 00:14:46.091344 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 00:14:46.101410 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1402)
May 16 00:14:46.119099 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 00:14:46.120511 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 00:14:46.137416 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 16 00:14:46.135152 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 00:14:46.149085 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 00:14:46.149302 systemd-networkd[1424]: lo: Link UP
May 16 00:14:46.149632 kernel: ACPI: button: Power Button [PWRF]
May 16 00:14:46.149318 systemd-networkd[1424]: lo: Gained carrier
May 16 00:14:46.151101 systemd-networkd[1424]: Enumeration completed
May 16 00:14:46.151526 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:14:46.151537 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:14:46.152120 systemd-networkd[1424]: eth0: Link UP
May 16 00:14:46.152124 systemd-networkd[1424]: eth0: Gained carrier
May 16 00:14:46.152137 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:14:46.157581 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 00:14:46.159073 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 00:14:46.160512 systemd[1]: Reached target network.target - Network.
May 16 00:14:46.165540 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 00:14:46.166528 systemd-networkd[1424]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:14:46.170827 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 00:14:46.171413 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 16 00:14:46.184433 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 16 00:14:46.184740 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 16 00:14:46.184923 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 16 00:14:46.185115 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 16 00:14:46.185824 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 00:14:46.191672 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 00:14:46.211847 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 00:14:47.082688 systemd-resolved[1343]: Clock change detected. Flushing caches.
May 16 00:14:47.082825 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 00:14:47.083103 systemd-timesyncd[1446]: Initial clock synchronization to Fri 2025-05-16 00:14:47.082638 UTC.
May 16 00:14:47.083363 systemd[1]: Reached target time-set.target - System Time Set.
May 16 00:14:47.100569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:14:47.139138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 00:14:47.139827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:14:47.150305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:14:47.156436 kernel: mousedev: PS/2 mouse device common for all mice
May 16 00:14:47.169302 kernel: kvm_amd: TSC scaling supported
May 16 00:14:47.169350 kernel: kvm_amd: Nested Virtualization enabled
May 16 00:14:47.169364 kernel: kvm_amd: Nested Paging enabled
May 16 00:14:47.169376 kernel: kvm_amd: LBR virtualization supported
May 16 00:14:47.170397 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 16 00:14:47.170438 kernel: kvm_amd: Virtual GIF supported
May 16 00:14:47.196255 kernel: EDAC MC: Ver: 3.0.0
May 16 00:14:47.218798 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:14:47.229502 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 16 00:14:47.238441 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 16 00:14:47.247181 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:14:47.283683 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 16 00:14:47.285515 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 00:14:47.286715 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 00:14:47.287984 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 00:14:47.289408 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 00:14:47.291130 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 00:14:47.292500 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 00:14:47.293914 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 00:14:47.295408 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 00:14:47.295449 systemd[1]: Reached target paths.target - Path Units.
May 16 00:14:47.296575 systemd[1]: Reached target timers.target - Timer Units.
May 16 00:14:47.298848 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 00:14:47.302028 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 00:14:47.305938 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 00:14:47.307538 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 00:14:47.308850 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 00:14:47.312872 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 00:14:47.314365 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 00:14:47.316783 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 16 00:14:47.318505 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 00:14:47.319685 systemd[1]: Reached target sockets.target - Socket Units.
May 16 00:14:47.320676 systemd[1]: Reached target basic.target - Basic System.
May 16 00:14:47.321723 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 00:14:47.321753 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 00:14:47.322733 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 00:14:47.324849 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 00:14:47.327310 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:14:47.329338 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 00:14:47.332355 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 00:14:47.333534 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 00:14:47.335296 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 00:14:47.335368 jq[1482]: false
May 16 00:14:47.341153 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 00:14:47.344420 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 00:14:47.347441 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 00:14:47.354515 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 00:14:47.355041 dbus-daemon[1481]: [system] SELinux support is enabled
May 16 00:14:47.356363 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 00:14:47.356884 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 00:14:47.358409 systemd[1]: Starting update-engine.service - Update Engine...
May 16 00:14:47.361374 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 00:14:47.361773 extend-filesystems[1483]: Found loop3
May 16 00:14:47.361773 extend-filesystems[1483]: Found loop4
May 16 00:14:47.361773 extend-filesystems[1483]: Found loop5
May 16 00:14:47.361773 extend-filesystems[1483]: Found sr0
May 16 00:14:47.361773 extend-filesystems[1483]: Found vda
May 16 00:14:47.361773 extend-filesystems[1483]: Found vda1
May 16 00:14:47.361773 extend-filesystems[1483]: Found vda2
May 16 00:14:47.361773 extend-filesystems[1483]: Found vda3
May 16 00:14:47.370034 extend-filesystems[1483]: Found usr
May 16 00:14:47.370034 extend-filesystems[1483]: Found vda4
May 16 00:14:47.370034 extend-filesystems[1483]: Found vda6
May 16 00:14:47.370034 extend-filesystems[1483]: Found vda7
May 16 00:14:47.370034 extend-filesystems[1483]: Found vda9
May 16 00:14:47.370034 extend-filesystems[1483]: Checking size of /dev/vda9
May 16 00:14:47.365449 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 00:14:47.369867 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 16 00:14:47.371960 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 00:14:47.372321 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 00:14:47.372738 systemd[1]: motdgen.service: Deactivated successfully.
May 16 00:14:47.373024 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 00:14:47.378648 jq[1496]: true
May 16 00:14:47.377849 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 00:14:47.378131 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 00:14:47.399148 update_engine[1495]: I20250516 00:14:47.397042  1495 main.cc:92] Flatcar Update Engine starting
May 16 00:14:47.399040 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 00:14:47.399064 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 00:14:47.400416 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 00:14:47.400431 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 00:14:47.402658 extend-filesystems[1483]: Resized partition /dev/vda9
May 16 00:14:47.403642 jq[1504]: true
May 16 00:14:47.407597 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 00:14:47.409156 extend-filesystems[1518]: resize2fs 1.47.1 (20-May-2024)
May 16 00:14:47.411112 update_engine[1495]: I20250516 00:14:47.410811  1495 update_check_scheduler.cc:74] Next update check in 10m48s
May 16 00:14:47.410645 systemd[1]: Started update-engine.service - Update Engine.
May 16 00:14:47.415671 tar[1503]: linux-amd64/LICENSE
May 16 00:14:47.415671 tar[1503]: linux-amd64/helm
May 16 00:14:47.416464 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 00:14:47.419236 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 00:14:47.426234 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1405)
May 16 00:14:47.440857 systemd-logind[1494]: Watching system buttons on /dev/input/event1 (Power Button)
May 16 00:14:47.440889 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 16 00:14:47.441114 systemd-logind[1494]: New seat seat0.
May 16 00:14:47.441881 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 00:14:47.464241 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 00:14:47.495073 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 00:14:47.495073 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 00:14:47.495073 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 00:14:47.494527 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 00:14:47.503006 extend-filesystems[1483]: Resized filesystem in /dev/vda9
May 16 00:14:47.494775 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 00:14:47.504413 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
May 16 00:14:47.505756 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 00:14:47.508932 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 00:14:47.511458 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 00:14:47.615728 containerd[1506]: time="2025-05-16T00:14:47.615650143Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 16 00:14:47.645172 containerd[1506]: time="2025-05-16T00:14:47.644847579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 16 00:14:47.646848 containerd[1506]: time="2025-05-16T00:14:47.646801784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 16 00:14:47.646848 containerd[1506]: time="2025-05-16T00:14:47.646843222Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 16 00:14:47.646919 containerd[1506]: time="2025-05-16T00:14:47.646863219Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 16 00:14:47.647065 containerd[1506]: time="2025-05-16T00:14:47.647046263Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 16 00:14:47.647094 containerd[1506]: time="2025-05-16T00:14:47.647066411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 16 00:14:47.647199 containerd[1506]: time="2025-05-16T00:14:47.647135220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:14:47.647199 containerd[1506]: time="2025-05-16T00:14:47.647153925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 16 00:14:47.647678 containerd[1506]: time="2025-05-16T00:14:47.647419583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:14:47.647678 containerd[1506]: time="2025-05-16T00:14:47.647438509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 16 00:14:47.647678 containerd[1506]: time="2025-05-16T00:14:47.647451533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:14:47.647678 containerd[1506]: time="2025-05-16T00:14:47.647461732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 16 00:14:47.647678 containerd[1506]: time="2025-05-16T00:14:47.647553003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 16 00:14:47.647812 containerd[1506]: time="2025-05-16T00:14:47.647786040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 16 00:14:47.647966 containerd[1506]: time="2025-05-16T00:14:47.647938316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:14:47.647966 containerd[1506]: time="2025-05-16T00:14:47.647955047Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 16 00:14:47.648100 containerd[1506]: time="2025-05-16T00:14:47.648045918Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 16 00:14:47.648144 containerd[1506]: time="2025-05-16T00:14:47.648103756Z" level=info msg="metadata content store policy set" policy=shared
May 16 00:14:47.654122 containerd[1506]: time="2025-05-16T00:14:47.654093221Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 16 00:14:47.654186 containerd[1506]: time="2025-05-16T00:14:47.654136051Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 16 00:14:47.654186 containerd[1506]: time="2025-05-16T00:14:47.654152212Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 16 00:14:47.654186 containerd[1506]: time="2025-05-16T00:14:47.654169484Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 16 00:14:47.654186 containerd[1506]: time="2025-05-16T00:14:47.654184222Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 16 00:14:47.654406 containerd[1506]: time="2025-05-16T00:14:47.654334303Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.656705521Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.656915014Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.656930924Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.656946292Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.656961471Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.656985145Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.656997799Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.657011455Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.657025050Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.657038105Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.657053153Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.657064184Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.657084051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657465 containerd[1506]: time="2025-05-16T00:14:47.657097075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657113867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657126891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657138583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657152058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657164392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657176384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657188507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657203515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657229844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657242107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657263898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657279187Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657299786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657313752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 16 00:14:47.657746 containerd[1506]: time="2025-05-16T00:14:47.657324021Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 16 00:14:47.658013 containerd[1506]: time="2025-05-16T00:14:47.657378323Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 16 00:14:47.658013 containerd[1506]: time="2025-05-16T00:14:47.657396537Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 16 00:14:47.658013 containerd[1506]: time="2025-05-16T00:14:47.657406686Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 16 00:14:47.658013 containerd[1506]: time="2025-05-16T00:14:47.657418117Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 16 00:14:47.658013 containerd[1506]: time="2025-05-16T00:14:47.657427736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 16 00:14:47.658013 containerd[1506]: time="2025-05-16T00:14:47.657440019Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 16 00:14:47.658141 containerd[1506]: time="2025-05-16T00:14:47.657451580Z" level=info msg="NRI interface is disabled by configuration."
May 16 00:14:47.658190 containerd[1506]: time="2025-05-16T00:14:47.658177662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 16 00:14:47.658558 containerd[1506]: time="2025-05-16T00:14:47.658518862Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 16 00:14:47.658725 containerd[1506]: time="2025-05-16T00:14:47.658713197Z" level=info msg="Connect containerd service"
May 16 00:14:47.658802 containerd[1506]: time="2025-05-16T00:14:47.658789360Z" level=info msg="using legacy CRI server"
May 16 00:14:47.658844 containerd[1506]: time="2025-05-16T00:14:47.658834264Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 16 00:14:47.658980 containerd[1506]: time="2025-05-16T00:14:47.658965810Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 16 00:14:47.659676 containerd[1506]: time="2025-05-16T00:14:47.659654963Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 00:14:47.659863 containerd[1506]: time="2025-05-16T00:14:47.659836263Z" level=info msg="Start subscribing containerd event"
May 16 00:14:47.659936 containerd[1506]: time="2025-05-16T00:14:47.659925000Z" level=info msg="Start recovering state"
May 16 00:14:47.660154 containerd[1506]: time="2025-05-16T00:14:47.660140414Z" level=info msg="Start event monitor"
May 16 00:14:47.660211 containerd[1506]: time="2025-05-16T00:14:47.660200456Z" level=info msg="Start snapshots syncer"
May 16 00:14:47.660292 containerd[1506]: time="2025-05-16T00:14:47.660279996Z" level=info msg="Start cni network conf syncer for default"
May 16 00:14:47.660364 containerd[1506]: time="2025-05-16T00:14:47.660326132Z" level=info msg="Start streaming server"
May 16 00:14:47.660747 containerd[1506]: time="2025-05-16T00:14:47.660705664Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 16 00:14:47.660855 containerd[1506]: time="2025-05-16T00:14:47.660827412Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 16 00:14:47.660964 containerd[1506]: time="2025-05-16T00:14:47.660949211Z" level=info msg="containerd successfully booted in 0.046391s"
May 16 00:14:47.661109 systemd[1]: Started containerd.service - containerd container runtime.
May 16 00:14:47.727337 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 00:14:47.752239 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 00:14:47.759464 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 00:14:47.768614 systemd[1]: issuegen.service: Deactivated successfully.
May 16 00:14:47.768904 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 00:14:47.771984 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 00:14:47.788007 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 00:14:47.796492 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 00:14:47.798764 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 16 00:14:47.800125 systemd[1]: Reached target getty.target - Login Prompts.
May 16 00:14:47.871149 tar[1503]: linux-amd64/README.md
May 16 00:14:47.889709 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 00:14:48.534481 systemd-networkd[1424]: eth0: Gained IPv6LL
May 16 00:14:48.537498 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 16 00:14:48.539305 systemd[1]: Reached target network-online.target - Network is Online.
May 16 00:14:48.551517 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 16 00:14:48.554021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:14:48.556227 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 16 00:14:48.574539 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 16 00:14:48.574822 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 16 00:14:48.576608 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 00:14:48.579659 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 16 00:14:49.270064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:14:49.271720 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 00:14:49.273240 systemd[1]: Startup finished in 746ms (kernel) + 6.410s (initrd) + 4.331s (userspace) = 11.487s.
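The `Startup finished` line above breaks boot time into kernel, initrd, and userspace stages, and the stated 11.487s total is simply their sum, which can be verified directly from the logged values:

```python
# Per-stage boot times from the "Startup finished" journal line, in seconds.
kernel, initrd, userspace = 0.746, 6.410, 4.331
total = kernel + initrd + userspace
print(f"Startup finished in {total:.3f}s")  # matches the logged 11.487s
```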
May 16 00:14:49.275864 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 00:14:49.671944 kubelet[1594]: E0516 00:14:49.671813 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:14:49.675925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:14:49.676128 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:14:49.676547 systemd[1]: kubelet.service: Consumed 991ms CPU time, 268.6M memory peak.
May 16 00:14:52.051815 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 00:14:52.053284 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:47222.service - OpenSSH per-connection server daemon (10.0.0.1:47222).
May 16 00:14:52.098808 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 47222 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:14:52.100708 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:14:52.107435 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 00:14:52.117517 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 00:14:52.123772 systemd-logind[1494]: New session 1 of user core.
May 16 00:14:52.129267 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 00:14:52.140531 systemd[1]: Starting user@500.service - User Manager for UID 500...
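The kubelet exit above is the usual first-boot pattern on a node that has not yet been configured: /var/lib/kubelet/config.yaml is normally written later (for example by kubeadm), and systemd keeps restarting the unit until it appears. As an illustration only (the regex below is a hypothetical helper, not part of the log), the missing path can be extracted from such a journal line:

```python
import re

# A journal line of the shape seen above (abbreviated with "...").
line = ('kubelet[1594]: E0516 00:14:49.671813 1594 run.go:72] "command failed" '
        'err="... open /var/lib/kubelet/config.yaml: no such file or directory"')

# Match the trailing open(2) failure reported in the error message.
m = re.search(r'open (\S+): no such file or directory', line)
missing = m.group(1) if m else None
print(missing)  # /var/lib/kubelet/config.yaml
```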
May 16 00:14:52.143446 (systemd)[1611]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 00:14:52.145830 systemd-logind[1494]: New session c1 of user core.
May 16 00:14:52.296814 systemd[1611]: Queued start job for default target default.target.
May 16 00:14:52.316521 systemd[1611]: Created slice app.slice - User Application Slice.
May 16 00:14:52.316544 systemd[1611]: Reached target paths.target - Paths.
May 16 00:14:52.316587 systemd[1611]: Reached target timers.target - Timers.
May 16 00:14:52.318159 systemd[1611]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 00:14:52.329059 systemd[1611]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 00:14:52.329187 systemd[1611]: Reached target sockets.target - Sockets.
May 16 00:14:52.329248 systemd[1611]: Reached target basic.target - Basic System.
May 16 00:14:52.329293 systemd[1611]: Reached target default.target - Main User Target.
May 16 00:14:52.329324 systemd[1611]: Startup finished in 177ms.
May 16 00:14:52.329842 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 00:14:52.331608 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 00:14:52.408572 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:47232.service - OpenSSH per-connection server daemon (10.0.0.1:47232).
May 16 00:14:52.439971 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 47232 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:14:52.441426 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:14:52.445613 systemd-logind[1494]: New session 2 of user core.
May 16 00:14:52.455389 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 00:14:52.508298 sshd[1624]: Connection closed by 10.0.0.1 port 47232
May 16 00:14:52.508638 sshd-session[1622]: pam_unix(sshd:session): session closed for user core
May 16 00:14:52.519082 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:47232.service: Deactivated successfully.
May 16 00:14:52.520935 systemd[1]: session-2.scope: Deactivated successfully.
May 16 00:14:52.522620 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit.
May 16 00:14:52.542530 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:47238.service - OpenSSH per-connection server daemon (10.0.0.1:47238).
May 16 00:14:52.543824 systemd-logind[1494]: Removed session 2.
May 16 00:14:52.578038 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 47238 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:14:52.579785 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:14:52.584421 systemd-logind[1494]: New session 3 of user core.
May 16 00:14:52.598343 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 00:14:52.648405 sshd[1632]: Connection closed by 10.0.0.1 port 47238
May 16 00:14:52.648845 sshd-session[1629]: pam_unix(sshd:session): session closed for user core
May 16 00:14:52.666724 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:47238.service: Deactivated successfully.
May 16 00:14:52.668990 systemd[1]: session-3.scope: Deactivated successfully.
May 16 00:14:52.670986 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit.
May 16 00:14:52.681621 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:47240.service - OpenSSH per-connection server daemon (10.0.0.1:47240).
May 16 00:14:52.682643 systemd-logind[1494]: Removed session 3.
May 16 00:14:52.714832 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 47240 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:14:52.716403 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:14:52.720646 systemd-logind[1494]: New session 4 of user core.
May 16 00:14:52.730357 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 00:14:52.783924 sshd[1640]: Connection closed by 10.0.0.1 port 47240
May 16 00:14:52.784381 sshd-session[1637]: pam_unix(sshd:session): session closed for user core
May 16 00:14:52.810261 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:47240.service: Deactivated successfully.
May 16 00:14:52.812440 systemd[1]: session-4.scope: Deactivated successfully.
May 16 00:14:52.814578 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit.
May 16 00:14:52.829671 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:47246.service - OpenSSH per-connection server daemon (10.0.0.1:47246).
May 16 00:14:52.830969 systemd-logind[1494]: Removed session 4.
May 16 00:14:52.861908 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 47246 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:14:52.863634 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:14:52.868079 systemd-logind[1494]: New session 5 of user core.
May 16 00:14:52.878327 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 00:14:52.936741 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 00:14:52.937072 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:14:52.953457 sudo[1649]: pam_unix(sudo:session): session closed for user root
May 16 00:14:52.954989 sshd[1648]: Connection closed by 10.0.0.1 port 47246
May 16 00:14:52.955401 sshd-session[1645]: pam_unix(sshd:session): session closed for user core
May 16 00:14:52.974627 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:47246.service: Deactivated successfully.
May 16 00:14:52.976888 systemd[1]: session-5.scope: Deactivated successfully.
May 16 00:14:52.978866 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit.
May 16 00:14:52.995535 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:47262.service - OpenSSH per-connection server daemon (10.0.0.1:47262).
May 16 00:14:52.996555 systemd-logind[1494]: Removed session 5.
May 16 00:14:53.026628 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 47262 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:14:53.028182 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:14:53.032667 systemd-logind[1494]: New session 6 of user core.
May 16 00:14:53.046344 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 00:14:53.102180 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 00:14:53.102520 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:14:53.106622 sudo[1659]: pam_unix(sudo:session): session closed for user root
May 16 00:14:53.113122 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 00:14:53.113520 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:14:53.133481 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:14:53.164197 augenrules[1681]: No rules
May 16 00:14:53.165887 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:14:53.166156 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:14:53.167365 sudo[1658]: pam_unix(sudo:session): session closed for user root
May 16 00:14:53.168935 sshd[1657]: Connection closed by 10.0.0.1 port 47262
May 16 00:14:53.169370 sshd-session[1654]: pam_unix(sshd:session): session closed for user core
May 16 00:14:53.182180 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:47262.service: Deactivated successfully.
May 16 00:14:53.184123 systemd[1]: session-6.scope: Deactivated successfully.
May 16 00:14:53.185776 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit.
May 16 00:14:53.201471 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:47266.service - OpenSSH per-connection server daemon (10.0.0.1:47266).
May 16 00:14:53.202577 systemd-logind[1494]: Removed session 6.
May 16 00:14:53.232013 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 47266 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:14:53.233366 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:14:53.237421 systemd-logind[1494]: New session 7 of user core.
May 16 00:14:53.248346 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 00:14:53.301602 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 00:14:53.301913 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:14:53.744607 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 00:14:53.744715 (dockerd)[1713]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 00:14:54.301481 dockerd[1713]: time="2025-05-16T00:14:54.301390621Z" level=info msg="Starting up"
May 16 00:14:54.850808 dockerd[1713]: time="2025-05-16T00:14:54.850728912Z" level=info msg="Loading containers: start."
May 16 00:14:55.049253 kernel: Initializing XFRM netlink socket
May 16 00:14:55.144590 systemd-networkd[1424]: docker0: Link UP
May 16 00:14:55.180749 dockerd[1713]: time="2025-05-16T00:14:55.180692790Z" level=info msg="Loading containers: done."
May 16 00:14:55.239707 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1632021934-merged.mount: Deactivated successfully.
May 16 00:14:55.248613 dockerd[1713]: time="2025-05-16T00:14:55.248544922Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 00:14:55.248769 dockerd[1713]: time="2025-05-16T00:14:55.248695815Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 16 00:14:55.248884 dockerd[1713]: time="2025-05-16T00:14:55.248854302Z" level=info msg="Daemon has completed initialization"
May 16 00:14:55.295555 dockerd[1713]: time="2025-05-16T00:14:55.295466513Z" level=info msg="API listen on /run/docker.sock"
May 16 00:14:55.295669 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 00:14:56.207836 containerd[1506]: time="2025-05-16T00:14:56.207779511Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 16 00:14:56.859036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761225328.mount: Deactivated successfully.
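dockerd's own structured timestamps bracket its initialization: the gap between "Starting up" and "Daemon has completed initialization" can be computed from the two logged times (truncated here to microseconds, with the trailing Z dropped, so `datetime.fromisoformat` accepts them on older Python versions):

```python
from datetime import datetime

# Timestamps taken from the dockerd log lines above.
start = datetime.fromisoformat("2025-05-16T00:14:54.301390")  # "Starting up"
done = datetime.fromisoformat("2025-05-16T00:14:55.248854")   # "Daemon has completed initialization"
elapsed = (done - start).total_seconds()
print(f"dockerd init took {elapsed:.3f}s")  # ~0.947s
```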
May 16 00:14:58.161476 containerd[1506]: time="2025-05-16T00:14:58.161409914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:14:58.167902 containerd[1506]: time="2025-05-16T00:14:58.167849352Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 16 00:14:58.169873 containerd[1506]: time="2025-05-16T00:14:58.169835928Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:14:58.174452 containerd[1506]: time="2025-05-16T00:14:58.174404858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:14:58.175603 containerd[1506]: time="2025-05-16T00:14:58.175561678Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.967721012s"
May 16 00:14:58.175651 containerd[1506]: time="2025-05-16T00:14:58.175602134Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 16 00:14:58.176502 containerd[1506]: time="2025-05-16T00:14:58.176461867Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 16 00:14:59.713278 containerd[1506]: time="2025-05-16T00:14:59.713174270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:14:59.713878 containerd[1506]: time="2025-05-16T00:14:59.713798130Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 16 00:14:59.714971 containerd[1506]: time="2025-05-16T00:14:59.714910787Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:14:59.717576 containerd[1506]: time="2025-05-16T00:14:59.717551360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:14:59.718900 containerd[1506]: time="2025-05-16T00:14:59.718866848Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.542370256s"
May 16 00:14:59.718957 containerd[1506]: time="2025-05-16T00:14:59.718899078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 16 00:14:59.719600 containerd[1506]: time="2025-05-16T00:14:59.719560669Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 16 00:14:59.926530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 00:14:59.936376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:15:00.101513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:15:00.105778 (kubelet)[1979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:15:00.264656 kubelet[1979]: E0516 00:15:00.264541 1979 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:15:00.270835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:15:00.271054 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:15:00.271420 systemd[1]: kubelet.service: Consumed 276ms CPU time, 113.1M memory peak. May 16 00:15:01.891968 containerd[1506]: time="2025-05-16T00:15:01.891900888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:01.892984 containerd[1506]: time="2025-05-16T00:15:01.892935289Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 16 00:15:01.894336 containerd[1506]: time="2025-05-16T00:15:01.894309086Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:01.898247 containerd[1506]: time="2025-05-16T00:15:01.897188687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:01.900121 containerd[1506]: time="2025-05-16T00:15:01.900066094Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id 
\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 2.180448088s" May 16 00:15:01.900175 containerd[1506]: time="2025-05-16T00:15:01.900127760Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 16 00:15:01.900666 containerd[1506]: time="2025-05-16T00:15:01.900645331Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 00:15:02.883694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007757002.mount: Deactivated successfully. May 16 00:15:03.645531 containerd[1506]: time="2025-05-16T00:15:03.645457489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:03.646470 containerd[1506]: time="2025-05-16T00:15:03.646428942Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 16 00:15:03.647987 containerd[1506]: time="2025-05-16T00:15:03.647932522Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:03.650030 containerd[1506]: time="2025-05-16T00:15:03.649993798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:03.650644 containerd[1506]: time="2025-05-16T00:15:03.650591890Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.749919709s" May 16 00:15:03.650644 containerd[1506]: time="2025-05-16T00:15:03.650634149Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 16 00:15:03.651262 containerd[1506]: time="2025-05-16T00:15:03.651230738Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 00:15:04.244726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779596404.mount: Deactivated successfully. May 16 00:15:05.600657 containerd[1506]: time="2025-05-16T00:15:05.600598365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:05.601270 containerd[1506]: time="2025-05-16T00:15:05.601207798Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 16 00:15:05.602322 containerd[1506]: time="2025-05-16T00:15:05.602286361Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:05.604987 containerd[1506]: time="2025-05-16T00:15:05.604934238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:05.606582 containerd[1506]: time="2025-05-16T00:15:05.606418903Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.955155463s" May 16 00:15:05.606582 containerd[1506]: time="2025-05-16T00:15:05.606461472Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 00:15:05.608843 containerd[1506]: time="2025-05-16T00:15:05.608805419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:15:06.046019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92346906.mount: Deactivated successfully. May 16 00:15:06.051208 containerd[1506]: time="2025-05-16T00:15:06.051168566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:06.051942 containerd[1506]: time="2025-05-16T00:15:06.051897854Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 16 00:15:06.053027 containerd[1506]: time="2025-05-16T00:15:06.052990944Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:06.055256 containerd[1506]: time="2025-05-16T00:15:06.055208193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:06.056093 containerd[1506]: time="2025-05-16T00:15:06.056054871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 447.21619ms" May 16 
00:15:06.056145 containerd[1506]: time="2025-05-16T00:15:06.056089797Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 00:15:06.056595 containerd[1506]: time="2025-05-16T00:15:06.056556682Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 16 00:15:07.165999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082667627.mount: Deactivated successfully. May 16 00:15:08.855212 containerd[1506]: time="2025-05-16T00:15:08.855125578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:08.892422 containerd[1506]: time="2025-05-16T00:15:08.892337326Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 16 00:15:08.930388 containerd[1506]: time="2025-05-16T00:15:08.930345849Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:08.996527 containerd[1506]: time="2025-05-16T00:15:08.996494455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:08.997620 containerd[1506]: time="2025-05-16T00:15:08.997592284Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.941005155s" May 16 00:15:08.997620 containerd[1506]: time="2025-05-16T00:15:08.997614757Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image 
reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 16 00:15:10.521546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 00:15:10.534424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:15:10.687542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:15:10.691430 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:15:10.730535 kubelet[2140]: E0516 00:15:10.730416 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:15:10.734617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:15:10.734833 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:15:10.735245 systemd[1]: kubelet.service: Consumed 201ms CPU time, 114.2M memory peak. May 16 00:15:11.687235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:15:11.687397 systemd[1]: kubelet.service: Consumed 201ms CPU time, 114.2M memory peak. May 16 00:15:11.702614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:15:11.729404 systemd[1]: Reload requested from client PID 2155 ('systemctl') (unit session-7.scope)... May 16 00:15:11.729420 systemd[1]: Reloading... May 16 00:15:11.831744 zram_generator::config[2205]: No configuration found. May 16 00:15:13.094990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 16 00:15:13.201406 systemd[1]: Reloading finished in 1471 ms. May 16 00:15:13.245785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:15:13.250535 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:15:13.251080 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:15:13.251396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:15:13.251437 systemd[1]: kubelet.service: Consumed 152ms CPU time, 98.3M memory peak. May 16 00:15:13.253059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:15:13.425094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:15:13.429129 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:15:13.461965 kubelet[2249]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:15:13.461965 kubelet[2249]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:15:13.461965 kubelet[2249]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:15:13.462442 kubelet[2249]: I0516 00:15:13.462006 2249 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:15:15.065778 kubelet[2249]: I0516 00:15:15.065729 2249 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:15:15.065778 kubelet[2249]: I0516 00:15:15.065762 2249 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:15:15.066187 kubelet[2249]: I0516 00:15:15.066024 2249 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:15:15.086379 kubelet[2249]: E0516 00:15:15.086328 2249 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 16 00:15:15.088134 kubelet[2249]: I0516 00:15:15.088098 2249 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:15:15.092930 kubelet[2249]: E0516 00:15:15.092890 2249 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:15:15.092930 kubelet[2249]: I0516 00:15:15.092923 2249 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:15:15.099261 kubelet[2249]: I0516 00:15:15.098105 2249 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:15:15.099261 kubelet[2249]: I0516 00:15:15.098391 2249 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:15:15.099261 kubelet[2249]: I0516 00:15:15.098414 2249 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:15:15.099691 kubelet[2249]: I0516 00:15:15.099668 2249 topology_manager.go:138] "Creating topology manager with none policy" 
May 16 00:15:15.099722 kubelet[2249]: I0516 00:15:15.099703 2249 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:15:15.100181 kubelet[2249]: I0516 00:15:15.099866 2249 state_mem.go:36] "Initialized new in-memory state store" May 16 00:15:15.103300 kubelet[2249]: I0516 00:15:15.103280 2249 kubelet.go:446] "Attempting to sync node with API server" May 16 00:15:15.103345 kubelet[2249]: I0516 00:15:15.103313 2249 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:15:15.103345 kubelet[2249]: I0516 00:15:15.103335 2249 kubelet.go:352] "Adding apiserver pod source" May 16 00:15:15.103404 kubelet[2249]: I0516 00:15:15.103349 2249 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:15:15.106694 kubelet[2249]: I0516 00:15:15.106289 2249 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 00:15:15.106694 kubelet[2249]: W0516 00:15:15.106533 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 16 00:15:15.106694 kubelet[2249]: W0516 00:15:15.106557 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 16 00:15:15.106694 kubelet[2249]: E0516 00:15:15.106612 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 16 
00:15:15.106694 kubelet[2249]: E0516 00:15:15.106631 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 16 00:15:15.106694 kubelet[2249]: I0516 00:15:15.106662 2249 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:15:15.106920 kubelet[2249]: W0516 00:15:15.106718 2249 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 00:15:15.108799 kubelet[2249]: I0516 00:15:15.108773 2249 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:15:15.108839 kubelet[2249]: I0516 00:15:15.108814 2249 server.go:1287] "Started kubelet" May 16 00:15:15.111102 kubelet[2249]: I0516 00:15:15.110979 2249 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:15:15.112160 kubelet[2249]: I0516 00:15:15.111602 2249 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:15:15.112160 kubelet[2249]: I0516 00:15:15.111597 2249 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:15:15.112160 kubelet[2249]: I0516 00:15:15.111843 2249 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:15:15.112160 kubelet[2249]: I0516 00:15:15.111896 2249 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:15:15.112788 kubelet[2249]: I0516 00:15:15.112715 2249 server.go:479] "Adding debug handlers to kubelet server" May 16 00:15:15.115362 kubelet[2249]: E0516 00:15:15.115028 2249 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:15.115362 kubelet[2249]: I0516 00:15:15.115080 2249 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:15:15.115362 kubelet[2249]: I0516 00:15:15.115349 2249 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:15:15.115457 kubelet[2249]: I0516 00:15:15.115391 2249 reconciler.go:26] "Reconciler: start to sync state" May 16 00:15:15.115735 kubelet[2249]: I0516 00:15:15.115708 2249 factory.go:221] Registration of the systemd container factory successfully May 16 00:15:15.115838 kubelet[2249]: W0516 00:15:15.115785 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 16 00:15:15.115838 kubelet[2249]: E0516 00:15:15.115825 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 16 00:15:15.115838 kubelet[2249]: I0516 00:15:15.115835 2249 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:15:15.116990 kubelet[2249]: E0516 00:15:15.116043 2249 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:15:15.116990 kubelet[2249]: E0516 00:15:15.116316 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="200ms" May 16 00:15:15.116990 kubelet[2249]: E0516 00:15:15.115878 2249 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd9ac71bbff45 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:15:15.108790085 +0000 UTC m=+1.676247092,LastTimestamp:2025-05-16 00:15:15.108790085 +0000 UTC m=+1.676247092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:15:15.116990 kubelet[2249]: I0516 00:15:15.116980 2249 factory.go:221] Registration of the containerd container factory successfully May 16 00:15:15.128743 kubelet[2249]: I0516 00:15:15.128686 2249 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:15:15.128813 kubelet[2249]: I0516 00:15:15.128777 2249 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:15:15.128813 kubelet[2249]: I0516 00:15:15.128793 2249 state_mem.go:36] "Initialized new in-memory state store" May 16 00:15:15.129903 kubelet[2249]: I0516 00:15:15.129763 2249 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 16 00:15:15.131049 kubelet[2249]: I0516 00:15:15.131026 2249 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:15:15.131049 kubelet[2249]: I0516 00:15:15.131050 2249 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:15:15.131136 kubelet[2249]: I0516 00:15:15.131092 2249 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 00:15:15.131136 kubelet[2249]: I0516 00:15:15.131101 2249 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:15:15.131590 kubelet[2249]: W0516 00:15:15.131522 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 16 00:15:15.131590 kubelet[2249]: E0516 00:15:15.131560 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 16 00:15:15.131869 kubelet[2249]: E0516 00:15:15.131827 2249 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:15:15.132379 kubelet[2249]: I0516 00:15:15.132200 2249 policy_none.go:49] "None policy: Start" May 16 00:15:15.132379 kubelet[2249]: I0516 00:15:15.132231 2249 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:15:15.132379 kubelet[2249]: I0516 00:15:15.132242 2249 state_mem.go:35] "Initializing new in-memory state store" May 16 00:15:15.138696 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 16 00:15:15.159302 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 00:15:15.169978 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 00:15:15.171023 kubelet[2249]: I0516 00:15:15.171003 2249 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:15:15.171236 kubelet[2249]: I0516 00:15:15.171206 2249 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:15:15.171320 kubelet[2249]: I0516 00:15:15.171235 2249 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:15:15.171472 kubelet[2249]: I0516 00:15:15.171450 2249 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:15:15.172180 kubelet[2249]: E0516 00:15:15.172147 2249 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 00:15:15.172237 kubelet[2249]: E0516 00:15:15.172182 2249 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:15:15.239208 systemd[1]: Created slice kubepods-burstable-poddf07c76b5a3a27ecfc806e1c65ec41c4.slice - libcontainer container kubepods-burstable-poddf07c76b5a3a27ecfc806e1c65ec41c4.slice. May 16 00:15:15.260630 kubelet[2249]: E0516 00:15:15.260590 2249 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:15:15.263600 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. 
May 16 00:15:15.273013 kubelet[2249]: I0516 00:15:15.272990 2249 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:15:15.273395 kubelet[2249]: E0516 00:15:15.273360 2249 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" May 16 00:15:15.277329 kubelet[2249]: E0516 00:15:15.277303 2249 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:15:15.279865 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 16 00:15:15.281430 kubelet[2249]: E0516 00:15:15.281411 2249 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:15:15.316728 kubelet[2249]: I0516 00:15:15.316595 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:15:15.316728 kubelet[2249]: I0516 00:15:15.316624 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:15:15.316728 kubelet[2249]: I0516 00:15:15.316643 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:15:15.316728 kubelet[2249]: I0516 00:15:15.316661 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:15:15.316728 kubelet[2249]: I0516 00:15:15.316679 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 00:15:15.316957 kubelet[2249]: I0516 00:15:15.316695 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df07c76b5a3a27ecfc806e1c65ec41c4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"df07c76b5a3a27ecfc806e1c65ec41c4\") " pod="kube-system/kube-apiserver-localhost" May 16 00:15:15.316957 kubelet[2249]: I0516 00:15:15.316709 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df07c76b5a3a27ecfc806e1c65ec41c4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"df07c76b5a3a27ecfc806e1c65ec41c4\") " pod="kube-system/kube-apiserver-localhost" May 16 00:15:15.316957 kubelet[2249]: I0516 00:15:15.316727 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df07c76b5a3a27ecfc806e1c65ec41c4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"df07c76b5a3a27ecfc806e1c65ec41c4\") " pod="kube-system/kube-apiserver-localhost" May 16 00:15:15.316957 kubelet[2249]: I0516 00:15:15.316741 2249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:15:15.316957 kubelet[2249]: E0516 00:15:15.316766 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms" May 16 00:15:15.475047 kubelet[2249]: I0516 00:15:15.475020 2249 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:15:15.475276 kubelet[2249]: E0516 00:15:15.475243 2249 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" May 16 00:15:15.561832 kubelet[2249]: E0516 00:15:15.561796 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:15.562538 containerd[1506]: time="2025-05-16T00:15:15.562493242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:df07c76b5a3a27ecfc806e1c65ec41c4,Namespace:kube-system,Attempt:0,}" May 16 00:15:15.577791 kubelet[2249]: E0516 00:15:15.577697 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:15.578156 containerd[1506]: time="2025-05-16T00:15:15.578124572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 16 00:15:15.582667 kubelet[2249]: E0516 00:15:15.582638 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:15.583050 containerd[1506]: time="2025-05-16T00:15:15.583020736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 16 00:15:15.718305 kubelet[2249]: E0516 00:15:15.718260 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms" May 16 00:15:15.876985 kubelet[2249]: I0516 00:15:15.876904 2249 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:15:15.877309 kubelet[2249]: E0516 00:15:15.877270 2249 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" May 16 00:15:15.937415 kubelet[2249]: W0516 00:15:15.937337 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 16 00:15:15.937510 kubelet[2249]: E0516 00:15:15.937424 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 16 00:15:16.125890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958824510.mount: Deactivated successfully. May 16 00:15:16.132573 containerd[1506]: time="2025-05-16T00:15:16.132452944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:15:16.135323 containerd[1506]: time="2025-05-16T00:15:16.135254118Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 16 00:15:16.136156 containerd[1506]: time="2025-05-16T00:15:16.136128358Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:15:16.137975 containerd[1506]: time="2025-05-16T00:15:16.137945927Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:15:16.138794 containerd[1506]: time="2025-05-16T00:15:16.138740127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 00:15:16.139743 containerd[1506]: time="2025-05-16T00:15:16.139707451Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:15:16.140436 containerd[1506]: time="2025-05-16T00:15:16.140407594Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes 
read=0" May 16 00:15:16.141319 containerd[1506]: time="2025-05-16T00:15:16.141291242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:15:16.142966 containerd[1506]: time="2025-05-16T00:15:16.142932080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.638452ms" May 16 00:15:16.143631 containerd[1506]: time="2025-05-16T00:15:16.143602176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.391723ms" May 16 00:15:16.146651 containerd[1506]: time="2025-05-16T00:15:16.146615078Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.024172ms" May 16 00:15:16.306826 containerd[1506]: time="2025-05-16T00:15:16.306580127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:15:16.306826 containerd[1506]: time="2025-05-16T00:15:16.306637224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:15:16.306826 containerd[1506]: time="2025-05-16T00:15:16.306651450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:15:16.306826 containerd[1506]: time="2025-05-16T00:15:16.306722053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:15:16.307423 containerd[1506]: time="2025-05-16T00:15:16.305655121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:15:16.307423 containerd[1506]: time="2025-05-16T00:15:16.306974055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:15:16.307423 containerd[1506]: time="2025-05-16T00:15:16.306988052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:15:16.307423 containerd[1506]: time="2025-05-16T00:15:16.307088971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:15:16.309056 containerd[1506]: time="2025-05-16T00:15:16.307749971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:15:16.309056 containerd[1506]: time="2025-05-16T00:15:16.307799574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:15:16.309056 containerd[1506]: time="2025-05-16T00:15:16.307813049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:15:16.309056 containerd[1506]: time="2025-05-16T00:15:16.307883020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:15:16.329356 systemd[1]: Started cri-containerd-bbab5febde1637d694f91230db5f6c7cbe253a41d5b4b0d379d7fe58507eb824.scope - libcontainer container bbab5febde1637d694f91230db5f6c7cbe253a41d5b4b0d379d7fe58507eb824. May 16 00:15:16.331208 kubelet[2249]: W0516 00:15:16.331141 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 16 00:15:16.331208 kubelet[2249]: E0516 00:15:16.331179 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 16 00:15:16.334007 systemd[1]: Started cri-containerd-3d08056c5d3a54bed205923c513008de7cda47c8daa97bc7aecdda8825bc25b0.scope - libcontainer container 3d08056c5d3a54bed205923c513008de7cda47c8daa97bc7aecdda8825bc25b0. May 16 00:15:16.336109 systemd[1]: Started cri-containerd-4b955e58f157846e069d2752bd843badba149a800fbbc914d1beba9a9b4421b0.scope - libcontainer container 4b955e58f157846e069d2752bd843badba149a800fbbc914d1beba9a9b4421b0. 
May 16 00:15:16.369712 containerd[1506]: time="2025-05-16T00:15:16.369649627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:df07c76b5a3a27ecfc806e1c65ec41c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbab5febde1637d694f91230db5f6c7cbe253a41d5b4b0d379d7fe58507eb824\"" May 16 00:15:16.370683 kubelet[2249]: E0516 00:15:16.370628 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:16.372940 containerd[1506]: time="2025-05-16T00:15:16.372858616Z" level=info msg="CreateContainer within sandbox \"bbab5febde1637d694f91230db5f6c7cbe253a41d5b4b0d379d7fe58507eb824\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:15:16.375447 containerd[1506]: time="2025-05-16T00:15:16.375409030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d08056c5d3a54bed205923c513008de7cda47c8daa97bc7aecdda8825bc25b0\"" May 16 00:15:16.377285 kubelet[2249]: E0516 00:15:16.377267 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:16.380292 containerd[1506]: time="2025-05-16T00:15:16.380249940Z" level=info msg="CreateContainer within sandbox \"3d08056c5d3a54bed205923c513008de7cda47c8daa97bc7aecdda8825bc25b0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:15:16.381518 containerd[1506]: time="2025-05-16T00:15:16.381450071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b955e58f157846e069d2752bd843badba149a800fbbc914d1beba9a9b4421b0\"" May 16 00:15:16.382464 
kubelet[2249]: E0516 00:15:16.382428 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:16.384450 containerd[1506]: time="2025-05-16T00:15:16.384376370Z" level=info msg="CreateContainer within sandbox \"4b955e58f157846e069d2752bd843badba149a800fbbc914d1beba9a9b4421b0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:15:16.391288 containerd[1506]: time="2025-05-16T00:15:16.391179340Z" level=info msg="CreateContainer within sandbox \"bbab5febde1637d694f91230db5f6c7cbe253a41d5b4b0d379d7fe58507eb824\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db9988b77717a2f90bce54558fc118e4fb1107e9a9e8c766adfb771dca1ba6e1\"" May 16 00:15:16.391851 containerd[1506]: time="2025-05-16T00:15:16.391823338Z" level=info msg="StartContainer for \"db9988b77717a2f90bce54558fc118e4fb1107e9a9e8c766adfb771dca1ba6e1\"" May 16 00:15:16.408416 containerd[1506]: time="2025-05-16T00:15:16.408379463Z" level=info msg="CreateContainer within sandbox \"4b955e58f157846e069d2752bd843badba149a800fbbc914d1beba9a9b4421b0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cb048e21a046c6338d0ede7cd62f7ed253e7ca0182f4251249f72dc18a47c5e6\"" May 16 00:15:16.408869 containerd[1506]: time="2025-05-16T00:15:16.408840778Z" level=info msg="StartContainer for \"cb048e21a046c6338d0ede7cd62f7ed253e7ca0182f4251249f72dc18a47c5e6\"" May 16 00:15:16.410487 containerd[1506]: time="2025-05-16T00:15:16.410377000Z" level=info msg="CreateContainer within sandbox \"3d08056c5d3a54bed205923c513008de7cda47c8daa97bc7aecdda8825bc25b0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c74a7f4c96040ba97eea5f324dbb5d52e17e9be97db7a575dc9777fc4c144b63\"" May 16 00:15:16.411655 containerd[1506]: time="2025-05-16T00:15:16.410716196Z" level=info msg="StartContainer for 
\"c74a7f4c96040ba97eea5f324dbb5d52e17e9be97db7a575dc9777fc4c144b63\"" May 16 00:15:16.422372 systemd[1]: Started cri-containerd-db9988b77717a2f90bce54558fc118e4fb1107e9a9e8c766adfb771dca1ba6e1.scope - libcontainer container db9988b77717a2f90bce54558fc118e4fb1107e9a9e8c766adfb771dca1ba6e1. May 16 00:15:16.434350 systemd[1]: Started cri-containerd-cb048e21a046c6338d0ede7cd62f7ed253e7ca0182f4251249f72dc18a47c5e6.scope - libcontainer container cb048e21a046c6338d0ede7cd62f7ed253e7ca0182f4251249f72dc18a47c5e6. May 16 00:15:16.438716 systemd[1]: Started cri-containerd-c74a7f4c96040ba97eea5f324dbb5d52e17e9be97db7a575dc9777fc4c144b63.scope - libcontainer container c74a7f4c96040ba97eea5f324dbb5d52e17e9be97db7a575dc9777fc4c144b63. May 16 00:15:16.472320 containerd[1506]: time="2025-05-16T00:15:16.472054509Z" level=info msg="StartContainer for \"db9988b77717a2f90bce54558fc118e4fb1107e9a9e8c766adfb771dca1ba6e1\" returns successfully" May 16 00:15:16.480645 containerd[1506]: time="2025-05-16T00:15:16.480598425Z" level=info msg="StartContainer for \"cb048e21a046c6338d0ede7cd62f7ed253e7ca0182f4251249f72dc18a47c5e6\" returns successfully" May 16 00:15:16.486910 containerd[1506]: time="2025-05-16T00:15:16.486871201Z" level=info msg="StartContainer for \"c74a7f4c96040ba97eea5f324dbb5d52e17e9be97db7a575dc9777fc4c144b63\" returns successfully" May 16 00:15:16.514493 kubelet[2249]: W0516 00:15:16.513506 2249 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 16 00:15:16.514493 kubelet[2249]: E0516 00:15:16.513566 2249 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.135:6443: connect: connection refused" logger="UnhandledError" May 16 00:15:16.519145 kubelet[2249]: E0516 00:15:16.519111 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="1.6s" May 16 00:15:16.678843 kubelet[2249]: I0516 00:15:16.678718 2249 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:15:17.141147 kubelet[2249]: E0516 00:15:17.141035 2249 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:15:17.141147 kubelet[2249]: E0516 00:15:17.141145 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:17.143483 kubelet[2249]: E0516 00:15:17.143437 2249 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:15:17.143812 kubelet[2249]: E0516 00:15:17.143791 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:17.152630 kubelet[2249]: E0516 00:15:17.152599 2249 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:15:17.152730 kubelet[2249]: E0516 00:15:17.152707 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:17.591986 kubelet[2249]: I0516 00:15:17.591857 2249 kubelet_node_status.go:78] "Successfully 
registered node" node="localhost" May 16 00:15:17.591986 kubelet[2249]: E0516 00:15:17.591892 2249 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 00:15:17.613463 kubelet[2249]: E0516 00:15:17.613420 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:17.714201 kubelet[2249]: E0516 00:15:17.714155 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:17.814876 kubelet[2249]: E0516 00:15:17.814835 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:17.915777 kubelet[2249]: E0516 00:15:17.915690 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:18.016162 kubelet[2249]: E0516 00:15:18.016093 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:18.117123 kubelet[2249]: E0516 00:15:18.117080 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:18.147441 kubelet[2249]: E0516 00:15:18.147409 2249 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:15:18.147538 kubelet[2249]: E0516 00:15:18.147526 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:18.147749 kubelet[2249]: E0516 00:15:18.147732 2249 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:15:18.147837 kubelet[2249]: E0516 
00:15:18.147806 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:18.217287 kubelet[2249]: E0516 00:15:18.217195 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:18.317856 kubelet[2249]: E0516 00:15:18.317814 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:18.418609 kubelet[2249]: E0516 00:15:18.418567 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:18.519488 kubelet[2249]: E0516 00:15:18.519364 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:18.620339 kubelet[2249]: E0516 00:15:18.620288 2249 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:15:18.716705 kubelet[2249]: I0516 00:15:18.716675 2249 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:15:18.762832 kubelet[2249]: I0516 00:15:18.762794 2249 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:15:18.937647 kubelet[2249]: I0516 00:15:18.937518 2249 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:15:19.105887 kubelet[2249]: I0516 00:15:19.105850 2249 apiserver.go:52] "Watching apiserver" May 16 00:15:19.107749 kubelet[2249]: E0516 00:15:19.107712 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:19.116128 kubelet[2249]: I0516 00:15:19.116092 2249 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:15:19.148125 kubelet[2249]: E0516 00:15:19.148096 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:19.148125 kubelet[2249]: I0516 00:15:19.148097 2249 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:15:19.152377 kubelet[2249]: E0516 00:15:19.152350 2249 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:15:19.152502 kubelet[2249]: E0516 00:15:19.152473 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:19.877996 systemd[1]: Reload requested from client PID 2529 ('systemctl') (unit session-7.scope)... May 16 00:15:19.878012 systemd[1]: Reloading... May 16 00:15:19.959259 zram_generator::config[2582]: No configuration found. May 16 00:15:20.063757 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:15:20.149968 kubelet[2249]: E0516 00:15:20.149844 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:20.186509 systemd[1]: Reloading finished in 308 ms. May 16 00:15:20.213545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:15:20.225369 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:15:20.225811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 00:15:20.225890 systemd[1]: kubelet.service: Consumed 953ms CPU time, 133.7M memory peak. May 16 00:15:20.238546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:15:20.425468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:15:20.430467 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:15:20.468134 kubelet[2618]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:15:20.468134 kubelet[2618]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:15:20.468134 kubelet[2618]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:15:20.468526 kubelet[2618]: I0516 00:15:20.468208 2618 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:15:20.476333 kubelet[2618]: I0516 00:15:20.476280 2618 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:15:20.476333 kubelet[2618]: I0516 00:15:20.476314 2618 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:15:20.476643 kubelet[2618]: I0516 00:15:20.476618 2618 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:15:20.477800 kubelet[2618]: I0516 00:15:20.477774 2618 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 16 00:15:20.480127 kubelet[2618]: I0516 00:15:20.480084 2618 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:15:20.483709 kubelet[2618]: E0516 00:15:20.483678 2618 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:15:20.483709 kubelet[2618]: I0516 00:15:20.483701 2618 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:15:20.489557 kubelet[2618]: I0516 00:15:20.489537 2618 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:15:20.489785 kubelet[2618]: I0516 00:15:20.489747 2618 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:15:20.490070 kubelet[2618]: I0516 00:15:20.489782 2618 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 00:15:20.490143 kubelet[2618]: I0516 00:15:20.490077 2618 topology_manager.go:138] "Creating topology manager with none policy"
May 16 00:15:20.490143 kubelet[2618]: I0516 00:15:20.490087 2618 container_manager_linux.go:304] "Creating device plugin manager"
May 16 00:15:20.490143 kubelet[2618]: I0516 00:15:20.490140 2618 state_mem.go:36] "Initialized new in-memory state store"
May 16 00:15:20.490331 kubelet[2618]: I0516 00:15:20.490314 2618 kubelet.go:446] "Attempting to sync node with API server"
May 16 00:15:20.490360 kubelet[2618]: I0516 00:15:20.490338 2618 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 00:15:20.490360 kubelet[2618]: I0516 00:15:20.490354 2618 kubelet.go:352] "Adding apiserver pod source"
May 16 00:15:20.490412 kubelet[2618]: I0516 00:15:20.490364 2618 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 00:15:20.491232 kubelet[2618]: I0516 00:15:20.491188 2618 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 16 00:15:20.491864 kubelet[2618]: I0516 00:15:20.491833 2618 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 00:15:20.492539 kubelet[2618]: I0516 00:15:20.492469 2618 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 16 00:15:20.492539 kubelet[2618]: I0516 00:15:20.492508 2618 server.go:1287] "Started kubelet"
May 16 00:15:20.493903 kubelet[2618]: I0516 00:15:20.493853 2618 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 16 00:15:20.495664 kubelet[2618]: I0516 00:15:20.493950 2618 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 00:15:20.495664 kubelet[2618]: I0516 00:15:20.494711 2618 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 00:15:20.499279 kubelet[2618]: I0516 00:15:20.496971 2618 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 00:15:20.499279 kubelet[2618]: I0516 00:15:20.497323 2618 server.go:479] "Adding debug handlers to kubelet server"
May 16 00:15:20.499279 kubelet[2618]: E0516 00:15:20.498480 2618 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:15:20.499279 kubelet[2618]: I0516 00:15:20.498528 2618 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 16 00:15:20.499279 kubelet[2618]: I0516 00:15:20.498595 2618 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 16 00:15:20.499279 kubelet[2618]: I0516 00:15:20.498699 2618 reconciler.go:26] "Reconciler: start to sync state"
May 16 00:15:20.499279 kubelet[2618]: I0516 00:15:20.498761 2618 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 00:15:20.502438 kubelet[2618]: I0516 00:15:20.502410 2618 factory.go:221] Registration of the systemd container factory successfully
May 16 00:15:20.502570 kubelet[2618]: I0516 00:15:20.502531 2618 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 00:15:20.504787 kubelet[2618]: I0516 00:15:20.504691 2618 factory.go:221] Registration of the containerd container factory successfully
May 16 00:15:20.508375 kubelet[2618]: I0516 00:15:20.508337 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 16 00:15:20.508622 kubelet[2618]: E0516 00:15:20.508573 2618 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 00:15:20.509806 kubelet[2618]: I0516 00:15:20.509786 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 16 00:15:20.509806 kubelet[2618]: I0516 00:15:20.509808 2618 status_manager.go:227] "Starting to sync pod status with apiserver"
May 16 00:15:20.509873 kubelet[2618]: I0516 00:15:20.509830 2618 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 16 00:15:20.509873 kubelet[2618]: I0516 00:15:20.509840 2618 kubelet.go:2382] "Starting kubelet main sync loop"
May 16 00:15:20.509922 kubelet[2618]: E0516 00:15:20.509890 2618 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 00:15:20.540953 kubelet[2618]: I0516 00:15:20.540923 2618 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 16 00:15:20.540953 kubelet[2618]: I0516 00:15:20.540943 2618 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 16 00:15:20.540953 kubelet[2618]: I0516 00:15:20.540960 2618 state_mem.go:36] "Initialized new in-memory state store"
May 16 00:15:20.541122 kubelet[2618]: I0516 00:15:20.541101 2618 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 16 00:15:20.541143 kubelet[2618]: I0516 00:15:20.541111 2618 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 16 00:15:20.541143 kubelet[2618]: I0516 00:15:20.541130 2618 policy_none.go:49] "None policy: Start"
May 16 00:15:20.541143 kubelet[2618]: I0516 00:15:20.541139 2618 memory_manager.go:186] "Starting memorymanager" policy="None"
May 16 00:15:20.541203 kubelet[2618]: I0516 00:15:20.541148 2618 state_mem.go:35] "Initializing new in-memory state store"
May 16 00:15:20.541279 kubelet[2618]: I0516 00:15:20.541267 2618 state_mem.go:75] "Updated machine memory state"
May 16 00:15:20.545111 kubelet[2618]: I0516 00:15:20.545089 2618 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 16 00:15:20.545274 kubelet[2618]: I0516 00:15:20.545255 2618 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 00:15:20.545305 kubelet[2618]: I0516 00:15:20.545274 2618 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 00:15:20.545567 kubelet[2618]: I0516 00:15:20.545502 2618 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 00:15:20.546470 kubelet[2618]: E0516 00:15:20.546454 2618 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 16 00:15:20.610846 kubelet[2618]: I0516 00:15:20.610799 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 16 00:15:20.610846 kubelet[2618]: I0516 00:15:20.610825 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 16 00:15:20.610971 kubelet[2618]: I0516 00:15:20.610911 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 16 00:15:20.615733 kubelet[2618]: E0516 00:15:20.615683 2618 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 16 00:15:20.615802 kubelet[2618]: E0516 00:15:20.615759 2618 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 16 00:15:20.615964 kubelet[2618]: E0516 00:15:20.615947 2618 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 16 00:15:20.650383 kubelet[2618]: I0516 00:15:20.650362 2618 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 00:15:20.656255 kubelet[2618]: I0516 00:15:20.656238 2618 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
May 16 00:15:20.656301 kubelet[2618]: I0516 00:15:20.656284 2618 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 16 00:15:20.700180 kubelet[2618]: I0516 00:15:20.700094 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost"
May 16 00:15:20.700180 kubelet[2618]: I0516 00:15:20.700127 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:15:20.700180 kubelet[2618]: I0516 00:15:20.700147 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:15:20.700180 kubelet[2618]: I0516 00:15:20.700168 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:15:20.700334 kubelet[2618]: I0516 00:15:20.700196 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df07c76b5a3a27ecfc806e1c65ec41c4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"df07c76b5a3a27ecfc806e1c65ec41c4\") " pod="kube-system/kube-apiserver-localhost"
May 16 00:15:20.700334 kubelet[2618]: I0516 00:15:20.700211 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df07c76b5a3a27ecfc806e1c65ec41c4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"df07c76b5a3a27ecfc806e1c65ec41c4\") " pod="kube-system/kube-apiserver-localhost"
May 16 00:15:20.700334 kubelet[2618]: I0516 00:15:20.700267 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df07c76b5a3a27ecfc806e1c65ec41c4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"df07c76b5a3a27ecfc806e1c65ec41c4\") " pod="kube-system/kube-apiserver-localhost"
May 16 00:15:20.700334 kubelet[2618]: I0516 00:15:20.700285 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:15:20.700334 kubelet[2618]: I0516 00:15:20.700299 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:15:20.867016 sudo[2655]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 16 00:15:20.867409 sudo[2655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 16 00:15:20.916273 kubelet[2618]: E0516 00:15:20.915937 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:20.916273 kubelet[2618]: E0516 00:15:20.916085 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:20.916273 kubelet[2618]: E0516 00:15:20.916169 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:21.321976 sudo[2655]: pam_unix(sudo:session): session closed for user root
May 16 00:15:21.491357 kubelet[2618]: I0516 00:15:21.491319 2618 apiserver.go:52] "Watching apiserver"
May 16 00:15:21.499536 kubelet[2618]: I0516 00:15:21.499492 2618 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 16 00:15:21.525451 kubelet[2618]: I0516 00:15:21.525152 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 16 00:15:21.525451 kubelet[2618]: I0516 00:15:21.525270 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 16 00:15:21.525451 kubelet[2618]: I0516 00:15:21.525375 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 16 00:15:21.668319 kubelet[2618]: E0516 00:15:21.668170 2618 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 16 00:15:21.668448 kubelet[2618]: E0516 00:15:21.668402 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:21.669613 kubelet[2618]: E0516 00:15:21.668977 2618 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 16 00:15:21.669922 kubelet[2618]: E0516 00:15:21.669846 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:21.670257 kubelet[2618]: E0516 00:15:21.670202 2618 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 16 00:15:21.670414 kubelet[2618]: E0516 00:15:21.670383 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:21.781418 kubelet[2618]: I0516 00:15:21.781300 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.781284796 podStartE2EDuration="3.781284796s" podCreationTimestamp="2025-05-16 00:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:15:21.781065194 +0000 UTC m=+1.346549159" watchObservedRunningTime="2025-05-16 00:15:21.781284796 +0000 UTC m=+1.346768761"
May 16 00:15:21.795268 kubelet[2618]: I0516 00:15:21.795162 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.795147386 podStartE2EDuration="3.795147386s" podCreationTimestamp="2025-05-16 00:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:15:21.788722128 +0000 UTC m=+1.354206093" watchObservedRunningTime="2025-05-16 00:15:21.795147386 +0000 UTC m=+1.360631351"
May 16 00:15:21.805801 kubelet[2618]: I0516 00:15:21.805754 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.8057437739999997 podStartE2EDuration="3.805743774s" podCreationTimestamp="2025-05-16 00:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:15:21.795391355 +0000 UTC m=+1.360875320" watchObservedRunningTime="2025-05-16 00:15:21.805743774 +0000 UTC m=+1.371227739"
May 16 00:15:22.526825 kubelet[2618]: E0516 00:15:22.526551 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:22.527362 kubelet[2618]: E0516 00:15:22.527057 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:22.527362 kubelet[2618]: E0516 00:15:22.527251 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:22.540152 sudo[1693]: pam_unix(sudo:session): session closed for user root
May 16 00:15:22.541607 sshd[1692]: Connection closed by 10.0.0.1 port 47266
May 16 00:15:22.542045 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
May 16 00:15:22.546458 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:47266.service: Deactivated successfully.
May 16 00:15:22.548848 systemd[1]: session-7.scope: Deactivated successfully.
May 16 00:15:22.549057 systemd[1]: session-7.scope: Consumed 4.798s CPU time, 250.7M memory peak.
May 16 00:15:22.550332 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit.
May 16 00:15:22.551287 systemd-logind[1494]: Removed session 7.
May 16 00:15:23.527658 kubelet[2618]: E0516 00:15:23.527620 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:25.974359 kubelet[2618]: I0516 00:15:25.974320 2618 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 16 00:15:25.974782 containerd[1506]: time="2025-05-16T00:15:25.974676310Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 16 00:15:25.975017 kubelet[2618]: I0516 00:15:25.974856 2618 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 16 00:15:26.763833 systemd[1]: Created slice kubepods-besteffort-pod43d49b6a_edff_4cd4_801a_aacb4eb9adc7.slice - libcontainer container kubepods-besteffort-pod43d49b6a_edff_4cd4_801a_aacb4eb9adc7.slice.
May 16 00:15:26.782181 systemd[1]: Created slice kubepods-burstable-pod34120b60_493f_4364_85eb_7f0e69e4dd3d.slice - libcontainer container kubepods-burstable-pod34120b60_493f_4364_85eb_7f0e69e4dd3d.slice.
May 16 00:15:26.838742 kubelet[2618]: I0516 00:15:26.838685 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-etc-cni-netd\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.838742 kubelet[2618]: I0516 00:15:26.838727 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-xtables-lock\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.838742 kubelet[2618]: I0516 00:15:26.838743 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34120b60-493f-4364-85eb-7f0e69e4dd3d-clustermesh-secrets\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.838935 kubelet[2618]: I0516 00:15:26.838757 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-run\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.838935 kubelet[2618]: I0516 00:15:26.838773 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-hostproc\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.838935 kubelet[2618]: I0516 00:15:26.838787 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhn7\" (UniqueName: \"kubernetes.io/projected/34120b60-493f-4364-85eb-7f0e69e4dd3d-kube-api-access-nvhn7\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.838935 kubelet[2618]: I0516 00:15:26.838803 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43d49b6a-edff-4cd4-801a-aacb4eb9adc7-xtables-lock\") pod \"kube-proxy-j2t2s\" (UID: \"43d49b6a-edff-4cd4-801a-aacb4eb9adc7\") " pod="kube-system/kube-proxy-j2t2s"
May 16 00:15:26.838935 kubelet[2618]: I0516 00:15:26.838817 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52stw\" (UniqueName: \"kubernetes.io/projected/43d49b6a-edff-4cd4-801a-aacb4eb9adc7-kube-api-access-52stw\") pod \"kube-proxy-j2t2s\" (UID: \"43d49b6a-edff-4cd4-801a-aacb4eb9adc7\") " pod="kube-system/kube-proxy-j2t2s"
May 16 00:15:26.838935 kubelet[2618]: I0516 00:15:26.838830 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-lib-modules\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.839079 kubelet[2618]: I0516 00:15:26.838843 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34120b60-493f-4364-85eb-7f0e69e4dd3d-hubble-tls\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.839079 kubelet[2618]: I0516 00:15:26.838858 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-bpf-maps\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.839079 kubelet[2618]: I0516 00:15:26.838871 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-host-proc-sys-net\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.839079 kubelet[2618]: I0516 00:15:26.838894 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43d49b6a-edff-4cd4-801a-aacb4eb9adc7-kube-proxy\") pod \"kube-proxy-j2t2s\" (UID: \"43d49b6a-edff-4cd4-801a-aacb4eb9adc7\") " pod="kube-system/kube-proxy-j2t2s"
May 16 00:15:26.839079 kubelet[2618]: I0516 00:15:26.838907 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43d49b6a-edff-4cd4-801a-aacb4eb9adc7-lib-modules\") pod \"kube-proxy-j2t2s\" (UID: \"43d49b6a-edff-4cd4-801a-aacb4eb9adc7\") " pod="kube-system/kube-proxy-j2t2s"
May 16 00:15:26.839079 kubelet[2618]: I0516 00:15:26.838920 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-config-path\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.839265 kubelet[2618]: I0516 00:15:26.838934 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cni-path\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.839265 kubelet[2618]: I0516 00:15:26.838948 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-host-proc-sys-kernel\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:26.839265 kubelet[2618]: I0516 00:15:26.838963 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-cgroup\") pod \"cilium-94l2v\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " pod="kube-system/cilium-94l2v"
May 16 00:15:27.080618 kubelet[2618]: E0516 00:15:27.080479 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:27.081629 containerd[1506]: time="2025-05-16T00:15:27.081595225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2t2s,Uid:43d49b6a-edff-4cd4-801a-aacb4eb9adc7,Namespace:kube-system,Attempt:0,}"
May 16 00:15:27.088849 kubelet[2618]: E0516 00:15:27.088817 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:27.089644 containerd[1506]: time="2025-05-16T00:15:27.089583310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-94l2v,Uid:34120b60-493f-4364-85eb-7f0e69e4dd3d,Namespace:kube-system,Attempt:0,}"
May 16 00:15:27.117622 systemd[1]: Created slice kubepods-besteffort-podb71100fa_0210_4093_b377_75bc2bdb1e2e.slice - libcontainer container kubepods-besteffort-podb71100fa_0210_4093_b377_75bc2bdb1e2e.slice.
May 16 00:15:27.125868 containerd[1506]: time="2025-05-16T00:15:27.125767274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:15:27.125868 containerd[1506]: time="2025-05-16T00:15:27.125824824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:15:27.125868 containerd[1506]: time="2025-05-16T00:15:27.125839091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:15:27.126614 containerd[1506]: time="2025-05-16T00:15:27.125913934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:15:27.145646 containerd[1506]: time="2025-05-16T00:15:27.145246472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:15:27.145646 containerd[1506]: time="2025-05-16T00:15:27.145318709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:15:27.145646 containerd[1506]: time="2025-05-16T00:15:27.145332495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:15:27.145646 containerd[1506]: time="2025-05-16T00:15:27.145432626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:15:27.152401 systemd[1]: Started cri-containerd-8b53368cbd58f8143cc1fd1156149cc9c9b0c0785954a4bd11270a80ac56676d.scope - libcontainer container 8b53368cbd58f8143cc1fd1156149cc9c9b0c0785954a4bd11270a80ac56676d.
May 16 00:15:27.159009 systemd[1]: Started cri-containerd-1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9.scope - libcontainer container 1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9.
May 16 00:15:27.178449 containerd[1506]: time="2025-05-16T00:15:27.178337943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2t2s,Uid:43d49b6a-edff-4cd4-801a-aacb4eb9adc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b53368cbd58f8143cc1fd1156149cc9c9b0c0785954a4bd11270a80ac56676d\""
May 16 00:15:27.179150 kubelet[2618]: E0516 00:15:27.179130 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:27.182193 containerd[1506]: time="2025-05-16T00:15:27.182105735Z" level=info msg="CreateContainer within sandbox \"8b53368cbd58f8143cc1fd1156149cc9c9b0c0785954a4bd11270a80ac56676d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 16 00:15:27.185683 containerd[1506]: time="2025-05-16T00:15:27.185591528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-94l2v,Uid:34120b60-493f-4364-85eb-7f0e69e4dd3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\""
May 16 00:15:27.187273 kubelet[2618]: E0516 00:15:27.186513 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:27.189111 containerd[1506]: time="2025-05-16T00:15:27.188985337Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 16 00:15:27.200982 containerd[1506]: time="2025-05-16T00:15:27.200939271Z" level=info msg="CreateContainer within sandbox \"8b53368cbd58f8143cc1fd1156149cc9c9b0c0785954a4bd11270a80ac56676d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c77e49286aebcadebe3918edd8d498f0aa944c2fe6cdb06e7fa7936fddc80cd7\""
May 16 00:15:27.201497 containerd[1506]: time="2025-05-16T00:15:27.201452048Z" level=info msg="StartContainer for \"c77e49286aebcadebe3918edd8d498f0aa944c2fe6cdb06e7fa7936fddc80cd7\""
May 16 00:15:27.227345 systemd[1]: Started cri-containerd-c77e49286aebcadebe3918edd8d498f0aa944c2fe6cdb06e7fa7936fddc80cd7.scope - libcontainer container c77e49286aebcadebe3918edd8d498f0aa944c2fe6cdb06e7fa7936fddc80cd7.
May 16 00:15:27.241066 kubelet[2618]: I0516 00:15:27.241025 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b71100fa-0210-4093-b377-75bc2bdb1e2e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7455x\" (UID: \"b71100fa-0210-4093-b377-75bc2bdb1e2e\") " pod="kube-system/cilium-operator-6c4d7847fc-7455x"
May 16 00:15:27.241066 kubelet[2618]: I0516 00:15:27.241065 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqvbn\" (UniqueName: \"kubernetes.io/projected/b71100fa-0210-4093-b377-75bc2bdb1e2e-kube-api-access-nqvbn\") pod \"cilium-operator-6c4d7847fc-7455x\" (UID: \"b71100fa-0210-4093-b377-75bc2bdb1e2e\") " pod="kube-system/cilium-operator-6c4d7847fc-7455x"
May 16 00:15:27.258436 containerd[1506]: time="2025-05-16T00:15:27.258342981Z" level=info msg="StartContainer for \"c77e49286aebcadebe3918edd8d498f0aa944c2fe6cdb06e7fa7936fddc80cd7\" returns successfully"
May 16 00:15:27.425516 kubelet[2618]: E0516 00:15:27.425369 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:27.425936 containerd[1506]: time="2025-05-16T00:15:27.425884281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7455x,Uid:b71100fa-0210-4093-b377-75bc2bdb1e2e,Namespace:kube-system,Attempt:0,}"
May 16 00:15:27.451401 containerd[1506]: time="2025-05-16T00:15:27.451189191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:15:27.451401 containerd[1506]: time="2025-05-16T00:15:27.451372321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:15:27.451401 containerd[1506]: time="2025-05-16T00:15:27.451399473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:15:27.451659 containerd[1506]: time="2025-05-16T00:15:27.451512638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:15:27.478398 systemd[1]: Started cri-containerd-a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903.scope - libcontainer container a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903.
May 16 00:15:27.516457 containerd[1506]: time="2025-05-16T00:15:27.516372570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7455x,Uid:b71100fa-0210-4093-b377-75bc2bdb1e2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903\""
May 16 00:15:27.517362 kubelet[2618]: E0516 00:15:27.517186 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:27.537965 kubelet[2618]: E0516 00:15:27.537538 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:29.726490 kubelet[2618]: E0516 00:15:29.726411 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:29.739558 kubelet[2618]: I0516 00:15:29.739481 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j2t2s" podStartSLOduration=3.739451778 podStartE2EDuration="3.739451778s" podCreationTimestamp="2025-05-16 00:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:15:27.54603217 +0000 UTC m=+7.111516135" watchObservedRunningTime="2025-05-16 00:15:29.739451778 +0000 UTC m=+9.304935743"
May 16 00:15:30.543762 kubelet[2618]: E0516 00:15:30.543714 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:30.784659 kubelet[2618]: E0516 00:15:30.784555 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:31.489764 kubelet[2618]: E0516 00:15:31.489711 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:31.545254 kubelet[2618]: E0516 00:15:31.545009 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:31.545254 kubelet[2618]: E0516 00:15:31.545152 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:31.545402 kubelet[2618]: E0516 00:15:31.545271 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:15:32.785910 update_engine[1495]: I20250516 00:15:32.785832 1495 update_attempter.cc:509] Updating boot flags...
May 16 00:15:32.976277 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2999)
May 16 00:15:33.021250 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2998)
May 16 00:15:33.065686 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2998)
May 16 00:15:36.685702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004827967.mount: Deactivated successfully.
May 16 00:15:39.302209 containerd[1506]: time="2025-05-16T00:15:39.302153649Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:15:39.302945 containerd[1506]: time="2025-05-16T00:15:39.302890643Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 16 00:15:39.304016 containerd[1506]: time="2025-05-16T00:15:39.303977436Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:15:39.305712 containerd[1506]: time="2025-05-16T00:15:39.305678511Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.116662135s"
May 16 00:15:39.305777 containerd[1506]: time="2025-05-16T00:15:39.305712264Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 16 00:15:39.312206 containerd[1506]: time="2025-05-16T00:15:39.312173844Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 16 00:15:39.322859 containerd[1506]: time="2025-05-16T00:15:39.322814842Z" level=info msg="CreateContainer within sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" for container
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:15:39.343314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533546591.mount: Deactivated successfully. May 16 00:15:39.344704 containerd[1506]: time="2025-05-16T00:15:39.344657507Z" level=info msg="CreateContainer within sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\"" May 16 00:15:39.347333 containerd[1506]: time="2025-05-16T00:15:39.347299360Z" level=info msg="StartContainer for \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\"" May 16 00:15:39.381418 systemd[1]: Started cri-containerd-5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c.scope - libcontainer container 5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c. May 16 00:15:39.408893 containerd[1506]: time="2025-05-16T00:15:39.408852766Z" level=info msg="StartContainer for \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\" returns successfully" May 16 00:15:39.419262 systemd[1]: cri-containerd-5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c.scope: Deactivated successfully. 
May 16 00:15:39.643438 kubelet[2618]: E0516 00:15:39.643288 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:39.945272 containerd[1506]: time="2025-05-16T00:15:39.945107167Z" level=info msg="shim disconnected" id=5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c namespace=k8s.io May 16 00:15:39.945272 containerd[1506]: time="2025-05-16T00:15:39.945171909Z" level=warning msg="cleaning up after shim disconnected" id=5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c namespace=k8s.io May 16 00:15:39.945272 containerd[1506]: time="2025-05-16T00:15:39.945182168Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:15:40.340353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c-rootfs.mount: Deactivated successfully. May 16 00:15:40.584359 kubelet[2618]: E0516 00:15:40.583790 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:40.586874 containerd[1506]: time="2025-05-16T00:15:40.586003644Z" level=info msg="CreateContainer within sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:15:40.621908 containerd[1506]: time="2025-05-16T00:15:40.621773476Z" level=info msg="CreateContainer within sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\"" May 16 00:15:40.622547 containerd[1506]: time="2025-05-16T00:15:40.622487404Z" level=info msg="StartContainer for 
\"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\"" May 16 00:15:40.664553 systemd[1]: Started cri-containerd-b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633.scope - libcontainer container b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633. May 16 00:15:40.712113 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:15:40.712445 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:15:40.712878 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 00:15:40.718667 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:15:40.718902 systemd[1]: cri-containerd-b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633.scope: Deactivated successfully. May 16 00:15:40.741061 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:15:40.813630 containerd[1506]: time="2025-05-16T00:15:40.813478308Z" level=info msg="StartContainer for \"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\" returns successfully" May 16 00:15:40.933087 containerd[1506]: time="2025-05-16T00:15:40.932831251Z" level=info msg="shim disconnected" id=b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633 namespace=k8s.io May 16 00:15:40.933087 containerd[1506]: time="2025-05-16T00:15:40.932957259Z" level=warning msg="cleaning up after shim disconnected" id=b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633 namespace=k8s.io May 16 00:15:40.933087 containerd[1506]: time="2025-05-16T00:15:40.932972708Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:15:41.304849 containerd[1506]: time="2025-05-16T00:15:41.304784996Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:41.305511 containerd[1506]: 
time="2025-05-16T00:15:41.305456886Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 16 00:15:41.306629 containerd[1506]: time="2025-05-16T00:15:41.306586288Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:41.310482 containerd[1506]: time="2025-05-16T00:15:41.309592312Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.99738811s" May 16 00:15:41.310482 containerd[1506]: time="2025-05-16T00:15:41.309624344Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 16 00:15:41.319490 containerd[1506]: time="2025-05-16T00:15:41.319454523Z" level=info msg="CreateContainer within sandbox \"a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 00:15:41.331639 containerd[1506]: time="2025-05-16T00:15:41.331595815Z" level=info msg="CreateContainer within sandbox \"a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\"" May 16 00:15:41.332335 containerd[1506]: time="2025-05-16T00:15:41.332285367Z" level=info msg="StartContainer for 
\"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\"" May 16 00:15:41.340544 systemd[1]: run-containerd-runc-k8s.io-b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633-runc.no5Pxd.mount: Deactivated successfully. May 16 00:15:41.340778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633-rootfs.mount: Deactivated successfully. May 16 00:15:41.361425 systemd[1]: Started cri-containerd-be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4.scope - libcontainer container be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4. May 16 00:15:41.387414 containerd[1506]: time="2025-05-16T00:15:41.387043228Z" level=info msg="StartContainer for \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\" returns successfully" May 16 00:15:41.591569 kubelet[2618]: E0516 00:15:41.590753 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:41.592618 kubelet[2618]: E0516 00:15:41.592136 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:41.594546 containerd[1506]: time="2025-05-16T00:15:41.594488109Z" level=info msg="CreateContainer within sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:15:41.600115 kubelet[2618]: I0516 00:15:41.600011 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7455x" podStartSLOduration=0.806771211 podStartE2EDuration="14.599981149s" podCreationTimestamp="2025-05-16 00:15:27 +0000 UTC" firstStartedPulling="2025-05-16 00:15:27.517839045 +0000 UTC m=+7.083323010" 
lastFinishedPulling="2025-05-16 00:15:41.311048983 +0000 UTC m=+20.876532948" observedRunningTime="2025-05-16 00:15:41.59997112 +0000 UTC m=+21.165455085" watchObservedRunningTime="2025-05-16 00:15:41.599981149 +0000 UTC m=+21.165465114" May 16 00:15:41.621285 containerd[1506]: time="2025-05-16T00:15:41.619478280Z" level=info msg="CreateContainer within sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\"" May 16 00:15:41.621285 containerd[1506]: time="2025-05-16T00:15:41.620369382Z" level=info msg="StartContainer for \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\"" May 16 00:15:41.657450 systemd[1]: Started cri-containerd-291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9.scope - libcontainer container 291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9. May 16 00:15:41.696098 containerd[1506]: time="2025-05-16T00:15:41.696047912Z" level=info msg="StartContainer for \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\" returns successfully" May 16 00:15:41.696667 systemd[1]: cri-containerd-291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9.scope: Deactivated successfully. 
May 16 00:15:42.470635 containerd[1506]: time="2025-05-16T00:15:42.470553096Z" level=info msg="shim disconnected" id=291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9 namespace=k8s.io May 16 00:15:42.470635 containerd[1506]: time="2025-05-16T00:15:42.470610986Z" level=warning msg="cleaning up after shim disconnected" id=291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9 namespace=k8s.io May 16 00:15:42.470635 containerd[1506]: time="2025-05-16T00:15:42.470619802Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:15:42.595538 kubelet[2618]: E0516 00:15:42.595503 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:42.596213 kubelet[2618]: E0516 00:15:42.595572 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:42.597065 containerd[1506]: time="2025-05-16T00:15:42.597028864Z" level=info msg="CreateContainer within sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:15:42.699921 containerd[1506]: time="2025-05-16T00:15:42.699867361Z" level=info msg="CreateContainer within sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\"" May 16 00:15:42.700490 containerd[1506]: time="2025-05-16T00:15:42.700466301Z" level=info msg="StartContainer for \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\"" May 16 00:15:42.738510 systemd[1]: Started cri-containerd-363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186.scope - libcontainer container 
363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186. May 16 00:15:42.775656 systemd[1]: cri-containerd-363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186.scope: Deactivated successfully. May 16 00:15:42.779288 containerd[1506]: time="2025-05-16T00:15:42.779096262Z" level=info msg="StartContainer for \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\" returns successfully" May 16 00:15:42.806180 containerd[1506]: time="2025-05-16T00:15:42.806110594Z" level=info msg="shim disconnected" id=363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186 namespace=k8s.io May 16 00:15:42.806180 containerd[1506]: time="2025-05-16T00:15:42.806171108Z" level=warning msg="cleaning up after shim disconnected" id=363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186 namespace=k8s.io May 16 00:15:42.806180 containerd[1506]: time="2025-05-16T00:15:42.806181047Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:15:42.832632 containerd[1506]: time="2025-05-16T00:15:42.832562274Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:15:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 16 00:15:43.340942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186-rootfs.mount: Deactivated successfully. 
May 16 00:15:43.599841 kubelet[2618]: E0516 00:15:43.599712 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:43.602547 containerd[1506]: time="2025-05-16T00:15:43.602309426Z" level=info msg="CreateContainer within sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:15:43.832251 containerd[1506]: time="2025-05-16T00:15:43.832156980Z" level=info msg="CreateContainer within sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\"" May 16 00:15:43.832798 containerd[1506]: time="2025-05-16T00:15:43.832764066Z" level=info msg="StartContainer for \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\"" May 16 00:15:43.861475 systemd[1]: Started cri-containerd-7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514.scope - libcontainer container 7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514. May 16 00:15:43.954411 containerd[1506]: time="2025-05-16T00:15:43.954342252Z" level=info msg="StartContainer for \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\" returns successfully" May 16 00:15:44.116501 kubelet[2618]: I0516 00:15:44.116376 2618 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 00:15:44.340601 systemd[1]: run-containerd-runc-k8s.io-7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514-runc.Aszpdf.mount: Deactivated successfully. May 16 00:15:44.436760 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:51886.service - OpenSSH per-connection server daemon (10.0.0.1:51886). 
May 16 00:15:44.492753 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 51886 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:15:44.495007 sshd-session[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:44.501302 systemd-logind[1494]: New session 8 of user core. May 16 00:15:44.507409 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 00:15:44.512104 systemd[1]: Created slice kubepods-burstable-pod25f6b0e6_1bc8_4fa4_b125_970b3e0ae996.slice - libcontainer container kubepods-burstable-pod25f6b0e6_1bc8_4fa4_b125_970b3e0ae996.slice. May 16 00:15:44.555713 kubelet[2618]: I0516 00:15:44.555657 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25f6b0e6-1bc8-4fa4-b125-970b3e0ae996-config-volume\") pod \"coredns-668d6bf9bc-zdz95\" (UID: \"25f6b0e6-1bc8-4fa4-b125-970b3e0ae996\") " pod="kube-system/coredns-668d6bf9bc-zdz95" May 16 00:15:44.555713 kubelet[2618]: I0516 00:15:44.555704 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2p9w\" (UniqueName: \"kubernetes.io/projected/25f6b0e6-1bc8-4fa4-b125-970b3e0ae996-kube-api-access-s2p9w\") pod \"coredns-668d6bf9bc-zdz95\" (UID: \"25f6b0e6-1bc8-4fa4-b125-970b3e0ae996\") " pod="kube-system/coredns-668d6bf9bc-zdz95" May 16 00:15:44.611917 kubelet[2618]: E0516 00:15:44.611864 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:44.642950 systemd[1]: Created slice kubepods-burstable-pod781610c1_ec1a_4e11_90e9_69cf7a2a2e53.slice - libcontainer container kubepods-burstable-pod781610c1_ec1a_4e11_90e9_69cf7a2a2e53.slice. 
May 16 00:15:44.757254 kubelet[2618]: I0516 00:15:44.757097 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/781610c1-ec1a-4e11-90e9-69cf7a2a2e53-config-volume\") pod \"coredns-668d6bf9bc-5j7bp\" (UID: \"781610c1-ec1a-4e11-90e9-69cf7a2a2e53\") " pod="kube-system/coredns-668d6bf9bc-5j7bp" May 16 00:15:44.757254 kubelet[2618]: I0516 00:15:44.757139 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvn4x\" (UniqueName: \"kubernetes.io/projected/781610c1-ec1a-4e11-90e9-69cf7a2a2e53-kube-api-access-tvn4x\") pod \"coredns-668d6bf9bc-5j7bp\" (UID: \"781610c1-ec1a-4e11-90e9-69cf7a2a2e53\") " pod="kube-system/coredns-668d6bf9bc-5j7bp" May 16 00:15:45.013772 sshd[3450]: Connection closed by 10.0.0.1 port 51886 May 16 00:15:45.014037 sshd-session[3447]: pam_unix(sshd:session): session closed for user core May 16 00:15:45.018258 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:51886.service: Deactivated successfully. May 16 00:15:45.020492 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:15:45.021392 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit. May 16 00:15:45.022344 systemd-logind[1494]: Removed session 8. 
May 16 00:15:45.115380 kubelet[2618]: E0516 00:15:45.115315 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:45.127913 containerd[1506]: time="2025-05-16T00:15:45.127879662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zdz95,Uid:25f6b0e6-1bc8-4fa4-b125-970b3e0ae996,Namespace:kube-system,Attempt:0,}" May 16 00:15:45.245920 kubelet[2618]: E0516 00:15:45.245881 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:45.246520 containerd[1506]: time="2025-05-16T00:15:45.246472923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5j7bp,Uid:781610c1-ec1a-4e11-90e9-69cf7a2a2e53,Namespace:kube-system,Attempt:0,}" May 16 00:15:45.388621 kubelet[2618]: I0516 00:15:45.388328 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-94l2v" podStartSLOduration=7.264814597 podStartE2EDuration="19.388310473s" podCreationTimestamp="2025-05-16 00:15:26 +0000 UTC" firstStartedPulling="2025-05-16 00:15:27.18853588 +0000 UTC m=+6.754019845" lastFinishedPulling="2025-05-16 00:15:39.312031756 +0000 UTC m=+18.877515721" observedRunningTime="2025-05-16 00:15:45.388010989 +0000 UTC m=+24.953494954" watchObservedRunningTime="2025-05-16 00:15:45.388310473 +0000 UTC m=+24.953794438" May 16 00:15:45.612804 kubelet[2618]: E0516 00:15:45.612765 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:46.434695 systemd-networkd[1424]: cilium_host: Link UP May 16 00:15:46.434891 systemd-networkd[1424]: cilium_net: Link UP May 16 00:15:46.435098 systemd-networkd[1424]: cilium_net: Gained carrier 
May 16 00:15:46.435320 systemd-networkd[1424]: cilium_host: Gained carrier May 16 00:15:46.532325 systemd-networkd[1424]: cilium_vxlan: Link UP May 16 00:15:46.532336 systemd-networkd[1424]: cilium_vxlan: Gained carrier May 16 00:15:46.614017 kubelet[2618]: E0516 00:15:46.613976 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:46.749256 kernel: NET: Registered PF_ALG protocol family May 16 00:15:46.919443 systemd-networkd[1424]: cilium_host: Gained IPv6LL May 16 00:15:47.031458 systemd-networkd[1424]: cilium_net: Gained IPv6LL May 16 00:15:47.460626 systemd-networkd[1424]: lxc_health: Link UP May 16 00:15:47.461001 systemd-networkd[1424]: lxc_health: Gained carrier May 16 00:15:47.648261 kernel: eth0: renamed from tmp6f086 May 16 00:15:47.658252 kernel: eth0: renamed from tmp992d6 May 16 00:15:47.666637 systemd-networkd[1424]: lxc3505abf329e7: Link UP May 16 00:15:47.666916 systemd-networkd[1424]: lxc1004cf525f88: Link UP May 16 00:15:47.667229 systemd-networkd[1424]: lxc1004cf525f88: Gained carrier May 16 00:15:47.667421 systemd-networkd[1424]: lxc3505abf329e7: Gained carrier May 16 00:15:48.310465 systemd-networkd[1424]: cilium_vxlan: Gained IPv6LL May 16 00:15:48.950423 systemd-networkd[1424]: lxc_health: Gained IPv6LL May 16 00:15:49.014448 systemd-networkd[1424]: lxc3505abf329e7: Gained IPv6LL May 16 00:15:49.090443 kubelet[2618]: E0516 00:15:49.090390 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:49.529374 systemd-networkd[1424]: lxc1004cf525f88: Gained IPv6LL May 16 00:15:49.618928 kubelet[2618]: E0516 00:15:49.618875 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 16 00:15:50.033540 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:51894.service - OpenSSH per-connection server daemon (10.0.0.1:51894). May 16 00:15:50.082705 sshd[3881]: Accepted publickey for core from 10.0.0.1 port 51894 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:15:50.084686 sshd-session[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:50.089754 systemd-logind[1494]: New session 9 of user core. May 16 00:15:50.101380 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 00:15:50.228322 sshd[3883]: Connection closed by 10.0.0.1 port 51894 May 16 00:15:50.228722 sshd-session[3881]: pam_unix(sshd:session): session closed for user core May 16 00:15:50.233793 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:51894.service: Deactivated successfully. May 16 00:15:50.236001 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:15:50.236776 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit. May 16 00:15:50.237771 systemd-logind[1494]: Removed session 9. May 16 00:15:51.197792 containerd[1506]: time="2025-05-16T00:15:51.197612837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:15:51.197792 containerd[1506]: time="2025-05-16T00:15:51.197771155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:15:51.197792 containerd[1506]: time="2025-05-16T00:15:51.197841888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:15:51.198662 containerd[1506]: time="2025-05-16T00:15:51.198459009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:15:51.206350 containerd[1506]: time="2025-05-16T00:15:51.205979343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:15:51.206350 containerd[1506]: time="2025-05-16T00:15:51.206047692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:15:51.206350 containerd[1506]: time="2025-05-16T00:15:51.206067640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:15:51.206350 containerd[1506]: time="2025-05-16T00:15:51.206174251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:15:51.229357 systemd[1]: Started cri-containerd-6f0867d57462960df7d13260410ba3f62b1620adaa5a6eb21749dac37c5f9a37.scope - libcontainer container 6f0867d57462960df7d13260410ba3f62b1620adaa5a6eb21749dac37c5f9a37. May 16 00:15:51.230974 systemd[1]: Started cri-containerd-992d65ce68790179510ca39b67bbf3e2fc29778eb2be4b3365736924271ba792.scope - libcontainer container 992d65ce68790179510ca39b67bbf3e2fc29778eb2be4b3365736924271ba792. 
May 16 00:15:51.242586 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:15:51.244805 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:15:51.266769 containerd[1506]: time="2025-05-16T00:15:51.266717171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zdz95,Uid:25f6b0e6-1bc8-4fa4-b125-970b3e0ae996,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f0867d57462960df7d13260410ba3f62b1620adaa5a6eb21749dac37c5f9a37\"" May 16 00:15:51.268352 kubelet[2618]: E0516 00:15:51.268303 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:51.271285 containerd[1506]: time="2025-05-16T00:15:51.271250273Z" level=info msg="CreateContainer within sandbox \"6f0867d57462960df7d13260410ba3f62b1620adaa5a6eb21749dac37c5f9a37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:15:51.271959 containerd[1506]: time="2025-05-16T00:15:51.271921446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5j7bp,Uid:781610c1-ec1a-4e11-90e9-69cf7a2a2e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"992d65ce68790179510ca39b67bbf3e2fc29778eb2be4b3365736924271ba792\"" May 16 00:15:51.273335 kubelet[2618]: E0516 00:15:51.273286 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:51.275114 containerd[1506]: time="2025-05-16T00:15:51.275076034Z" level=info msg="CreateContainer within sandbox \"992d65ce68790179510ca39b67bbf3e2fc29778eb2be4b3365736924271ba792\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:15:51.302068 containerd[1506]: 
time="2025-05-16T00:15:51.302021519Z" level=info msg="CreateContainer within sandbox \"992d65ce68790179510ca39b67bbf3e2fc29778eb2be4b3365736924271ba792\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4978b6287b98d24ba41f861d5d296f95266296e853d7e264cac55d4655382844\"" May 16 00:15:51.302805 containerd[1506]: time="2025-05-16T00:15:51.302540987Z" level=info msg="StartContainer for \"4978b6287b98d24ba41f861d5d296f95266296e853d7e264cac55d4655382844\"" May 16 00:15:51.306856 containerd[1506]: time="2025-05-16T00:15:51.306808368Z" level=info msg="CreateContainer within sandbox \"6f0867d57462960df7d13260410ba3f62b1620adaa5a6eb21749dac37c5f9a37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59caba5a1259cced90ae8f09eb73dc00faff0d06836013fc7d3f616758efeada\"" May 16 00:15:51.308079 containerd[1506]: time="2025-05-16T00:15:51.307324159Z" level=info msg="StartContainer for \"59caba5a1259cced90ae8f09eb73dc00faff0d06836013fc7d3f616758efeada\"" May 16 00:15:51.329396 systemd[1]: Started cri-containerd-4978b6287b98d24ba41f861d5d296f95266296e853d7e264cac55d4655382844.scope - libcontainer container 4978b6287b98d24ba41f861d5d296f95266296e853d7e264cac55d4655382844. May 16 00:15:51.333002 systemd[1]: Started cri-containerd-59caba5a1259cced90ae8f09eb73dc00faff0d06836013fc7d3f616758efeada.scope - libcontainer container 59caba5a1259cced90ae8f09eb73dc00faff0d06836013fc7d3f616758efeada. 
May 16 00:15:51.370721 containerd[1506]: time="2025-05-16T00:15:51.370489876Z" level=info msg="StartContainer for \"59caba5a1259cced90ae8f09eb73dc00faff0d06836013fc7d3f616758efeada\" returns successfully" May 16 00:15:51.370721 containerd[1506]: time="2025-05-16T00:15:51.370495977Z" level=info msg="StartContainer for \"4978b6287b98d24ba41f861d5d296f95266296e853d7e264cac55d4655382844\" returns successfully" May 16 00:15:51.623868 kubelet[2618]: E0516 00:15:51.623793 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:51.625818 kubelet[2618]: E0516 00:15:51.625787 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:51.633626 kubelet[2618]: I0516 00:15:51.633556 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5j7bp" podStartSLOduration=24.633534604 podStartE2EDuration="24.633534604s" podCreationTimestamp="2025-05-16 00:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:15:51.632769394 +0000 UTC m=+31.198253359" watchObservedRunningTime="2025-05-16 00:15:51.633534604 +0000 UTC m=+31.199018570" May 16 00:15:51.643018 kubelet[2618]: I0516 00:15:51.642947 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zdz95" podStartSLOduration=24.642926121 podStartE2EDuration="24.642926121s" podCreationTimestamp="2025-05-16 00:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:15:51.642717969 +0000 UTC m=+31.208201934" watchObservedRunningTime="2025-05-16 00:15:51.642926121 +0000 UTC 
m=+31.208410086" May 16 00:15:52.204808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076217372.mount: Deactivated successfully. May 16 00:15:52.628538 kubelet[2618]: E0516 00:15:52.628461 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:52.629075 kubelet[2618]: E0516 00:15:52.628592 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:53.630710 kubelet[2618]: E0516 00:15:53.630549 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:53.630710 kubelet[2618]: E0516 00:15:53.630564 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:55.244051 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:53648.service - OpenSSH per-connection server daemon (10.0.0.1:53648). May 16 00:15:55.286207 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 53648 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:15:55.288252 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:55.292818 systemd-logind[1494]: New session 10 of user core. May 16 00:15:55.299350 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 00:15:55.445234 sshd[4071]: Connection closed by 10.0.0.1 port 53648 May 16 00:15:55.445616 sshd-session[4069]: pam_unix(sshd:session): session closed for user core May 16 00:15:55.450069 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:53648.service: Deactivated successfully. 
May 16 00:15:55.452784 systemd[1]: session-10.scope: Deactivated successfully. May 16 00:15:55.453577 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit. May 16 00:15:55.454509 systemd-logind[1494]: Removed session 10. May 16 00:16:00.459721 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:53658.service - OpenSSH per-connection server daemon (10.0.0.1:53658). May 16 00:16:00.496339 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 53658 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:00.498339 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:00.503037 systemd-logind[1494]: New session 11 of user core. May 16 00:16:00.516489 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 00:16:00.641556 sshd[4091]: Connection closed by 10.0.0.1 port 53658 May 16 00:16:00.642008 sshd-session[4089]: pam_unix(sshd:session): session closed for user core May 16 00:16:00.646677 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:53658.service: Deactivated successfully. May 16 00:16:00.648811 systemd[1]: session-11.scope: Deactivated successfully. May 16 00:16:00.649652 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit. May 16 00:16:00.650690 systemd-logind[1494]: Removed session 11. May 16 00:16:05.654460 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:51880.service - OpenSSH per-connection server daemon (10.0.0.1:51880). May 16 00:16:05.704703 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 51880 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:05.706610 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:05.711512 systemd-logind[1494]: New session 12 of user core. May 16 00:16:05.721364 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 16 00:16:05.838561 sshd[4107]: Connection closed by 10.0.0.1 port 51880 May 16 00:16:05.839147 sshd-session[4105]: pam_unix(sshd:session): session closed for user core May 16 00:16:05.852868 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:51880.service: Deactivated successfully. May 16 00:16:05.855921 systemd[1]: session-12.scope: Deactivated successfully. May 16 00:16:05.858248 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit. May 16 00:16:05.869902 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:51892.service - OpenSSH per-connection server daemon (10.0.0.1:51892). May 16 00:16:05.871163 systemd-logind[1494]: Removed session 12. May 16 00:16:05.903932 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 51892 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:05.905944 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:05.910749 systemd-logind[1494]: New session 13 of user core. May 16 00:16:05.918332 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 00:16:06.093449 sshd[4123]: Connection closed by 10.0.0.1 port 51892 May 16 00:16:06.094071 sshd-session[4120]: pam_unix(sshd:session): session closed for user core May 16 00:16:06.106387 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:51892.service: Deactivated successfully. May 16 00:16:06.108470 systemd[1]: session-13.scope: Deactivated successfully. May 16 00:16:06.112040 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit. May 16 00:16:06.118662 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:51902.service - OpenSSH per-connection server daemon (10.0.0.1:51902). May 16 00:16:06.120641 systemd-logind[1494]: Removed session 13. 
May 16 00:16:06.151452 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 51902 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:06.153042 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:06.157824 systemd-logind[1494]: New session 14 of user core. May 16 00:16:06.168502 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 00:16:06.284036 sshd[4136]: Connection closed by 10.0.0.1 port 51902 May 16 00:16:06.284398 sshd-session[4133]: pam_unix(sshd:session): session closed for user core May 16 00:16:06.288388 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:51902.service: Deactivated successfully. May 16 00:16:06.290452 systemd[1]: session-14.scope: Deactivated successfully. May 16 00:16:06.291194 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit. May 16 00:16:06.292006 systemd-logind[1494]: Removed session 14. May 16 00:16:11.296824 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:51908.service - OpenSSH per-connection server daemon (10.0.0.1:51908). May 16 00:16:11.337460 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 51908 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:11.339301 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:11.343718 systemd-logind[1494]: New session 15 of user core. May 16 00:16:11.359470 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 00:16:11.518316 sshd[4151]: Connection closed by 10.0.0.1 port 51908 May 16 00:16:11.518732 sshd-session[4149]: pam_unix(sshd:session): session closed for user core May 16 00:16:11.522954 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:51908.service: Deactivated successfully. May 16 00:16:11.525175 systemd[1]: session-15.scope: Deactivated successfully. May 16 00:16:11.525847 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit. 
May 16 00:16:11.526650 systemd-logind[1494]: Removed session 15. May 16 00:16:16.533020 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:51516.service - OpenSSH per-connection server daemon (10.0.0.1:51516). May 16 00:16:16.568852 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 51516 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:16.570450 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:16.574450 systemd-logind[1494]: New session 16 of user core. May 16 00:16:16.585364 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 00:16:16.704779 sshd[4166]: Connection closed by 10.0.0.1 port 51516 May 16 00:16:16.705198 sshd-session[4164]: pam_unix(sshd:session): session closed for user core May 16 00:16:16.709275 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:51516.service: Deactivated successfully. May 16 00:16:16.711136 systemd[1]: session-16.scope: Deactivated successfully. May 16 00:16:16.711874 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit. May 16 00:16:16.712922 systemd-logind[1494]: Removed session 16. May 16 00:16:21.719036 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:51532.service - OpenSSH per-connection server daemon (10.0.0.1:51532). May 16 00:16:21.754512 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 51532 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:21.756205 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:21.760378 systemd-logind[1494]: New session 17 of user core. May 16 00:16:21.770355 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 16 00:16:21.873632 sshd[4186]: Connection closed by 10.0.0.1 port 51532 May 16 00:16:21.874089 sshd-session[4184]: pam_unix(sshd:session): session closed for user core May 16 00:16:21.887181 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:51532.service: Deactivated successfully. May 16 00:16:21.889391 systemd[1]: session-17.scope: Deactivated successfully. May 16 00:16:21.891324 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit. May 16 00:16:21.896516 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:51538.service - OpenSSH per-connection server daemon (10.0.0.1:51538). May 16 00:16:21.897549 systemd-logind[1494]: Removed session 17. May 16 00:16:21.929234 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 51538 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:21.930798 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:21.935263 systemd-logind[1494]: New session 18 of user core. May 16 00:16:21.946488 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 00:16:22.247084 sshd[4202]: Connection closed by 10.0.0.1 port 51538 May 16 00:16:22.247700 sshd-session[4199]: pam_unix(sshd:session): session closed for user core May 16 00:16:22.256272 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:51538.service: Deactivated successfully. May 16 00:16:22.258168 systemd[1]: session-18.scope: Deactivated successfully. May 16 00:16:22.259882 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit. May 16 00:16:22.265491 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:51540.service - OpenSSH per-connection server daemon (10.0.0.1:51540). May 16 00:16:22.266556 systemd-logind[1494]: Removed session 18. 
May 16 00:16:22.301781 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 51540 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:22.303149 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:22.307622 systemd-logind[1494]: New session 19 of user core. May 16 00:16:22.317472 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 00:16:23.044867 sshd[4216]: Connection closed by 10.0.0.1 port 51540 May 16 00:16:23.045465 sshd-session[4213]: pam_unix(sshd:session): session closed for user core May 16 00:16:23.056450 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:51540.service: Deactivated successfully. May 16 00:16:23.058605 systemd[1]: session-19.scope: Deactivated successfully. May 16 00:16:23.059909 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit. May 16 00:16:23.072848 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:51548.service - OpenSSH per-connection server daemon (10.0.0.1:51548). May 16 00:16:23.074066 systemd-logind[1494]: Removed session 19. May 16 00:16:23.106060 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 51548 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:23.107556 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:23.112043 systemd-logind[1494]: New session 20 of user core. May 16 00:16:23.126341 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 00:16:23.355901 sshd[4241]: Connection closed by 10.0.0.1 port 51548 May 16 00:16:23.356364 sshd-session[4238]: pam_unix(sshd:session): session closed for user core May 16 00:16:23.365287 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:51548.service: Deactivated successfully. May 16 00:16:23.367658 systemd[1]: session-20.scope: Deactivated successfully. May 16 00:16:23.369128 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit. 
May 16 00:16:23.389680 systemd[1]: Started sshd@20-10.0.0.135:22-10.0.0.1:51558.service - OpenSSH per-connection server daemon (10.0.0.1:51558). May 16 00:16:23.391031 systemd-logind[1494]: Removed session 20. May 16 00:16:23.420991 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 51558 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:23.422798 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:23.428270 systemd-logind[1494]: New session 21 of user core. May 16 00:16:23.433382 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 00:16:23.551320 sshd[4254]: Connection closed by 10.0.0.1 port 51558 May 16 00:16:23.551707 sshd-session[4251]: pam_unix(sshd:session): session closed for user core May 16 00:16:23.555998 systemd[1]: sshd@20-10.0.0.135:22-10.0.0.1:51558.service: Deactivated successfully. May 16 00:16:23.558185 systemd[1]: session-21.scope: Deactivated successfully. May 16 00:16:23.558965 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit. May 16 00:16:23.559810 systemd-logind[1494]: Removed session 21. May 16 00:16:28.568599 systemd[1]: Started sshd@21-10.0.0.135:22-10.0.0.1:51468.service - OpenSSH per-connection server daemon (10.0.0.1:51468). May 16 00:16:28.604267 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 51468 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:28.605616 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:28.609722 systemd-logind[1494]: New session 22 of user core. May 16 00:16:28.619413 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 16 00:16:28.732650 sshd[4272]: Connection closed by 10.0.0.1 port 51468 May 16 00:16:28.733020 sshd-session[4270]: pam_unix(sshd:session): session closed for user core May 16 00:16:28.737211 systemd[1]: sshd@21-10.0.0.135:22-10.0.0.1:51468.service: Deactivated successfully. May 16 00:16:28.739401 systemd[1]: session-22.scope: Deactivated successfully. May 16 00:16:28.740072 systemd-logind[1494]: Session 22 logged out. Waiting for processes to exit. May 16 00:16:28.741111 systemd-logind[1494]: Removed session 22. May 16 00:16:33.747967 systemd[1]: Started sshd@22-10.0.0.135:22-10.0.0.1:49666.service - OpenSSH per-connection server daemon (10.0.0.1:49666). May 16 00:16:33.783231 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 49666 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:33.784622 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:33.788437 systemd-logind[1494]: New session 23 of user core. May 16 00:16:33.798422 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 00:16:33.903128 sshd[4289]: Connection closed by 10.0.0.1 port 49666 May 16 00:16:33.903503 sshd-session[4287]: pam_unix(sshd:session): session closed for user core May 16 00:16:33.907179 systemd[1]: sshd@22-10.0.0.135:22-10.0.0.1:49666.service: Deactivated successfully. May 16 00:16:33.909387 systemd[1]: session-23.scope: Deactivated successfully. May 16 00:16:33.910098 systemd-logind[1494]: Session 23 logged out. Waiting for processes to exit. May 16 00:16:33.910939 systemd-logind[1494]: Removed session 23. 
May 16 00:16:38.510625 kubelet[2618]: E0516 00:16:38.510564 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:16:38.925578 systemd[1]: Started sshd@23-10.0.0.135:22-10.0.0.1:49674.service - OpenSSH per-connection server daemon (10.0.0.1:49674). May 16 00:16:38.961147 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 49674 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:38.962879 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:38.967672 systemd-logind[1494]: New session 24 of user core. May 16 00:16:38.978521 systemd[1]: Started session-24.scope - Session 24 of User core. May 16 00:16:39.098518 sshd[4304]: Connection closed by 10.0.0.1 port 49674 May 16 00:16:39.098872 sshd-session[4302]: pam_unix(sshd:session): session closed for user core May 16 00:16:39.102474 systemd[1]: sshd@23-10.0.0.135:22-10.0.0.1:49674.service: Deactivated successfully. May 16 00:16:39.104575 systemd[1]: session-24.scope: Deactivated successfully. May 16 00:16:39.105393 systemd-logind[1494]: Session 24 logged out. Waiting for processes to exit. May 16 00:16:39.106391 systemd-logind[1494]: Removed session 24. May 16 00:16:44.111213 systemd[1]: Started sshd@24-10.0.0.135:22-10.0.0.1:34244.service - OpenSSH per-connection server daemon (10.0.0.1:34244). May 16 00:16:44.145747 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 34244 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:44.147278 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:44.150937 systemd-logind[1494]: New session 25 of user core. May 16 00:16:44.157361 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 16 00:16:44.270385 sshd[4319]: Connection closed by 10.0.0.1 port 34244 May 16 00:16:44.270841 sshd-session[4317]: pam_unix(sshd:session): session closed for user core May 16 00:16:44.283885 systemd[1]: sshd@24-10.0.0.135:22-10.0.0.1:34244.service: Deactivated successfully. May 16 00:16:44.285603 systemd[1]: session-25.scope: Deactivated successfully. May 16 00:16:44.287015 systemd-logind[1494]: Session 25 logged out. Waiting for processes to exit. May 16 00:16:44.293796 systemd[1]: Started sshd@25-10.0.0.135:22-10.0.0.1:34256.service - OpenSSH per-connection server daemon (10.0.0.1:34256). May 16 00:16:44.294827 systemd-logind[1494]: Removed session 25. May 16 00:16:44.327279 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 34256 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:44.328990 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:44.333496 systemd-logind[1494]: New session 26 of user core. May 16 00:16:44.343380 systemd[1]: Started session-26.scope - Session 26 of User core. May 16 00:16:45.887749 containerd[1506]: time="2025-05-16T00:16:45.887645960Z" level=info msg="StopContainer for \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\" with timeout 30 (s)" May 16 00:16:45.889065 containerd[1506]: time="2025-05-16T00:16:45.888893773Z" level=info msg="Stop container \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\" with signal terminated" May 16 00:16:45.906121 systemd[1]: cri-containerd-be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4.scope: Deactivated successfully. 
May 16 00:16:45.918812 containerd[1506]: time="2025-05-16T00:16:45.918756229Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:16:45.921508 containerd[1506]: time="2025-05-16T00:16:45.921486234Z" level=info msg="StopContainer for \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\" with timeout 2 (s)" May 16 00:16:45.921829 containerd[1506]: time="2025-05-16T00:16:45.921813941Z" level=info msg="Stop container \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\" with signal terminated" May 16 00:16:45.927685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4-rootfs.mount: Deactivated successfully. May 16 00:16:45.930016 systemd-networkd[1424]: lxc_health: Link DOWN May 16 00:16:45.930026 systemd-networkd[1424]: lxc_health: Lost carrier May 16 00:16:45.935877 containerd[1506]: time="2025-05-16T00:16:45.935819275Z" level=info msg="shim disconnected" id=be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4 namespace=k8s.io May 16 00:16:45.936009 containerd[1506]: time="2025-05-16T00:16:45.935876765Z" level=warning msg="cleaning up after shim disconnected" id=be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4 namespace=k8s.io May 16 00:16:45.936009 containerd[1506]: time="2025-05-16T00:16:45.935896242Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:16:45.947640 systemd[1]: cri-containerd-7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514.scope: Deactivated successfully. May 16 00:16:45.948007 systemd[1]: cri-containerd-7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514.scope: Consumed 6.975s CPU time, 126.2M memory peak, 224K read from disk, 13.3M written to disk. 
May 16 00:16:45.954823 containerd[1506]: time="2025-05-16T00:16:45.954773863Z" level=info msg="StopContainer for \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\" returns successfully" May 16 00:16:45.958726 containerd[1506]: time="2025-05-16T00:16:45.958690325Z" level=info msg="StopPodSandbox for \"a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903\"" May 16 00:16:45.968642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514-rootfs.mount: Deactivated successfully. May 16 00:16:45.972570 containerd[1506]: time="2025-05-16T00:16:45.958730742Z" level=info msg="Container to stop \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:16:45.974402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903-shm.mount: Deactivated successfully. May 16 00:16:45.975648 containerd[1506]: time="2025-05-16T00:16:45.975585379Z" level=info msg="shim disconnected" id=7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514 namespace=k8s.io May 16 00:16:45.975648 containerd[1506]: time="2025-05-16T00:16:45.975639883Z" level=warning msg="cleaning up after shim disconnected" id=7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514 namespace=k8s.io May 16 00:16:45.975733 containerd[1506]: time="2025-05-16T00:16:45.975650234Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:16:45.980409 systemd[1]: cri-containerd-a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903.scope: Deactivated successfully. May 16 00:16:46.002128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903-rootfs.mount: Deactivated successfully. 
May 16 00:16:46.051025 containerd[1506]: time="2025-05-16T00:16:46.050967628Z" level=info msg="StopContainer for \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\" returns successfully" May 16 00:16:46.051576 containerd[1506]: time="2025-05-16T00:16:46.051546534Z" level=info msg="StopPodSandbox for \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\"" May 16 00:16:46.051661 containerd[1506]: time="2025-05-16T00:16:46.051595748Z" level=info msg="Container to stop \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:16:46.051661 containerd[1506]: time="2025-05-16T00:16:46.051627809Z" level=info msg="Container to stop \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:16:46.051661 containerd[1506]: time="2025-05-16T00:16:46.051636165Z" level=info msg="Container to stop \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:16:46.051661 containerd[1506]: time="2025-05-16T00:16:46.051644390Z" level=info msg="Container to stop \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:16:46.051661 containerd[1506]: time="2025-05-16T00:16:46.051653557Z" level=info msg="Container to stop \"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:16:46.054641 containerd[1506]: time="2025-05-16T00:16:46.054562803Z" level=info msg="shim disconnected" id=a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903 namespace=k8s.io May 16 00:16:46.054641 containerd[1506]: time="2025-05-16T00:16:46.054606386Z" level=warning msg="cleaning up after shim disconnected" 
id=a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903 namespace=k8s.io May 16 00:16:46.054641 containerd[1506]: time="2025-05-16T00:16:46.054617538Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:16:46.059083 systemd[1]: cri-containerd-1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9.scope: Deactivated successfully. May 16 00:16:46.069083 containerd[1506]: time="2025-05-16T00:16:46.069052720Z" level=info msg="TearDown network for sandbox \"a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903\" successfully" May 16 00:16:46.069083 containerd[1506]: time="2025-05-16T00:16:46.069073740Z" level=info msg="StopPodSandbox for \"a15515d957ba4b7481985e05f6135848587d68807a1cfc6e634815fa42d70903\" returns successfully" May 16 00:16:46.084007 containerd[1506]: time="2025-05-16T00:16:46.083803856Z" level=info msg="shim disconnected" id=1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9 namespace=k8s.io May 16 00:16:46.084007 containerd[1506]: time="2025-05-16T00:16:46.083857639Z" level=warning msg="cleaning up after shim disconnected" id=1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9 namespace=k8s.io May 16 00:16:46.084007 containerd[1506]: time="2025-05-16T00:16:46.083867137Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:16:46.097161 containerd[1506]: time="2025-05-16T00:16:46.097125954Z" level=info msg="TearDown network for sandbox \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" successfully" May 16 00:16:46.097161 containerd[1506]: time="2025-05-16T00:16:46.097149258Z" level=info msg="StopPodSandbox for \"1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9\" returns successfully" May 16 00:16:46.227722 kubelet[2618]: I0516 00:16:46.227568 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-lib-modules\") pod 
\"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.227722 kubelet[2618]: I0516 00:16:46.227622 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-host-proc-sys-kernel\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.227722 kubelet[2618]: I0516 00:16:46.227645 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-xtables-lock\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.227722 kubelet[2618]: I0516 00:16:46.227664 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-run\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.227722 kubelet[2618]: I0516 00:16:46.227681 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-hostproc\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.227722 kubelet[2618]: I0516 00:16:46.227706 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvhn7\" (UniqueName: \"kubernetes.io/projected/34120b60-493f-4364-85eb-7f0e69e4dd3d-kube-api-access-nvhn7\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.228439 kubelet[2618]: I0516 00:16:46.227723 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-bpf-maps\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.228439 kubelet[2618]: I0516 00:16:46.227729 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:16:46.228439 kubelet[2618]: I0516 00:16:46.227749 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:16:46.228439 kubelet[2618]: I0516 00:16:46.227781 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-hostproc" (OuterVolumeSpecName: "hostproc") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:16:46.228439 kubelet[2618]: I0516 00:16:46.227741 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b71100fa-0210-4093-b377-75bc2bdb1e2e-cilium-config-path\") pod \"b71100fa-0210-4093-b377-75bc2bdb1e2e\" (UID: \"b71100fa-0210-4093-b377-75bc2bdb1e2e\") " May 16 00:16:46.228618 kubelet[2618]: I0516 00:16:46.227843 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cni-path\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.228618 kubelet[2618]: I0516 00:16:46.227864 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-host-proc-sys-net\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.228618 kubelet[2618]: I0516 00:16:46.227896 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-config-path\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.228618 kubelet[2618]: I0516 00:16:46.227911 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-etc-cni-netd\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.228618 kubelet[2618]: I0516 00:16:46.227929 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqvbn\" (UniqueName: 
\"kubernetes.io/projected/b71100fa-0210-4093-b377-75bc2bdb1e2e-kube-api-access-nqvbn\") pod \"b71100fa-0210-4093-b377-75bc2bdb1e2e\" (UID: \"b71100fa-0210-4093-b377-75bc2bdb1e2e\") " May 16 00:16:46.228618 kubelet[2618]: I0516 00:16:46.227943 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-cgroup\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.228802 kubelet[2618]: I0516 00:16:46.227958 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34120b60-493f-4364-85eb-7f0e69e4dd3d-clustermesh-secrets\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.228802 kubelet[2618]: I0516 00:16:46.227973 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34120b60-493f-4364-85eb-7f0e69e4dd3d-hubble-tls\") pod \"34120b60-493f-4364-85eb-7f0e69e4dd3d\" (UID: \"34120b60-493f-4364-85eb-7f0e69e4dd3d\") " May 16 00:16:46.228802 kubelet[2618]: I0516 00:16:46.228025 2618 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.228802 kubelet[2618]: I0516 00:16:46.228035 2618 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.228802 kubelet[2618]: I0516 00:16:46.228043 2618 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.231526 kubelet[2618]: I0516 00:16:46.227798 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:16:46.231677 kubelet[2618]: I0516 00:16:46.227809 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:16:46.231726 kubelet[2618]: I0516 00:16:46.228560 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:16:46.231772 kubelet[2618]: I0516 00:16:46.231437 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34120b60-493f-4364-85eb-7f0e69e4dd3d-kube-api-access-nvhn7" (OuterVolumeSpecName: "kube-api-access-nvhn7") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "kube-api-access-nvhn7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:16:46.231827 kubelet[2618]: I0516 00:16:46.231464 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:16:46.232113 kubelet[2618]: I0516 00:16:46.231481 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:16:46.232113 kubelet[2618]: I0516 00:16:46.231499 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cni-path" (OuterVolumeSpecName: "cni-path") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:16:46.232113 kubelet[2618]: I0516 00:16:46.231751 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71100fa-0210-4093-b377-75bc2bdb1e2e-kube-api-access-nqvbn" (OuterVolumeSpecName: "kube-api-access-nqvbn") pod "b71100fa-0210-4093-b377-75bc2bdb1e2e" (UID: "b71100fa-0210-4093-b377-75bc2bdb1e2e"). InnerVolumeSpecName "kube-api-access-nqvbn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:16:46.232813 kubelet[2618]: I0516 00:16:46.232793 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:16:46.233176 kubelet[2618]: I0516 00:16:46.233146 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:16:46.233468 kubelet[2618]: I0516 00:16:46.233435 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34120b60-493f-4364-85eb-7f0e69e4dd3d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:16:46.234501 kubelet[2618]: I0516 00:16:46.234479 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34120b60-493f-4364-85eb-7f0e69e4dd3d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "34120b60-493f-4364-85eb-7f0e69e4dd3d" (UID: "34120b60-493f-4364-85eb-7f0e69e4dd3d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:16:46.234565 kubelet[2618]: I0516 00:16:46.234543 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71100fa-0210-4093-b377-75bc2bdb1e2e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b71100fa-0210-4093-b377-75bc2bdb1e2e" (UID: "b71100fa-0210-4093-b377-75bc2bdb1e2e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:16:46.328240 kubelet[2618]: I0516 00:16:46.328184 2618 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328240 kubelet[2618]: I0516 00:16:46.328212 2618 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328240 kubelet[2618]: I0516 00:16:46.328234 2618 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nvhn7\" (UniqueName: \"kubernetes.io/projected/34120b60-493f-4364-85eb-7f0e69e4dd3d-kube-api-access-nvhn7\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328240 kubelet[2618]: I0516 00:16:46.328244 2618 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328240 kubelet[2618]: I0516 00:16:46.328252 2618 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b71100fa-0210-4093-b377-75bc2bdb1e2e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328485 kubelet[2618]: I0516 00:16:46.328261 2618 reconciler_common.go:299] "Volume detached for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328485 kubelet[2618]: I0516 00:16:46.328271 2618 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328485 kubelet[2618]: I0516 00:16:46.328280 2618 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328485 kubelet[2618]: I0516 00:16:46.328288 2618 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328485 kubelet[2618]: I0516 00:16:46.328296 2618 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34120b60-493f-4364-85eb-7f0e69e4dd3d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328485 kubelet[2618]: I0516 00:16:46.328303 2618 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nqvbn\" (UniqueName: \"kubernetes.io/projected/b71100fa-0210-4093-b377-75bc2bdb1e2e-kube-api-access-nqvbn\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328485 kubelet[2618]: I0516 00:16:46.328311 2618 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34120b60-493f-4364-85eb-7f0e69e4dd3d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.328485 kubelet[2618]: I0516 00:16:46.328319 2618 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/34120b60-493f-4364-85eb-7f0e69e4dd3d-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:16:46.518650 systemd[1]: Removed slice kubepods-burstable-pod34120b60_493f_4364_85eb_7f0e69e4dd3d.slice - libcontainer container kubepods-burstable-pod34120b60_493f_4364_85eb_7f0e69e4dd3d.slice. May 16 00:16:46.518775 systemd[1]: kubepods-burstable-pod34120b60_493f_4364_85eb_7f0e69e4dd3d.slice: Consumed 7.083s CPU time, 126.5M memory peak, 244K read from disk, 13.3M written to disk. May 16 00:16:46.519947 systemd[1]: Removed slice kubepods-besteffort-podb71100fa_0210_4093_b377_75bc2bdb1e2e.slice - libcontainer container kubepods-besteffort-podb71100fa_0210_4093_b377_75bc2bdb1e2e.slice. May 16 00:16:46.738284 kubelet[2618]: I0516 00:16:46.738247 2618 scope.go:117] "RemoveContainer" containerID="be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4" May 16 00:16:46.745365 containerd[1506]: time="2025-05-16T00:16:46.745309709Z" level=info msg="RemoveContainer for \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\"" May 16 00:16:46.767338 containerd[1506]: time="2025-05-16T00:16:46.767293315Z" level=info msg="RemoveContainer for \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\" returns successfully" May 16 00:16:46.767604 kubelet[2618]: I0516 00:16:46.767578 2618 scope.go:117] "RemoveContainer" containerID="be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4" May 16 00:16:46.767951 containerd[1506]: time="2025-05-16T00:16:46.767883612Z" level=error msg="ContainerStatus for \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\": not found" May 16 00:16:46.773647 kubelet[2618]: E0516 00:16:46.773490 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\": not found" containerID="be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4" May 16 00:16:46.773647 kubelet[2618]: I0516 00:16:46.773522 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4"} err="failed to get container status \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\": rpc error: code = NotFound desc = an error occurred when try to find container \"be18ade58cf26e6c63165ed3c36c6c2432714741eb70caab412c72362cb90ca4\": not found" May 16 00:16:46.773647 kubelet[2618]: I0516 00:16:46.773592 2618 scope.go:117] "RemoveContainer" containerID="7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514" May 16 00:16:46.774900 containerd[1506]: time="2025-05-16T00:16:46.774856046Z" level=info msg="RemoveContainer for \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\"" May 16 00:16:46.786450 containerd[1506]: time="2025-05-16T00:16:46.786411550Z" level=info msg="RemoveContainer for \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\" returns successfully" May 16 00:16:46.786627 kubelet[2618]: I0516 00:16:46.786605 2618 scope.go:117] "RemoveContainer" containerID="363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186" May 16 00:16:46.787348 containerd[1506]: time="2025-05-16T00:16:46.787323551Z" level=info msg="RemoveContainer for \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\"" May 16 00:16:46.790394 containerd[1506]: time="2025-05-16T00:16:46.790365359Z" level=info msg="RemoveContainer for \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\" returns successfully" May 16 00:16:46.790506 kubelet[2618]: I0516 00:16:46.790487 2618 scope.go:117] "RemoveContainer" 
containerID="291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9" May 16 00:16:46.791490 containerd[1506]: time="2025-05-16T00:16:46.791284644Z" level=info msg="RemoveContainer for \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\"" May 16 00:16:46.794749 containerd[1506]: time="2025-05-16T00:16:46.794712690Z" level=info msg="RemoveContainer for \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\" returns successfully" May 16 00:16:46.794884 kubelet[2618]: I0516 00:16:46.794845 2618 scope.go:117] "RemoveContainer" containerID="b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633" May 16 00:16:46.795530 containerd[1506]: time="2025-05-16T00:16:46.795509932Z" level=info msg="RemoveContainer for \"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\"" May 16 00:16:46.798591 containerd[1506]: time="2025-05-16T00:16:46.798562741Z" level=info msg="RemoveContainer for \"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\" returns successfully" May 16 00:16:46.798695 kubelet[2618]: I0516 00:16:46.798678 2618 scope.go:117] "RemoveContainer" containerID="5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c" May 16 00:16:46.799346 containerd[1506]: time="2025-05-16T00:16:46.799325057Z" level=info msg="RemoveContainer for \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\"" May 16 00:16:46.802335 containerd[1506]: time="2025-05-16T00:16:46.802311259Z" level=info msg="RemoveContainer for \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\" returns successfully" May 16 00:16:46.802439 kubelet[2618]: I0516 00:16:46.802424 2618 scope.go:117] "RemoveContainer" containerID="7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514" May 16 00:16:46.802604 containerd[1506]: time="2025-05-16T00:16:46.802575113Z" level=error msg="ContainerStatus for \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\": not found" May 16 00:16:46.802735 kubelet[2618]: E0516 00:16:46.802702 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\": not found" containerID="7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514" May 16 00:16:46.802783 kubelet[2618]: I0516 00:16:46.802739 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514"} err="failed to get container status \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c480d98c5a8854f5f670974c3c196a735c8a41aa70bf3422373ff2b1b86e514\": not found" May 16 00:16:46.802783 kubelet[2618]: I0516 00:16:46.802766 2618 scope.go:117] "RemoveContainer" containerID="363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186" May 16 00:16:46.802967 containerd[1506]: time="2025-05-16T00:16:46.802942734Z" level=error msg="ContainerStatus for \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\": not found" May 16 00:16:46.803076 kubelet[2618]: E0516 00:16:46.803057 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\": not found" containerID="363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186" May 16 00:16:46.803105 kubelet[2618]: I0516 00:16:46.803082 2618 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186"} err="failed to get container status \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\": rpc error: code = NotFound desc = an error occurred when try to find container \"363ed005e546c0846c61587ecbcdbc7c542b643c5ad4aad5d499fae438cf0186\": not found" May 16 00:16:46.803134 kubelet[2618]: I0516 00:16:46.803102 2618 scope.go:117] "RemoveContainer" containerID="291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9" May 16 00:16:46.803278 containerd[1506]: time="2025-05-16T00:16:46.803250412Z" level=error msg="ContainerStatus for \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\": not found" May 16 00:16:46.803355 kubelet[2618]: E0516 00:16:46.803343 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\": not found" containerID="291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9" May 16 00:16:46.803420 kubelet[2618]: I0516 00:16:46.803362 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9"} err="failed to get container status \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\": rpc error: code = NotFound desc = an error occurred when try to find container \"291f7f87929099e55b8cfc7dc072c82775c87c58742cfeaaae842a96922d3df9\": not found" May 16 00:16:46.803420 kubelet[2618]: I0516 00:16:46.803376 2618 scope.go:117] "RemoveContainer" 
containerID="b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633" May 16 00:16:46.803541 containerd[1506]: time="2025-05-16T00:16:46.803512162Z" level=error msg="ContainerStatus for \"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\": not found" May 16 00:16:46.803639 kubelet[2618]: E0516 00:16:46.803621 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\": not found" containerID="b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633" May 16 00:16:46.803680 kubelet[2618]: I0516 00:16:46.803643 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633"} err="failed to get container status \"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2d3d3b49bf1a441484e9522f6169f3654f0328fc24922c762ffa1299eead633\": not found" May 16 00:16:46.803680 kubelet[2618]: I0516 00:16:46.803657 2618 scope.go:117] "RemoveContainer" containerID="5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c" May 16 00:16:46.803825 containerd[1506]: time="2025-05-16T00:16:46.803802727Z" level=error msg="ContainerStatus for \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\": not found" May 16 00:16:46.803912 kubelet[2618]: E0516 00:16:46.803887 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\": not found" containerID="5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c" May 16 00:16:46.803959 kubelet[2618]: I0516 00:16:46.803914 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c"} err="failed to get container status \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d5f0124290ef28461fd5eb14ef2a93c8fad6f8ca2156db1b053844f42eb586c\": not found" May 16 00:16:46.896921 systemd[1]: var-lib-kubelet-pods-b71100fa\x2d0210\x2d4093\x2db377\x2d75bc2bdb1e2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqvbn.mount: Deactivated successfully. May 16 00:16:46.897029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9-rootfs.mount: Deactivated successfully. May 16 00:16:46.897107 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1328e7627ec24de76b9c59b4e36c032e61c64e8778ec7729a82e49b805781bf9-shm.mount: Deactivated successfully. May 16 00:16:46.897193 systemd[1]: var-lib-kubelet-pods-34120b60\x2d493f\x2d4364\x2d85eb\x2d7f0e69e4dd3d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnvhn7.mount: Deactivated successfully. May 16 00:16:46.897297 systemd[1]: var-lib-kubelet-pods-34120b60\x2d493f\x2d4364\x2d85eb\x2d7f0e69e4dd3d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:16:46.897379 systemd[1]: var-lib-kubelet-pods-34120b60\x2d493f\x2d4364\x2d85eb\x2d7f0e69e4dd3d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 16 00:16:47.716405 sshd[4334]: Connection closed by 10.0.0.1 port 34256 May 16 00:16:47.716982 sshd-session[4331]: pam_unix(sshd:session): session closed for user core May 16 00:16:47.731268 systemd[1]: sshd@25-10.0.0.135:22-10.0.0.1:34256.service: Deactivated successfully. May 16 00:16:47.733524 systemd[1]: session-26.scope: Deactivated successfully. May 16 00:16:47.734992 systemd-logind[1494]: Session 26 logged out. Waiting for processes to exit. May 16 00:16:47.742557 systemd[1]: Started sshd@26-10.0.0.135:22-10.0.0.1:34258.service - OpenSSH per-connection server daemon (10.0.0.1:34258). May 16 00:16:47.744196 systemd-logind[1494]: Removed session 26. May 16 00:16:47.779130 sshd[4495]: Accepted publickey for core from 10.0.0.1 port 34258 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:16:47.780875 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:47.785367 systemd-logind[1494]: New session 27 of user core. May 16 00:16:47.800386 systemd[1]: Started session-27.scope - Session 27 of User core. May 16 00:16:48.134170 sshd[4498]: Connection closed by 10.0.0.1 port 34258 May 16 00:16:48.136731 sshd-session[4495]: pam_unix(sshd:session): session closed for user core May 16 00:16:48.144182 kubelet[2618]: I0516 00:16:48.144138 2618 memory_manager.go:355] "RemoveStaleState removing state" podUID="b71100fa-0210-4093-b377-75bc2bdb1e2e" containerName="cilium-operator" May 16 00:16:48.144182 kubelet[2618]: I0516 00:16:48.144171 2618 memory_manager.go:355] "RemoveStaleState removing state" podUID="34120b60-493f-4364-85eb-7f0e69e4dd3d" containerName="cilium-agent" May 16 00:16:48.151864 systemd[1]: sshd@26-10.0.0.135:22-10.0.0.1:34258.service: Deactivated successfully. May 16 00:16:48.156041 systemd[1]: session-27.scope: Deactivated successfully. May 16 00:16:48.159829 systemd-logind[1494]: Session 27 logged out. Waiting for processes to exit. 
May 16 00:16:48.177537 systemd[1]: Started sshd@27-10.0.0.135:22-10.0.0.1:34274.service - OpenSSH per-connection server daemon (10.0.0.1:34274).
May 16 00:16:48.181457 systemd-logind[1494]: Removed session 27.
May 16 00:16:48.186654 systemd[1]: Created slice kubepods-burstable-podf243dbab_9ca6_42f3_95d8_c69a0c90dc2e.slice - libcontainer container kubepods-burstable-podf243dbab_9ca6_42f3_95d8_c69a0c90dc2e.slice.
May 16 00:16:48.228373 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 34274 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:16:48.230041 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:16:48.234186 systemd-logind[1494]: New session 28 of user core.
May 16 00:16:48.238805 kubelet[2618]: I0516 00:16:48.238769 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-cilium-run\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.238805 kubelet[2618]: I0516 00:16:48.238800 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-cni-path\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.238898 kubelet[2618]: I0516 00:16:48.238819 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-etc-cni-netd\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.238898 kubelet[2618]: I0516 00:16:48.238834 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-bpf-maps\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.238898 kubelet[2618]: I0516 00:16:48.238849 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-cilium-cgroup\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.238898 kubelet[2618]: I0516 00:16:48.238866 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-hubble-tls\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.238898 kubelet[2618]: I0516 00:16:48.238883 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-hostproc\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.238898 kubelet[2618]: I0516 00:16:48.238898 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-host-proc-sys-net\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.239110 kubelet[2618]: I0516 00:16:48.238924 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghms\" (UniqueName: \"kubernetes.io/projected/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-kube-api-access-sghms\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.239110 kubelet[2618]: I0516 00:16:48.238941 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-host-proc-sys-kernel\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.239110 kubelet[2618]: I0516 00:16:48.238955 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-cilium-config-path\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.239110 kubelet[2618]: I0516 00:16:48.238969 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-cilium-ipsec-secrets\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.239110 kubelet[2618]: I0516 00:16:48.239023 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-lib-modules\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.239273 kubelet[2618]: I0516 00:16:48.239058 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-xtables-lock\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.239273 kubelet[2618]: I0516 00:16:48.239072 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f243dbab-9ca6-42f3-95d8-c69a0c90dc2e-clustermesh-secrets\") pod \"cilium-w496f\" (UID: \"f243dbab-9ca6-42f3-95d8-c69a0c90dc2e\") " pod="kube-system/cilium-w496f"
May 16 00:16:48.241347 systemd[1]: Started session-28.scope - Session 28 of User core.
May 16 00:16:48.291147 sshd[4512]: Connection closed by 10.0.0.1 port 34274
May 16 00:16:48.291546 sshd-session[4509]: pam_unix(sshd:session): session closed for user core
May 16 00:16:48.305012 systemd[1]: sshd@27-10.0.0.135:22-10.0.0.1:34274.service: Deactivated successfully.
May 16 00:16:48.306724 systemd[1]: session-28.scope: Deactivated successfully.
May 16 00:16:48.308462 systemd-logind[1494]: Session 28 logged out. Waiting for processes to exit.
May 16 00:16:48.316451 systemd[1]: Started sshd@28-10.0.0.135:22-10.0.0.1:34276.service - OpenSSH per-connection server daemon (10.0.0.1:34276).
May 16 00:16:48.317408 systemd-logind[1494]: Removed session 28.
May 16 00:16:48.354048 sshd[4519]: Accepted publickey for core from 10.0.0.1 port 34276 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:16:48.355769 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:16:48.364495 systemd-logind[1494]: New session 29 of user core.
May 16 00:16:48.374403 systemd[1]: Started session-29.scope - Session 29 of User core.
May 16 00:16:48.493862 kubelet[2618]: E0516 00:16:48.493670 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:48.494743 containerd[1506]: time="2025-05-16T00:16:48.494295865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w496f,Uid:f243dbab-9ca6-42f3-95d8-c69a0c90dc2e,Namespace:kube-system,Attempt:0,}"
May 16 00:16:48.513247 kubelet[2618]: I0516 00:16:48.512981 2618 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34120b60-493f-4364-85eb-7f0e69e4dd3d" path="/var/lib/kubelet/pods/34120b60-493f-4364-85eb-7f0e69e4dd3d/volumes"
May 16 00:16:48.513811 kubelet[2618]: I0516 00:16:48.513791 2618 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71100fa-0210-4093-b377-75bc2bdb1e2e" path="/var/lib/kubelet/pods/b71100fa-0210-4093-b377-75bc2bdb1e2e/volumes"
May 16 00:16:48.515133 containerd[1506]: time="2025-05-16T00:16:48.515039299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:16:48.515196 containerd[1506]: time="2025-05-16T00:16:48.515141595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:16:48.515196 containerd[1506]: time="2025-05-16T00:16:48.515164187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:16:48.515324 containerd[1506]: time="2025-05-16T00:16:48.515259559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:16:48.536351 systemd[1]: Started cri-containerd-e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb.scope - libcontainer container e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb.
May 16 00:16:48.556057 containerd[1506]: time="2025-05-16T00:16:48.556020451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w496f,Uid:f243dbab-9ca6-42f3-95d8-c69a0c90dc2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\""
May 16 00:16:48.556958 kubelet[2618]: E0516 00:16:48.556932 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:48.559135 containerd[1506]: time="2025-05-16T00:16:48.559024343Z" level=info msg="CreateContainer within sandbox \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 00:16:48.572562 containerd[1506]: time="2025-05-16T00:16:48.572531110Z" level=info msg="CreateContainer within sandbox \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"77ec107e009f0979c2020a7c5e219d5b8727ff834134e8ca549c9232a00365c1\""
May 16 00:16:48.573511 containerd[1506]: time="2025-05-16T00:16:48.572880447Z" level=info msg="StartContainer for \"77ec107e009f0979c2020a7c5e219d5b8727ff834134e8ca549c9232a00365c1\""
May 16 00:16:48.600345 systemd[1]: Started cri-containerd-77ec107e009f0979c2020a7c5e219d5b8727ff834134e8ca549c9232a00365c1.scope - libcontainer container 77ec107e009f0979c2020a7c5e219d5b8727ff834134e8ca549c9232a00365c1.
May 16 00:16:48.626277 containerd[1506]: time="2025-05-16T00:16:48.626243380Z" level=info msg="StartContainer for \"77ec107e009f0979c2020a7c5e219d5b8727ff834134e8ca549c9232a00365c1\" returns successfully"
May 16 00:16:48.634706 systemd[1]: cri-containerd-77ec107e009f0979c2020a7c5e219d5b8727ff834134e8ca549c9232a00365c1.scope: Deactivated successfully.
May 16 00:16:48.664153 containerd[1506]: time="2025-05-16T00:16:48.664095843Z" level=info msg="shim disconnected" id=77ec107e009f0979c2020a7c5e219d5b8727ff834134e8ca549c9232a00365c1 namespace=k8s.io
May 16 00:16:48.664153 containerd[1506]: time="2025-05-16T00:16:48.664145619Z" level=warning msg="cleaning up after shim disconnected" id=77ec107e009f0979c2020a7c5e219d5b8727ff834134e8ca549c9232a00365c1 namespace=k8s.io
May 16 00:16:48.664153 containerd[1506]: time="2025-05-16T00:16:48.664153674Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:16:48.749549 kubelet[2618]: E0516 00:16:48.749427 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:48.750949 containerd[1506]: time="2025-05-16T00:16:48.750875249Z" level=info msg="CreateContainer within sandbox \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 00:16:48.763930 containerd[1506]: time="2025-05-16T00:16:48.763875349Z" level=info msg="CreateContainer within sandbox \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc7e613ee799f4d3bbb58415bdee8ced84e88ee23239c35eacfeba8fd98fe5a5\""
May 16 00:16:48.764733 containerd[1506]: time="2025-05-16T00:16:48.764623787Z" level=info msg="StartContainer for \"cc7e613ee799f4d3bbb58415bdee8ced84e88ee23239c35eacfeba8fd98fe5a5\""
May 16 00:16:48.790357 systemd[1]: Started cri-containerd-cc7e613ee799f4d3bbb58415bdee8ced84e88ee23239c35eacfeba8fd98fe5a5.scope - libcontainer container cc7e613ee799f4d3bbb58415bdee8ced84e88ee23239c35eacfeba8fd98fe5a5.
May 16 00:16:48.815880 containerd[1506]: time="2025-05-16T00:16:48.815829145Z" level=info msg="StartContainer for \"cc7e613ee799f4d3bbb58415bdee8ced84e88ee23239c35eacfeba8fd98fe5a5\" returns successfully"
May 16 00:16:48.822590 systemd[1]: cri-containerd-cc7e613ee799f4d3bbb58415bdee8ced84e88ee23239c35eacfeba8fd98fe5a5.scope: Deactivated successfully.
May 16 00:16:48.845825 containerd[1506]: time="2025-05-16T00:16:48.845749466Z" level=info msg="shim disconnected" id=cc7e613ee799f4d3bbb58415bdee8ced84e88ee23239c35eacfeba8fd98fe5a5 namespace=k8s.io
May 16 00:16:48.845825 containerd[1506]: time="2025-05-16T00:16:48.845811284Z" level=warning msg="cleaning up after shim disconnected" id=cc7e613ee799f4d3bbb58415bdee8ced84e88ee23239c35eacfeba8fd98fe5a5 namespace=k8s.io
May 16 00:16:48.845825 containerd[1506]: time="2025-05-16T00:16:48.845820070Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:16:49.752734 kubelet[2618]: E0516 00:16:49.752702 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:49.755999 containerd[1506]: time="2025-05-16T00:16:49.755959044Z" level=info msg="CreateContainer within sandbox \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 00:16:49.774096 containerd[1506]: time="2025-05-16T00:16:49.774038902Z" level=info msg="CreateContainer within sandbox \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30820df396d8236a9ece2d720ece4139094f1d630f713d1c5318b61735082d36\""
May 16 00:16:49.774750 containerd[1506]: time="2025-05-16T00:16:49.774706665Z" level=info msg="StartContainer for \"30820df396d8236a9ece2d720ece4139094f1d630f713d1c5318b61735082d36\""
May 16 00:16:49.813352 systemd[1]: Started cri-containerd-30820df396d8236a9ece2d720ece4139094f1d630f713d1c5318b61735082d36.scope - libcontainer container 30820df396d8236a9ece2d720ece4139094f1d630f713d1c5318b61735082d36.
May 16 00:16:49.841723 containerd[1506]: time="2025-05-16T00:16:49.841681404Z" level=info msg="StartContainer for \"30820df396d8236a9ece2d720ece4139094f1d630f713d1c5318b61735082d36\" returns successfully"
May 16 00:16:49.843747 systemd[1]: cri-containerd-30820df396d8236a9ece2d720ece4139094f1d630f713d1c5318b61735082d36.scope: Deactivated successfully.
May 16 00:16:49.869825 containerd[1506]: time="2025-05-16T00:16:49.869761863Z" level=info msg="shim disconnected" id=30820df396d8236a9ece2d720ece4139094f1d630f713d1c5318b61735082d36 namespace=k8s.io
May 16 00:16:49.869825 containerd[1506]: time="2025-05-16T00:16:49.869814603Z" level=warning msg="cleaning up after shim disconnected" id=30820df396d8236a9ece2d720ece4139094f1d630f713d1c5318b61735082d36 namespace=k8s.io
May 16 00:16:49.869825 containerd[1506]: time="2025-05-16T00:16:49.869824843Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:16:50.345194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30820df396d8236a9ece2d720ece4139094f1d630f713d1c5318b61735082d36-rootfs.mount: Deactivated successfully.
May 16 00:16:50.565593 kubelet[2618]: E0516 00:16:50.565520 2618 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 00:16:50.756714 kubelet[2618]: E0516 00:16:50.756588 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:50.758501 containerd[1506]: time="2025-05-16T00:16:50.758339548Z" level=info msg="CreateContainer within sandbox \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 00:16:50.922759 containerd[1506]: time="2025-05-16T00:16:50.922715753Z" level=info msg="CreateContainer within sandbox \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"39bd3098e6d28dfc5644d02707a0d5a4f00ad74bcd6d5eee9b7e2c0f5aaab54c\""
May 16 00:16:50.923335 containerd[1506]: time="2025-05-16T00:16:50.923181701Z" level=info msg="StartContainer for \"39bd3098e6d28dfc5644d02707a0d5a4f00ad74bcd6d5eee9b7e2c0f5aaab54c\""
May 16 00:16:50.953346 systemd[1]: Started cri-containerd-39bd3098e6d28dfc5644d02707a0d5a4f00ad74bcd6d5eee9b7e2c0f5aaab54c.scope - libcontainer container 39bd3098e6d28dfc5644d02707a0d5a4f00ad74bcd6d5eee9b7e2c0f5aaab54c.
May 16 00:16:50.977380 systemd[1]: cri-containerd-39bd3098e6d28dfc5644d02707a0d5a4f00ad74bcd6d5eee9b7e2c0f5aaab54c.scope: Deactivated successfully.
May 16 00:16:50.978990 containerd[1506]: time="2025-05-16T00:16:50.978959745Z" level=info msg="StartContainer for \"39bd3098e6d28dfc5644d02707a0d5a4f00ad74bcd6d5eee9b7e2c0f5aaab54c\" returns successfully"
May 16 00:16:51.000944 containerd[1506]: time="2025-05-16T00:16:51.000876414Z" level=info msg="shim disconnected" id=39bd3098e6d28dfc5644d02707a0d5a4f00ad74bcd6d5eee9b7e2c0f5aaab54c namespace=k8s.io
May 16 00:16:51.000944 containerd[1506]: time="2025-05-16T00:16:51.000933692Z" level=warning msg="cleaning up after shim disconnected" id=39bd3098e6d28dfc5644d02707a0d5a4f00ad74bcd6d5eee9b7e2c0f5aaab54c namespace=k8s.io
May 16 00:16:51.000944 containerd[1506]: time="2025-05-16T00:16:51.000943491Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:16:51.345278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39bd3098e6d28dfc5644d02707a0d5a4f00ad74bcd6d5eee9b7e2c0f5aaab54c-rootfs.mount: Deactivated successfully.
May 16 00:16:51.760827 kubelet[2618]: E0516 00:16:51.760726 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:51.762334 containerd[1506]: time="2025-05-16T00:16:51.762301554Z" level=info msg="CreateContainer within sandbox \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 00:16:51.794583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060836247.mount: Deactivated successfully.
May 16 00:16:51.795730 containerd[1506]: time="2025-05-16T00:16:51.795689654Z" level=info msg="CreateContainer within sandbox \"e878952ba43d75af71e09472b11bba6fed5c0cc3947940da3bf396be5ed367eb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"14b4fed286f519f74f7c12549de7f31702096e830c5753e32337f4dbb488e89b\""
May 16 00:16:51.796269 containerd[1506]: time="2025-05-16T00:16:51.796202531Z" level=info msg="StartContainer for \"14b4fed286f519f74f7c12549de7f31702096e830c5753e32337f4dbb488e89b\""
May 16 00:16:51.828339 systemd[1]: Started cri-containerd-14b4fed286f519f74f7c12549de7f31702096e830c5753e32337f4dbb488e89b.scope - libcontainer container 14b4fed286f519f74f7c12549de7f31702096e830c5753e32337f4dbb488e89b.
May 16 00:16:51.855888 containerd[1506]: time="2025-05-16T00:16:51.855842483Z" level=info msg="StartContainer for \"14b4fed286f519f74f7c12549de7f31702096e830c5753e32337f4dbb488e89b\" returns successfully"
May 16 00:16:52.256248 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 16 00:16:52.765002 kubelet[2618]: E0516 00:16:52.764966 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:52.827543 kubelet[2618]: I0516 00:16:52.827475 2618 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T00:16:52Z","lastTransitionTime":"2025-05-16T00:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 16 00:16:54.494468 kubelet[2618]: E0516 00:16:54.494375 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:55.317354 systemd-networkd[1424]: lxc_health: Link UP
May 16 00:16:55.327070 systemd-networkd[1424]: lxc_health: Gained carrier
May 16 00:16:55.510371 kubelet[2618]: E0516 00:16:55.510328 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:56.496053 kubelet[2618]: E0516 00:16:56.496012 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:56.511233 kubelet[2618]: E0516 00:16:56.511172 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:56.513489 kubelet[2618]: I0516 00:16:56.512676 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w496f" podStartSLOduration=8.512664844 podStartE2EDuration="8.512664844s" podCreationTimestamp="2025-05-16 00:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:16:52.778576452 +0000 UTC m=+92.344060417" watchObservedRunningTime="2025-05-16 00:16:56.512664844 +0000 UTC m=+96.078148809"
May 16 00:16:56.726356 systemd-networkd[1424]: lxc_health: Gained IPv6LL
May 16 00:16:56.772648 kubelet[2618]: E0516 00:16:56.772528 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:16:57.774685 kubelet[2618]: E0516 00:16:57.774641 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:17:00.511038 kubelet[2618]: E0516 00:17:00.510995 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:17:03.072067 sshd[4528]: Connection closed by 10.0.0.1 port 34276
May 16 00:17:03.072575 sshd-session[4519]: pam_unix(sshd:session): session closed for user core
May 16 00:17:03.076481 systemd[1]: sshd@28-10.0.0.135:22-10.0.0.1:34276.service: Deactivated successfully.
May 16 00:17:03.078657 systemd[1]: session-29.scope: Deactivated successfully.
May 16 00:17:03.079660 systemd-logind[1494]: Session 29 logged out. Waiting for processes to exit.
May 16 00:17:03.080985 systemd-logind[1494]: Removed session 29.