May 14 23:59:14.931813 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 22:18:55 -00 2025
May 14 23:59:14.931839 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=676605e5288ab6a23835eefe0cbb74879b800df0a2a85ac0781041b13f2d6bba
May 14 23:59:14.931862 kernel: BIOS-provided physical RAM map:
May 14 23:59:14.931872 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 23:59:14.931885 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 23:59:14.931894 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 23:59:14.931901 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 23:59:14.931908 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 23:59:14.931914 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 23:59:14.931920 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 23:59:14.931930 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 14 23:59:14.931936 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 23:59:14.931942 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 23:59:14.931949 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 23:59:14.931957 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 23:59:14.931964 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 23:59:14.931973 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 14 23:59:14.931980 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 14 23:59:14.931987 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 14 23:59:14.931993 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 14 23:59:14.932000 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 23:59:14.932007 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 23:59:14.932014 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 23:59:14.932028 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 23:59:14.932035 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 23:59:14.932041 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 23:59:14.932049 kernel: NX (Execute Disable) protection: active
May 14 23:59:14.932058 kernel: APIC: Static calls initialized
May 14 23:59:14.932065 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 14 23:59:14.932072 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 14 23:59:14.932079 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 14 23:59:14.932085 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 14 23:59:14.932092 kernel: extended physical RAM map:
May 14 23:59:14.932098 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 23:59:14.932105 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 23:59:14.932112 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 23:59:14.932119 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 23:59:14.932126 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 23:59:14.932135 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 23:59:14.932142 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 23:59:14.932153 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 14 23:59:14.932168 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 14 23:59:14.932176 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 14 23:59:14.932183 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 14 23:59:14.932190 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 14 23:59:14.932200 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 23:59:14.932207 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 23:59:14.932214 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 23:59:14.932222 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 23:59:14.932229 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 23:59:14.932236 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 14 23:59:14.932243 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 14 23:59:14.932250 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 14 23:59:14.932258 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 14 23:59:14.932267 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 23:59:14.932274 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 23:59:14.932281 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 23:59:14.932288 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 23:59:14.932296 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 23:59:14.932303 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 23:59:14.932310 kernel: efi: EFI v2.7 by EDK II
May 14 23:59:14.932317 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 14 23:59:14.932324 kernel: random: crng init done
May 14 23:59:14.932332 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 14 23:59:14.932339 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 14 23:59:14.932348 kernel: secureboot: Secure boot disabled
May 14 23:59:14.932356 kernel: SMBIOS 2.8 present.
May 14 23:59:14.932363 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 14 23:59:14.932370 kernel: Hypervisor detected: KVM
May 14 23:59:14.932377 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 23:59:14.932384 kernel: kvm-clock: using sched offset of 2874725047 cycles
May 14 23:59:14.932392 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 23:59:14.932400 kernel: tsc: Detected 2794.746 MHz processor
May 14 23:59:14.932407 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 23:59:14.932415 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 23:59:14.932422 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 14 23:59:14.932432 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 14 23:59:14.932439 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 23:59:14.932446 kernel: Using GB pages for direct mapping
May 14 23:59:14.932453 kernel: ACPI: Early table checksum verification disabled
May 14 23:59:14.932461 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 14 23:59:14.932468 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 14 23:59:14.932476 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:59:14.932483 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:59:14.932490 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 14 23:59:14.932501 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:59:14.932508 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:59:14.932515 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:59:14.932523 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:59:14.932530 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 14 23:59:14.932537 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 14 23:59:14.932545 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 14 23:59:14.932552 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 14 23:59:14.932559 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 14 23:59:14.932569 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 14 23:59:14.932576 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 14 23:59:14.932584 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 14 23:59:14.932592 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 14 23:59:14.932601 kernel: No NUMA configuration found
May 14 23:59:14.932609 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 14 23:59:14.932618 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 14 23:59:14.932625 kernel: Zone ranges:
May 14 23:59:14.932632 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 23:59:14.932642 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 14 23:59:14.932649 kernel: Normal empty
May 14 23:59:14.932656 kernel: Movable zone start for each node
May 14 23:59:14.932664 kernel: Early memory node ranges
May 14 23:59:14.932671 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 14 23:59:14.932678 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 14 23:59:14.932685 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 14 23:59:14.932692 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 14 23:59:14.932700 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 14 23:59:14.932709 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 14 23:59:14.932716 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 14 23:59:14.932723 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 14 23:59:14.932730 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 14 23:59:14.932738 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 23:59:14.932745 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 14 23:59:14.932771 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 14 23:59:14.932782 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 23:59:14.932789 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 14 23:59:14.932797 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 14 23:59:14.932804 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 14 23:59:14.932812 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 14 23:59:14.932822 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 14 23:59:14.932829 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 23:59:14.932837 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 23:59:14.932845 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 23:59:14.932852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 23:59:14.932862 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 23:59:14.932870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 23:59:14.932877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 23:59:14.932885 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 23:59:14.932893 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 23:59:14.932901 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 23:59:14.932908 kernel: TSC deadline timer available
May 14 23:59:14.932916 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 14 23:59:14.932923 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 23:59:14.932933 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 23:59:14.932941 kernel: kvm-guest: setup PV sched yield
May 14 23:59:14.932948 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 14 23:59:14.932956 kernel: Booting paravirtualized kernel on KVM
May 14 23:59:14.932964 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 23:59:14.932971 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 23:59:14.932979 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 14 23:59:14.932987 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 14 23:59:14.932994 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 23:59:14.933001 kernel: kvm-guest: PV spinlocks enabled
May 14 23:59:14.933012 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 23:59:14.933026 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=676605e5288ab6a23835eefe0cbb74879b800df0a2a85ac0781041b13f2d6bba
May 14 23:59:14.933035 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:59:14.933042 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:59:14.933050 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:59:14.933058 kernel: Fallback order for Node 0: 0
May 14 23:59:14.933066 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 14 23:59:14.933073 kernel: Policy zone: DMA32
May 14 23:59:14.933083 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:59:14.933091 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2295K rwdata, 22752K rodata, 43000K init, 2192K bss, 175776K reserved, 0K cma-reserved)
May 14 23:59:14.933099 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 23:59:14.933107 kernel: ftrace: allocating 37946 entries in 149 pages
May 14 23:59:14.933114 kernel: ftrace: allocated 149 pages with 4 groups
May 14 23:59:14.933122 kernel: Dynamic Preempt: voluntary
May 14 23:59:14.933129 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:59:14.933138 kernel: rcu: RCU event tracing is enabled.
May 14 23:59:14.933145 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 23:59:14.933155 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:59:14.933163 kernel: Rude variant of Tasks RCU enabled.
May 14 23:59:14.933171 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:59:14.933178 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:59:14.933186 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 23:59:14.933194 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 23:59:14.933201 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:59:14.933209 kernel: Console: colour dummy device 80x25
May 14 23:59:14.933216 kernel: printk: console [ttyS0] enabled
May 14 23:59:14.933226 kernel: ACPI: Core revision 20230628
May 14 23:59:14.933234 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 23:59:14.933249 kernel: APIC: Switch to symmetric I/O mode setup
May 14 23:59:14.933258 kernel: x2apic enabled
May 14 23:59:14.933265 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 23:59:14.933273 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 23:59:14.933281 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 23:59:14.933288 kernel: kvm-guest: setup PV IPIs
May 14 23:59:14.933296 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 23:59:14.933306 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 14 23:59:14.933314 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 14 23:59:14.933322 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 23:59:14.933329 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 23:59:14.933337 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 23:59:14.933345 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 23:59:14.933352 kernel: Spectre V2 : Mitigation: Retpolines
May 14 23:59:14.933360 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 14 23:59:14.933368 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 23:59:14.933378 kernel: RETBleed: Mitigation: untrained return thunk
May 14 23:59:14.933385 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 23:59:14.933393 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 23:59:14.933401 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 23:59:14.933409 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 23:59:14.933417 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 23:59:14.933425 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 23:59:14.933432 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 23:59:14.933442 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 23:59:14.933450 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 23:59:14.933458 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 23:59:14.933465 kernel: Freeing SMP alternatives memory: 32K
May 14 23:59:14.933473 kernel: pid_max: default: 32768 minimum: 301
May 14 23:59:14.933481 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:59:14.933488 kernel: landlock: Up and running.
May 14 23:59:14.933496 kernel: SELinux: Initializing.
May 14 23:59:14.933504 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:59:14.933513 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:59:14.933521 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 23:59:14.933529 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:59:14.933537 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:59:14.933545 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:59:14.933552 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 23:59:14.933560 kernel: ... version: 0
May 14 23:59:14.933567 kernel: ... bit width: 48
May 14 23:59:14.933575 kernel: ... generic registers: 6
May 14 23:59:14.933585 kernel: ... value mask: 0000ffffffffffff
May 14 23:59:14.933592 kernel: ... max period: 00007fffffffffff
May 14 23:59:14.933600 kernel: ... fixed-purpose events: 0
May 14 23:59:14.933608 kernel: ... event mask: 000000000000003f
May 14 23:59:14.933615 kernel: signal: max sigframe size: 1776
May 14 23:59:14.933623 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:59:14.933630 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:59:14.933638 kernel: smp: Bringing up secondary CPUs ...
May 14 23:59:14.933646 kernel: smpboot: x86: Booting SMP configuration:
May 14 23:59:14.933655 kernel: .... node #0, CPUs: #1 #2 #3
May 14 23:59:14.933663 kernel: smp: Brought up 1 node, 4 CPUs
May 14 23:59:14.933671 kernel: smpboot: Max logical packages: 1
May 14 23:59:14.933678 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 14 23:59:14.933686 kernel: devtmpfs: initialized
May 14 23:59:14.933693 kernel: x86/mm: Memory block size: 128MB
May 14 23:59:14.933701 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 14 23:59:14.933709 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 14 23:59:14.933717 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 14 23:59:14.933727 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 14 23:59:14.933734 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 14 23:59:14.933742 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 14 23:59:14.933750 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:59:14.933768 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 23:59:14.933776 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:59:14.933783 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:59:14.933791 kernel: audit: initializing netlink subsys (disabled)
May 14 23:59:14.933799 kernel: audit: type=2000 audit(1747267154.247:1): state=initialized audit_enabled=0 res=1
May 14 23:59:14.933808 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:59:14.933816 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 23:59:14.933824 kernel: cpuidle: using governor menu
May 14 23:59:14.933831 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:59:14.933839 kernel: dca service started, version 1.12.1
May 14 23:59:14.933847 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 14 23:59:14.933854 kernel: PCI: Using configuration type 1 for base access
May 14 23:59:14.933862 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 23:59:14.933870 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:59:14.933880 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:59:14.933887 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:59:14.933895 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:59:14.933903 kernel: ACPI: Added _OSI(Module Device)
May 14 23:59:14.933910 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:59:14.933918 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:59:14.933925 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:59:14.933933 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:59:14.933940 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 14 23:59:14.933950 kernel: ACPI: Interpreter enabled
May 14 23:59:14.933958 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 23:59:14.933965 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 23:59:14.933973 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 23:59:14.933981 kernel: PCI: Using E820 reservations for host bridge windows
May 14 23:59:14.933988 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 23:59:14.933996 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 23:59:14.934183 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 23:59:14.934319 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 23:59:14.934443 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 23:59:14.934453 kernel: PCI host bridge to bus 0000:00
May 14 23:59:14.934583 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 23:59:14.934745 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 23:59:14.934948 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 23:59:14.935073 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 14 23:59:14.935190 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 14 23:59:14.935303 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 14 23:59:14.935413 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 23:59:14.935552 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 14 23:59:14.935682 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 14 23:59:14.935834 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 14 23:59:14.935973 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 14 23:59:14.936105 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 14 23:59:14.936225 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 14 23:59:14.936345 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 23:59:14.936476 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 14 23:59:14.936597 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 14 23:59:14.936717 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 14 23:59:14.936862 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 14 23:59:14.937076 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 14 23:59:14.937201 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 14 23:59:14.937322 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 14 23:59:14.937443 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 14 23:59:14.937572 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 14 23:59:14.937693 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 14 23:59:14.937833 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 14 23:59:14.937953 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 14 23:59:14.938081 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 14 23:59:14.938210 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 14 23:59:14.938331 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 23:59:14.938460 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 14 23:59:14.938582 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 14 23:59:14.938706 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 14 23:59:14.938877 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 14 23:59:14.938999 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 14 23:59:14.939010 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 23:59:14.939017 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 23:59:14.939033 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 23:59:14.939040 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 23:59:14.939052 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 23:59:14.939060 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 23:59:14.939068 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 23:59:14.939076 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 23:59:14.939084 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 23:59:14.939091 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 23:59:14.939099 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 23:59:14.939107 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 23:59:14.939114 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 23:59:14.939124 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 23:59:14.939132 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 23:59:14.939140 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 23:59:14.939147 kernel: iommu: Default domain type: Translated
May 14 23:59:14.939155 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 23:59:14.939163 kernel: efivars: Registered efivars operations
May 14 23:59:14.939170 kernel: PCI: Using ACPI for IRQ routing
May 14 23:59:14.939178 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 23:59:14.939185 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 14 23:59:14.939195 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 14 23:59:14.939203 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 14 23:59:14.939210 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 14 23:59:14.939218 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 14 23:59:14.939225 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 14 23:59:14.939233 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 14 23:59:14.939241 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 14 23:59:14.939364 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 23:59:14.939483 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 23:59:14.939615 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 23:59:14.939625 kernel: vgaarb: loaded
May 14 23:59:14.939632 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 23:59:14.939640 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 23:59:14.939648 kernel: clocksource: Switched to clocksource kvm-clock
May 14 23:59:14.939656 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:59:14.939664 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:59:14.939671 kernel: pnp: PnP ACPI init
May 14 23:59:14.939835 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 14 23:59:14.939851 kernel: pnp: PnP ACPI: found 6 devices
May 14 23:59:14.939859 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 23:59:14.939866 kernel: NET: Registered PF_INET protocol family
May 14 23:59:14.939874 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:59:14.939902 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:59:14.939912 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:59:14.939930 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:59:14.939938 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:59:14.939966 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:59:14.939974 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:59:14.939982 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:59:14.939990 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:59:14.939998 kernel: NET: Registered PF_XDP protocol family
May 14 23:59:14.940136 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 14 23:59:14.940259 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 14 23:59:14.940371 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 23:59:14.940487 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 23:59:14.940597 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 23:59:14.940707 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 14 23:59:14.940879 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 14 23:59:14.940990 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 14 23:59:14.941000 kernel: PCI: CLS 0 bytes, default 64
May 14 23:59:14.941009 kernel: Initialise system trusted keyrings
May 14 23:59:14.941017 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 23:59:14.941038 kernel: Key type asymmetric registered
May 14 23:59:14.941046 kernel: Asymmetric key parser 'x509' registered
May 14 23:59:14.941054 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 14 23:59:14.941063 kernel: io scheduler mq-deadline registered
May 14 23:59:14.941070 kernel: io scheduler kyber registered
May 14 23:59:14.941078 kernel: io scheduler bfq registered
May 14 23:59:14.941087 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 23:59:14.941095 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 23:59:14.941103 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 23:59:14.941114 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 23:59:14.941124 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 23:59:14.941132 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 23:59:14.941140 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 23:59:14.941149 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 23:59:14.941157 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 23:59:14.941285 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 23:59:14.941297 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 23:59:14.941411 kernel: rtc_cmos 00:04: registered as rtc0
May 14 23:59:14.941526 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T23:59:14 UTC (1747267154)
May 14 23:59:14.941638 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 14 23:59:14.941648 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 23:59:14.941656 kernel: efifb: probing for efifb
May 14 23:59:14.941664 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 14 23:59:14.941676 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 14 23:59:14.941684 kernel: efifb: scrolling: redraw
May 14 23:59:14.941693 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 23:59:14.941701 kernel: Console: switching to colour frame buffer device 160x50
May 14 23:59:14.941709 kernel: fb0: EFI VGA frame buffer device
May 14 23:59:14.941717 kernel: pstore: Using crash dump compression: deflate
May 14 23:59:14.941725 kernel: pstore: Registered efi_pstore as persistent store backend
May 14 23:59:14.941733 kernel: NET: Registered PF_INET6 protocol family
May 14 23:59:14.941741 kernel: Segment Routing with IPv6
May 14 23:59:14.941751 kernel: In-situ OAM (IOAM) with IPv6
May 14 23:59:14.941845 kernel: NET: Registered PF_PACKET protocol family
May 14 23:59:14.941854 kernel: Key type dns_resolver registered
May 14 23:59:14.941862 kernel: IPI shorthand broadcast: enabled
May 14 23:59:14.941870 kernel: sched_clock: Marking stable (639002992, 161090454)->(823090330, -22996884)
May 14 23:59:14.941878 kernel: registered taskstats version 1
May 14 23:59:14.941886 kernel: Loading compiled-in X.509 certificates
May 14 23:59:14.941894 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 24318a9a7bb74dcc18d1d3d4ac63358025b8c253'
May 14 23:59:14.941902 kernel: Key type .fscrypt registered
May 14 23:59:14.941912 kernel: Key type fscrypt-provisioning registered
May 14 23:59:14.941920 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 23:59:14.941928 kernel: ima: Allocated hash algorithm: sha1 May 14 23:59:14.941936 kernel: ima: No architecture policies found May 14 23:59:14.941944 kernel: clk: Disabling unused clocks May 14 23:59:14.941952 kernel: Freeing unused kernel image (initmem) memory: 43000K May 14 23:59:14.941960 kernel: Write protecting the kernel read-only data: 36864k May 14 23:59:14.941968 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K May 14 23:59:14.941976 kernel: Run /init as init process May 14 23:59:14.941987 kernel: with arguments: May 14 23:59:14.941995 kernel: /init May 14 23:59:14.942003 kernel: with environment: May 14 23:59:14.942010 kernel: HOME=/ May 14 23:59:14.942018 kernel: TERM=linux May 14 23:59:14.942033 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 23:59:14.942044 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 14 23:59:14.942053 systemd[1]: Detected virtualization kvm. May 14 23:59:14.942065 systemd[1]: Detected architecture x86-64. May 14 23:59:14.942073 systemd[1]: Running in initrd. May 14 23:59:14.942081 systemd[1]: No hostname configured, using default hostname. May 14 23:59:14.942089 systemd[1]: Hostname set to . May 14 23:59:14.942098 systemd[1]: Initializing machine ID from VM UUID. May 14 23:59:14.942107 systemd[1]: Queued start job for default target initrd.target. May 14 23:59:14.942115 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:59:14.942124 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 14 23:59:14.942135 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 23:59:14.942144 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:59:14.942152 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 23:59:14.942161 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 23:59:14.942171 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 23:59:14.942180 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 23:59:14.942191 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:59:14.942199 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:59:14.942208 systemd[1]: Reached target paths.target - Path Units. May 14 23:59:14.942216 systemd[1]: Reached target slices.target - Slice Units. May 14 23:59:14.942224 systemd[1]: Reached target swap.target - Swaps. May 14 23:59:14.942233 systemd[1]: Reached target timers.target - Timer Units. May 14 23:59:14.942241 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:59:14.942249 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:59:14.942258 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 23:59:14.942269 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 14 23:59:14.942277 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:59:14.942286 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:59:14.942294 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 14 23:59:14.942303 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:59:14.942311 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 23:59:14.942320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:59:14.942328 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 23:59:14.942339 systemd[1]: Starting systemd-fsck-usr.service... May 14 23:59:14.942347 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:59:14.942356 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:59:14.942364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:59:14.942373 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 23:59:14.942381 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:59:14.942390 systemd[1]: Finished systemd-fsck-usr.service. May 14 23:59:14.942420 systemd-journald[193]: Collecting audit messages is disabled. May 14 23:59:14.942439 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 23:59:14.942451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:59:14.942460 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:59:14.942468 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:59:14.942477 systemd-journald[193]: Journal started May 14 23:59:14.942495 systemd-journald[193]: Runtime Journal (/run/log/journal/e538a3dc643f42b9a972b05e1dae97ca) is 6.0M, max 48.3M, 42.2M free. May 14 23:59:14.933281 systemd-modules-load[194]: Inserted module 'overlay' May 14 23:59:14.944797 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 14 23:59:14.950675 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:59:14.950983 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:59:14.955945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:59:14.959113 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:59:14.960440 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 23:59:14.973212 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:59:14.977013 dracut-cmdline[221]: dracut-dracut-053 May 14 23:59:14.980141 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=676605e5288ab6a23835eefe0cbb74879b800df0a2a85ac0781041b13f2d6bba May 14 23:59:14.988778 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 23:59:14.991101 systemd-modules-load[194]: Inserted module 'br_netfilter' May 14 23:59:14.992044 kernel: Bridge firewalling registered May 14 23:59:14.993131 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:59:15.003892 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:59:15.014325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:59:15.024931 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:59:15.054238 systemd-resolved[275]: Positive Trust Anchors: May 14 23:59:15.054255 systemd-resolved[275]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:59:15.054286 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:59:15.065401 systemd-resolved[275]: Defaulting to hostname 'linux'. May 14 23:59:15.067457 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:59:15.068266 kernel: SCSI subsystem initialized May 14 23:59:15.068118 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:59:15.077777 kernel: Loading iSCSI transport class v2.0-870. May 14 23:59:15.092789 kernel: iscsi: registered transport (tcp) May 14 23:59:15.113845 kernel: iscsi: registered transport (qla4xxx) May 14 23:59:15.113887 kernel: QLogic iSCSI HBA Driver May 14 23:59:15.165639 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 23:59:15.178946 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 23:59:15.204013 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 14 23:59:15.204051 kernel: device-mapper: uevent: version 1.0.3 May 14 23:59:15.205081 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 23:59:15.248794 kernel: raid6: avx2x4 gen() 27840 MB/s May 14 23:59:15.265786 kernel: raid6: avx2x2 gen() 20784 MB/s May 14 23:59:15.282888 kernel: raid6: avx2x1 gen() 24902 MB/s May 14 23:59:15.282911 kernel: raid6: using algorithm avx2x4 gen() 27840 MB/s May 14 23:59:15.300866 kernel: raid6: .... xor() 7426 MB/s, rmw enabled May 14 23:59:15.300887 kernel: raid6: using avx2x2 recovery algorithm May 14 23:59:15.325812 kernel: xor: automatically using best checksumming function avx May 14 23:59:15.482810 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 23:59:15.495140 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 23:59:15.512892 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:59:15.539817 systemd-udevd[414]: Using default interface naming scheme 'v255'. May 14 23:59:15.544616 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:59:15.556895 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 23:59:15.573527 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation May 14 23:59:15.607497 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:59:15.613888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:59:15.681471 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:59:15.694274 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 23:59:15.706173 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 23:59:15.712991 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 14 23:59:15.715946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:59:15.719856 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:59:15.725785 kernel: cryptd: max_cpu_qlen set to 1000 May 14 23:59:15.728965 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 23:59:15.735795 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 14 23:59:15.746905 kernel: AVX2 version of gcm_enc/dec engaged. May 14 23:59:15.746941 kernel: AES CTR mode by8 optimization enabled May 14 23:59:15.747863 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 23:59:15.754788 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 23:59:15.757776 kernel: libata version 3.00 loaded. May 14 23:59:15.758073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:59:15.758206 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:59:15.763754 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 23:59:15.763794 kernel: GPT:9289727 != 19775487 May 14 23:59:15.763805 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 23:59:15.763815 kernel: GPT:9289727 != 19775487 May 14 23:59:15.764273 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 23:59:15.766251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:59:15.766182 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:59:15.767480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:59:15.769560 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 23:59:15.774906 kernel: ahci 0000:00:1f.2: version 3.0 May 14 23:59:15.775107 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 14 23:59:15.771549 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:59:15.778159 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 14 23:59:15.778325 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 14 23:59:15.781045 kernel: scsi host0: ahci May 14 23:59:15.781463 kernel: scsi host1: ahci May 14 23:59:15.784137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:59:15.790598 kernel: scsi host2: ahci May 14 23:59:15.800796 kernel: BTRFS: device fsid 588f8840-d63c-4068-b03d-1642b4e6460f devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (467) May 14 23:59:15.803839 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (458) May 14 23:59:15.804989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:59:15.805915 kernel: scsi host3: ahci May 14 23:59:15.806818 kernel: scsi host4: ahci May 14 23:59:15.811825 kernel: scsi host5: ahci May 14 23:59:15.812018 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 14 23:59:15.812031 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 14 23:59:15.812048 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 14 23:59:15.813279 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 14 23:59:15.813305 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 14 23:59:15.814953 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 14 23:59:15.817095 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 14 23:59:15.832252 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
May 14 23:59:15.836473 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 14 23:59:15.837104 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 23:59:15.841942 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 23:59:15.854902 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 23:59:15.855986 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:59:15.873611 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:59:15.954353 disk-uuid[569]: Primary Header is updated. May 14 23:59:15.954353 disk-uuid[569]: Secondary Entries is updated. May 14 23:59:15.954353 disk-uuid[569]: Secondary Header is updated. May 14 23:59:15.957930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:59:15.961774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:59:16.126779 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 14 23:59:16.126843 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 14 23:59:16.129784 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 14 23:59:16.129812 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 14 23:59:16.130516 kernel: ata3.00: applying bridge limits May 14 23:59:16.131969 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 14 23:59:16.132783 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 14 23:59:16.133783 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 14 23:59:16.133803 kernel: ata3.00: configured for UDMA/100 May 14 23:59:16.134797 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 14 23:59:16.183777 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 14 23:59:16.183990 
kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 14 23:59:16.203790 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 14 23:59:16.963378 disk-uuid[578]: The operation has completed successfully. May 14 23:59:16.964610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:59:16.995615 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 23:59:16.995749 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 23:59:17.017066 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 23:59:17.020944 sh[595]: Success May 14 23:59:17.034805 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 14 23:59:17.068687 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 23:59:17.081208 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 23:59:17.083899 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 23:59:17.095104 kernel: BTRFS info (device dm-0): first mount of filesystem 588f8840-d63c-4068-b03d-1642b4e6460f May 14 23:59:17.095133 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 23:59:17.095144 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 23:59:17.097024 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 23:59:17.097039 kernel: BTRFS info (device dm-0): using free space tree May 14 23:59:17.101874 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 23:59:17.104272 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 23:59:17.114922 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 23:59:17.116626 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 14 23:59:17.125551 kernel: BTRFS info (device vda6): first mount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61 May 14 23:59:17.125613 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 23:59:17.125624 kernel: BTRFS info (device vda6): using free space tree May 14 23:59:17.128789 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:59:17.137524 systemd[1]: mnt-oem.mount: Deactivated successfully. May 14 23:59:17.139014 kernel: BTRFS info (device vda6): last unmount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61 May 14 23:59:17.148863 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 23:59:17.156925 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 23:59:17.209419 ignition[691]: Ignition 2.20.0 May 14 23:59:17.209431 ignition[691]: Stage: fetch-offline May 14 23:59:17.209477 ignition[691]: no configs at "/usr/lib/ignition/base.d" May 14 23:59:17.209486 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:59:17.209581 ignition[691]: parsed url from cmdline: "" May 14 23:59:17.209585 ignition[691]: no config URL provided May 14 23:59:17.209590 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" May 14 23:59:17.209599 ignition[691]: no config at "/usr/lib/ignition/user.ign" May 14 23:59:17.209634 ignition[691]: op(1): [started] loading QEMU firmware config module May 14 23:59:17.209640 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 23:59:17.220264 ignition[691]: op(1): [finished] loading QEMU firmware config module May 14 23:59:17.232027 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 14 23:59:17.237567 ignition[691]: parsing config with SHA512: a20ecf7d2a01ab24f1d484ece45b35a4b91d0309d02abe3b387aff8543a009dd69a648b56e39bb4f1f676dee486cec5e8d2427415e382096b98f419839b7f988 May 14 23:59:17.238955 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 23:59:17.245450 unknown[691]: fetched base config from "system" May 14 23:59:17.245466 unknown[691]: fetched user config from "qemu" May 14 23:59:17.245986 ignition[691]: fetch-offline: fetch-offline passed May 14 23:59:17.246084 ignition[691]: Ignition finished successfully May 14 23:59:17.249338 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:59:17.261970 systemd-networkd[783]: lo: Link UP May 14 23:59:17.261984 systemd-networkd[783]: lo: Gained carrier May 14 23:59:17.263442 systemd-networkd[783]: Enumeration completed May 14 23:59:17.263533 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 23:59:17.263891 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:59:17.263894 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:59:17.264783 systemd-networkd[783]: eth0: Link UP May 14 23:59:17.264787 systemd-networkd[783]: eth0: Gained carrier May 14 23:59:17.264793 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:59:17.265307 systemd[1]: Reached target network.target - Network. May 14 23:59:17.267199 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 23:59:17.275896 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 14 23:59:17.280814 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 23:59:17.288679 ignition[787]: Ignition 2.20.0 May 14 23:59:17.288689 ignition[787]: Stage: kargs May 14 23:59:17.288855 ignition[787]: no configs at "/usr/lib/ignition/base.d" May 14 23:59:17.288865 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:59:17.289637 ignition[787]: kargs: kargs passed May 14 23:59:17.289673 ignition[787]: Ignition finished successfully May 14 23:59:17.296151 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 23:59:17.307910 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 23:59:17.320581 ignition[797]: Ignition 2.20.0 May 14 23:59:17.320591 ignition[797]: Stage: disks May 14 23:59:17.320744 ignition[797]: no configs at "/usr/lib/ignition/base.d" May 14 23:59:17.320755 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:59:17.321547 ignition[797]: disks: disks passed May 14 23:59:17.321585 ignition[797]: Ignition finished successfully May 14 23:59:17.327082 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 23:59:17.329187 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 23:59:17.329582 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 23:59:17.330109 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:59:17.330439 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:59:17.330777 systemd[1]: Reached target basic.target - Basic System. May 14 23:59:17.346996 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 23:59:17.371828 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 14 23:59:17.379126 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
May 14 23:59:17.400854 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 23:59:17.486786 kernel: EXT4-fs (vda9): mounted filesystem f97506c4-898a-43e3-9925-b47c40fa47d6 r/w with ordered data mode. Quota mode: none. May 14 23:59:17.487390 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 23:59:17.489614 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 23:59:17.505869 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 23:59:17.507597 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 23:59:17.508802 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 23:59:17.508846 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 23:59:17.520066 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816) May 14 23:59:17.520086 kernel: BTRFS info (device vda6): first mount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61 May 14 23:59:17.520098 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 23:59:17.520108 kernel: BTRFS info (device vda6): using free space tree May 14 23:59:17.508866 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 23:59:17.523092 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:59:17.515655 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 23:59:17.521056 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 23:59:17.524500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 23:59:17.557003 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory May 14 23:59:17.561158 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory May 14 23:59:17.564672 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory May 14 23:59:17.568377 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory May 14 23:59:17.645997 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 23:59:17.654931 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 23:59:17.656683 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 23:59:17.663785 kernel: BTRFS info (device vda6): last unmount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61 May 14 23:59:17.679427 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 23:59:17.683848 ignition[933]: INFO : Ignition 2.20.0 May 14 23:59:17.683848 ignition[933]: INFO : Stage: mount May 14 23:59:17.685514 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:59:17.685514 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:59:17.685514 ignition[933]: INFO : mount: mount passed May 14 23:59:17.685514 ignition[933]: INFO : Ignition finished successfully May 14 23:59:17.690697 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 23:59:17.713866 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 23:59:18.094423 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 23:59:18.105910 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 14 23:59:18.111779 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (946)
May 14 23:59:18.113796 kernel: BTRFS info (device vda6): first mount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 14 23:59:18.113817 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 23:59:18.113828 kernel: BTRFS info (device vda6): using free space tree
May 14 23:59:18.116778 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 23:59:18.118024 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:59:18.144864 ignition[963]: INFO : Ignition 2.20.0
May 14 23:59:18.144864 ignition[963]: INFO : Stage: files
May 14 23:59:18.146599 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:59:18.146599 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:59:18.146599 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
May 14 23:59:18.149976 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 23:59:18.149976 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 23:59:18.153323 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 23:59:18.154882 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 23:59:18.156400 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 23:59:18.155281 unknown[963]: wrote ssh authorized keys file for user: core
May 14 23:59:18.158946 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 23:59:18.158946 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 14 23:59:18.229212 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 23:59:18.446430 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 23:59:18.446430 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 23:59:18.450740 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 14 23:59:18.825973 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 23:59:19.213951 systemd-networkd[783]: eth0: Gained IPv6LL
May 14 23:59:19.215608 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 23:59:19.215608 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 23:59:19.215608 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:59:19.215608 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:59:19.224921 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 23:59:19.224921 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 14 23:59:19.224921 ignition[963]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 23:59:19.224921 ignition[963]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 23:59:19.224921 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 14 23:59:19.224921 ignition[963]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 14 23:59:19.291626 ignition[963]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 23:59:19.302924 ignition[963]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 23:59:19.305306 ignition[963]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 23:59:19.305306 ignition[963]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 14 23:59:19.305306 ignition[963]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 14 23:59:19.305306 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:59:19.305306 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:59:19.305306 ignition[963]: INFO : files: files passed
May 14 23:59:19.305306 ignition[963]: INFO : Ignition finished successfully
May 14 23:59:19.317884 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 23:59:19.339248 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 23:59:19.351143 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 23:59:19.353069 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 23:59:19.353208 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
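For context, the file, link, and unit operations in the Ignition "files" stage above are the kind produced by a host-provided provisioning config. The following Butane sketch is purely illustrative, with paths inferred from the log; it is not the actual config this machine booted with, and file contents that never appear in the log are left empty:

```yaml
# Hypothetical Butane config (flatcar variant) that would produce the
# logged operations: the helm tarball fetch, the sysext link, and the
# unit presets. Inferred from the log for illustration only.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
    - path: /home/core/nginx.yaml
      contents:
        inline: ""          # manifest contents are not visible in the log
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true        # matches "setting preset to enabled"
    - name: coreos-metadata.service
      enabled: false       # matches "setting preset to disabled"
```

A config like this is transpiled to Ignition JSON (e.g. with `butane`) and applied once, on first boot, which is why the stage ends by writing /sysroot/etc/.ignition-result.json.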
May 14 23:59:19.373647 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 23:59:19.385272 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:59:19.385272 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:59:19.392746 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:59:19.389816 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:59:19.403193 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 23:59:19.421453 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 23:59:19.490665 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 23:59:19.493477 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 23:59:19.499573 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 23:59:19.503805 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 23:59:19.505269 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 23:59:19.513122 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 23:59:19.569864 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:59:19.593236 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 23:59:19.630303 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 23:59:19.633973 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:59:19.638061 systemd[1]: Stopped target timers.target - Timer Units.
May 14 23:59:19.643149 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 23:59:19.643333 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:59:19.649301 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 23:59:19.657260 systemd[1]: Stopped target basic.target - Basic System.
May 14 23:59:19.661754 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 23:59:19.666626 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:59:19.669888 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 23:59:19.672428 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 23:59:19.677170 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:59:19.685979 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 23:59:19.691524 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 23:59:19.701106 systemd[1]: Stopped target swap.target - Swaps.
May 14 23:59:19.715202 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 23:59:19.715393 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:59:19.726984 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 23:59:19.732048 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:59:19.732409 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 23:59:19.735186 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:59:19.744052 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 23:59:19.744235 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 23:59:19.747679 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 23:59:19.747854 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:59:19.753728 systemd[1]: Stopped target paths.target - Path Units.
May 14 23:59:19.754070 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 23:59:19.766505 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:59:19.768447 systemd[1]: Stopped target slices.target - Slice Units.
May 14 23:59:19.773205 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 23:59:19.783936 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 23:59:19.784072 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:59:19.786135 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 23:59:19.786240 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:59:19.788352 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 23:59:19.788495 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:59:19.797067 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 23:59:19.798152 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 23:59:19.820044 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 23:59:19.824485 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 23:59:19.824672 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:59:19.851125 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 23:59:19.856877 ignition[1018]: INFO : Ignition 2.20.0
May 14 23:59:19.856877 ignition[1018]: INFO : Stage: umount
May 14 23:59:19.856877 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:59:19.856877 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:59:19.883066 ignition[1018]: INFO : umount: umount passed
May 14 23:59:19.883066 ignition[1018]: INFO : Ignition finished successfully
May 14 23:59:19.858096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 23:59:19.858319 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:59:19.859884 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 23:59:19.860025 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:59:19.873068 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 23:59:19.873235 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 23:59:19.877843 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 23:59:19.877993 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 23:59:19.881733 systemd[1]: Stopped target network.target - Network.
May 14 23:59:19.883018 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 23:59:19.883091 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 23:59:19.886278 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 23:59:19.886327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 23:59:19.890599 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 23:59:19.890697 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 23:59:19.892401 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 23:59:19.892468 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 23:59:19.893265 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 23:59:19.898632 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 23:59:19.902650 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 23:59:19.911778 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 23:59:19.911942 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 23:59:19.913331 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 23:59:19.913401 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:59:19.920892 systemd-networkd[783]: eth0: DHCPv6 lease lost
May 14 23:59:19.924743 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 23:59:19.926183 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 23:59:19.931416 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 23:59:19.931510 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:59:19.943014 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 23:59:19.944046 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 23:59:19.944140 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:59:19.947094 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:59:19.947206 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:59:19.947814 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 23:59:19.947875 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 23:59:19.952420 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:59:19.976748 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 23:59:19.976973 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 23:59:19.978443 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 23:59:19.978641 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:59:19.981698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 23:59:19.981867 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 23:59:19.983279 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 23:59:19.983332 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:59:19.985618 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 23:59:19.985677 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:59:19.990462 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 23:59:19.990529 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 23:59:19.995851 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:59:19.995941 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:59:20.011095 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 23:59:20.011402 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 23:59:20.011490 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:59:20.015401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:59:20.015453 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:59:20.028954 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 23:59:20.029103 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 23:59:20.276443 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 23:59:20.276573 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 23:59:20.311724 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 23:59:20.313620 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 23:59:20.313689 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 23:59:20.324069 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 23:59:20.333466 systemd[1]: Switching root.
May 14 23:59:20.388860 systemd-journald[193]: Journal stopped
May 14 23:59:22.282378 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
May 14 23:59:22.282448 kernel: SELinux: policy capability network_peer_controls=1
May 14 23:59:22.282465 kernel: SELinux: policy capability open_perms=1
May 14 23:59:22.282480 kernel: SELinux: policy capability extended_socket_class=1
May 14 23:59:22.282495 kernel: SELinux: policy capability always_check_network=0
May 14 23:59:22.282507 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 23:59:22.282518 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 23:59:22.282533 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 23:59:22.282546 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 23:59:22.282558 kernel: audit: type=1403 audit(1747267161.379:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 23:59:22.282570 systemd[1]: Successfully loaded SELinux policy in 43.726ms.
May 14 23:59:22.282601 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.257ms.
May 14 23:59:22.282614 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 14 23:59:22.282626 systemd[1]: Detected virtualization kvm.
May 14 23:59:22.282638 systemd[1]: Detected architecture x86-64.
May 14 23:59:22.282650 systemd[1]: Detected first boot.
May 14 23:59:22.282668 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:59:22.282680 zram_generator::config[1062]: No configuration found.
May 14 23:59:22.282693 systemd[1]: Populated /etc with preset unit settings.
May 14 23:59:22.282704 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 23:59:22.282716 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 23:59:22.282728 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 23:59:22.282741 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 23:59:22.282770 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 23:59:22.282787 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 23:59:22.282799 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 23:59:22.282817 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 23:59:22.282831 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 23:59:22.282848 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 23:59:22.282866 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 23:59:22.282898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:59:22.282915 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:59:22.282931 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 23:59:22.282950 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 23:59:22.282971 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 23:59:22.282987 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:59:22.283003 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 14 23:59:22.283017 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:59:22.283029 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 23:59:22.283041 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 23:59:22.283059 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 23:59:22.285532 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 23:59:22.285548 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:59:22.285560 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:59:22.285573 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:59:22.285587 systemd[1]: Reached target swap.target - Swaps.
May 14 23:59:22.285604 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 23:59:22.285620 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 23:59:22.285636 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:59:22.285651 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:59:22.285667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:59:22.285680 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 23:59:22.285693 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 23:59:22.285705 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 23:59:22.285717 systemd[1]: Mounting media.mount - External Media Directory...
May 14 23:59:22.285730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:59:22.285742 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 23:59:22.285754 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 23:59:22.285780 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 23:59:22.285799 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 23:59:22.285811 systemd[1]: Reached target machines.target - Containers.
May 14 23:59:22.285825 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 23:59:22.285838 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:59:22.285850 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:59:22.285862 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 23:59:22.285874 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:59:22.285898 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:59:22.285913 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:59:22.285925 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 23:59:22.285938 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:59:22.285954 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 23:59:22.285967 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 23:59:22.285979 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 23:59:22.285991 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 23:59:22.286002 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 23:59:22.286016 kernel: fuse: init (API version 7.39)
May 14 23:59:22.286028 kernel: ACPI: bus type drm_connector registered
May 14 23:59:22.286040 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:59:22.286052 kernel: loop: module loaded
May 14 23:59:22.286064 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:59:22.286076 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 23:59:22.286089 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 23:59:22.286124 systemd-journald[1125]: Collecting audit messages is disabled.
May 14 23:59:22.286153 systemd-journald[1125]: Journal started
May 14 23:59:22.286178 systemd-journald[1125]: Runtime Journal (/run/log/journal/e538a3dc643f42b9a972b05e1dae97ca) is 6.0M, max 48.3M, 42.2M free.
May 14 23:59:22.287425 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:59:22.017674 systemd[1]: Queued start job for default target multi-user.target.
May 14 23:59:22.037839 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 23:59:22.038344 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 23:59:22.291991 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 23:59:22.292035 systemd[1]: Stopped verity-setup.service.
May 14 23:59:22.294783 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:59:22.297788 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:59:22.299064 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 23:59:22.300324 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 23:59:22.301653 systemd[1]: Mounted media.mount - External Media Directory.
May 14 23:59:22.302806 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 23:59:22.304040 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 23:59:22.305356 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 23:59:22.306656 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:59:22.308488 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 23:59:22.308682 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 23:59:22.310419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:59:22.310619 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:59:22.312186 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:59:22.312375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:59:22.313931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:59:22.314131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:59:22.315742 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 23:59:22.315977 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 23:59:22.317400 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:59:22.317592 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:59:22.319065 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:59:22.320532 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 23:59:22.322149 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 23:59:22.338553 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 23:59:22.353905 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 23:59:22.362593 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 23:59:22.363854 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 23:59:22.363899 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:59:22.366160 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 14 23:59:22.368710 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 23:59:22.371946 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 23:59:22.373417 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:59:22.376062 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 23:59:22.378647 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 23:59:22.379938 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:59:22.382322 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 23:59:22.383886 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:59:22.386906 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:59:22.388932 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 23:59:22.391890 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:59:22.393653 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 23:59:22.395260 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 23:59:22.399085 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 23:59:22.405804 kernel: loop0: detected capacity change from 0 to 205544
May 14 23:59:22.405962 systemd-journald[1125]: Time spent on flushing to /var/log/journal/e538a3dc643f42b9a972b05e1dae97ca is 19.161ms for 1040 entries.
May 14 23:59:22.405962 systemd-journald[1125]: System Journal (/var/log/journal/e538a3dc643f42b9a972b05e1dae97ca) is 8.0M, max 195.6M, 187.6M free.
May 14 23:59:22.563504 systemd-journald[1125]: Received client request to flush runtime journal.
May 14 23:59:22.563587 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 23:59:22.563610 kernel: loop1: detected capacity change from 0 to 140992
May 14 23:59:22.563629 kernel: loop2: detected capacity change from 0 to 138184
May 14 23:59:22.405990 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 23:59:22.431166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:59:22.441565 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 14 23:59:22.542420 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 23:59:22.548376 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 23:59:22.559066 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 14 23:59:22.561091 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 23:59:22.566806 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 23:59:22.581985 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 23:59:22.614799 kernel: loop3: detected capacity change from 0 to 205544
May 14 23:59:22.629809 kernel: loop4: detected capacity change from 0 to 140992
May 14 23:59:22.640002 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 23:59:22.686965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:59:22.741795 kernel: loop5: detected capacity change from 0 to 138184
May 14 23:59:22.748123 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 23:59:22.748733 (sd-merge)[1195]: Merged extensions into '/usr'.
May 14 23:59:22.753937 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 23:59:22.753955 systemd[1]: Reloading...
May 14 23:59:22.757363 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
May 14 23:59:22.757382 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
May 14 23:59:22.808807 zram_generator::config[1225]: No configuration found.
May 14 23:59:22.945385 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:59:22.996917 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 23:59:23.000894 systemd[1]: Reloading finished in 246 ms.
May 14 23:59:23.033992 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 23:59:23.035634 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 23:59:23.037404 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 23:59:23.039295 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 14 23:59:23.041006 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:59:23.055970 systemd[1]: Starting ensure-sysext.service...
May 14 23:59:23.058036 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:59:23.064204 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)...
May 14 23:59:23.064221 systemd[1]: Reloading...
May 14 23:59:23.080716 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 23:59:23.081057 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 23:59:23.082044 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 23:59:23.082337 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 14 23:59:23.082419 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 14 23:59:23.085968 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:59:23.085981 systemd-tmpfiles[1267]: Skipping /boot
May 14 23:59:23.108899 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:59:23.108919 systemd-tmpfiles[1267]: Skipping /boot
May 14 23:59:23.109784 zram_generator::config[1293]: No configuration found.
May 14 23:59:23.217656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:59:23.272515 systemd[1]: Reloading finished in 207 ms.
May 14 23:59:23.304668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:59:23.314320 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:59:23.408243 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 23:59:23.412151 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 23:59:23.415716 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:59:23.418261 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 23:59:23.421596 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:59:23.421822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:59:23.423546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:59:23.426096 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:59:23.432031 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:59:23.433952 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:59:23.434083 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:59:23.437667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:59:23.438125 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:59:23.441256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:59:23.441446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:59:23.445605 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 23:59:23.448086 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:59:23.448275 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:59:23.455333 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:59:23.455653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:59:23.463011 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:59:23.466189 augenrules[1364]: No rules
May 14 23:59:23.467404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:59:23.469758 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:59:23.471051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:59:23.473512 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 23:59:23.474849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:59:23.476925 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:59:23.477745 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:59:23.479712 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 23:59:23.482118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:59:23.482337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:59:23.484475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:59:23.484702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:59:23.486836 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:59:23.487077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:59:23.497255 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:59:23.505163 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:59:23.506653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:59:23.509973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:59:23.525285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:59:23.527920 augenrules[1377]: /sbin/augenrules: No change
May 14 23:59:23.532445 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:59:23.540140 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:59:23.542025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:59:23.542214 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:59:23.544915 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 23:59:23.546996 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 23:59:23.548889 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 23:59:23.550982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:59:23.551209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:59:23.553286 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:59:23.553554 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:59:23.554959 augenrules[1403]: No rules
May 14 23:59:23.555635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:59:23.556080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:59:23.558136 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:59:23.558379 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:59:23.560198 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:59:23.560420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:59:23.565639 systemd[1]: Finished ensure-sysext.service.
May 14 23:59:23.571619 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:59:23.571700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:59:23.579923 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 23:59:23.582688 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:59:23.585268 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 23:59:23.586461 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 23:59:23.600944 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 23:59:23.611605 systemd-resolved[1341]: Positive Trust Anchors:
May 14 23:59:23.611619 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:59:23.611650 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:59:23.615206 systemd-udevd[1417]: Using default interface naming scheme 'v255'.
May 14 23:59:23.615649 systemd-resolved[1341]: Defaulting to hostname 'linux'.
May 14 23:59:23.617378 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:59:23.619020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:59:23.635897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:59:23.671947 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:59:23.686835 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1425)
May 14 23:59:23.690609 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 23:59:23.694233 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 23:59:23.696620 systemd[1]: Reached target time-set.target - System Time Set.
May 14 23:59:23.747393 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 23:59:23.749781 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 14 23:59:23.761466 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 23:59:23.766777 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 14 23:59:23.791906 kernel: ACPI: button: Power Button [PWRF]
May 14 23:59:23.800006 systemd-networkd[1439]: lo: Link UP
May 14 23:59:23.800015 systemd-networkd[1439]: lo: Gained carrier
May 14 23:59:23.801542 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 23:59:23.802777 systemd-networkd[1439]: Enumeration completed
May 14 23:59:23.803412 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:59:23.804558 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:59:23.804565 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:59:23.805518 systemd-networkd[1439]: eth0: Link UP
May 14 23:59:23.805574 systemd-networkd[1439]: eth0: Gained carrier
May 14 23:59:23.805620 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:59:23.808331 systemd[1]: Reached target network.target - Network.
May 14 23:59:23.830862 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 23:59:23.830978 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 23:59:23.831783 kernel: mousedev: PS/2 mouse device common for all mice
May 14 23:59:23.832357 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection.
May 14 23:59:24.249161 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 23:59:24.249202 systemd-timesyncd[1416]: Initial clock synchronization to Wed 2025-05-14 23:59:24.249077 UTC.
May 14 23:59:24.249393 systemd-resolved[1341]: Clock change detected. Flushing caches.
May 14 23:59:24.250822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:59:24.305793 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 14 23:59:24.306055 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 14 23:59:24.306210 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 14 23:59:24.307526 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 23:59:24.308456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:59:24.308689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:59:24.319956 kernel: kvm_amd: TSC scaling supported
May 14 23:59:24.319997 kernel: kvm_amd: Nested Virtualization enabled
May 14 23:59:24.320011 kernel: kvm_amd: Nested Paging enabled
May 14 23:59:24.320782 kernel: kvm_amd: LBR virtualization supported
May 14 23:59:24.320806 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 14 23:59:24.320818 kernel: kvm_amd: Virtual GIF supported
May 14 23:59:24.334983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:59:24.339795 kernel: EDAC MC: Ver: 3.0.0
May 14 23:59:24.375039 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 23:59:24.404962 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 23:59:24.406873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:59:24.415086 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:59:24.459824 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 23:59:24.461677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:59:24.463085 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:59:24.464543 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 23:59:24.466325 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 23:59:24.468089 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 23:59:24.477787 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 23:59:24.479404 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 23:59:24.481077 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 23:59:24.481105 systemd[1]: Reached target paths.target - Path Units.
May 14 23:59:24.482244 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:59:24.484230 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 23:59:24.487215 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 23:59:24.499111 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 23:59:24.511558 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 23:59:24.513326 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 23:59:24.514615 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:59:24.515705 systemd[1]: Reached target basic.target - Basic System.
May 14 23:59:24.516795 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 23:59:24.516821 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 23:59:24.517693 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 23:59:24.519868 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 23:59:24.524099 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 23:59:24.528005 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 23:59:24.529908 jq[1475]: false
May 14 23:59:24.530415 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 23:59:24.531621 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:59:24.534037 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 23:59:24.538929 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 23:59:24.544104 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 23:59:24.544112 dbus-daemon[1474]: [system] SELinux support is enabled
May 14 23:59:24.549699 extend-filesystems[1476]: Found loop3
May 14 23:59:24.549699 extend-filesystems[1476]: Found loop4
May 14 23:59:24.549699 extend-filesystems[1476]: Found loop5
May 14 23:59:24.549699 extend-filesystems[1476]: Found sr0
May 14 23:59:24.549699 extend-filesystems[1476]: Found vda
May 14 23:59:24.549699 extend-filesystems[1476]: Found vda1
May 14 23:59:24.549699 extend-filesystems[1476]: Found vda2
May 14 23:59:24.549699 extend-filesystems[1476]: Found vda3
May 14 23:59:24.549699 extend-filesystems[1476]: Found usr
May 14 23:59:24.549699 extend-filesystems[1476]: Found vda4
May 14 23:59:24.549699 extend-filesystems[1476]: Found vda6
May 14 23:59:24.549699 extend-filesystems[1476]: Found vda7
May 14 23:59:24.549699 extend-filesystems[1476]: Found vda9
May 14 23:59:24.549699 extend-filesystems[1476]: Checking size of /dev/vda9
May 14 23:59:24.565874 extend-filesystems[1476]: Resized partition /dev/vda9
May 14 23:59:24.567840 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024)
May 14 23:59:24.567154 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 23:59:24.576916 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1430)
May 14 23:59:24.580429 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 23:59:24.582538 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 23:59:24.583250 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 23:59:24.585085 systemd[1]: Starting update-engine.service - Update Engine...
May 14 23:59:24.587954 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 23:59:24.595145 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 23:59:24.602540 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 14 23:59:24.601123 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 23:59:24.605292 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 23:59:24.605562 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 23:59:24.605994 systemd[1]: motdgen.service: Deactivated successfully.
May 14 23:59:24.606246 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 23:59:24.607325 jq[1496]: true
May 14 23:59:24.610335 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 23:59:24.610602 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 23:59:24.639090 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 23:59:24.644499 jq[1500]: true
May 14 23:59:24.651607 update_engine[1494]: I20250514 23:59:24.651304 1494 main.cc:92] Flatcar Update Engine starting
May 14 23:59:24.652613 update_engine[1494]: I20250514 23:59:24.652575 1494 update_check_scheduler.cc:74] Next update check in 2m40s
May 14 23:59:24.656088 systemd[1]: Started update-engine.service - Update Engine.
May 14 23:59:24.691135 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 23:59:24.691205 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 23:59:24.693347 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 23:59:24.693372 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 23:59:24.702062 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 23:59:24.705376 systemd-logind[1492]: Watching system buttons on /dev/input/event1 (Power Button)
May 14 23:59:24.705408 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 23:59:24.708133 systemd-logind[1492]: New seat seat0.
May 14 23:59:24.710992 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 23:59:24.724538 tar[1499]: linux-amd64/helm
May 14 23:59:24.766803 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 14 23:59:24.802372 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 23:59:25.045554 extend-filesystems[1491]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 23:59:25.045554 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 23:59:25.045554 extend-filesystems[1491]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 14 23:59:25.049590 extend-filesystems[1476]: Resized filesystem in /dev/vda9
May 14 23:59:25.053899 containerd[1501]: time="2025-05-14T23:59:25.047483574Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 14 23:59:25.054247 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 23:59:25.047357 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 23:59:25.048121 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 23:59:25.071974 containerd[1501]: time="2025-05-14T23:59:25.071875374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 14 23:59:25.074111 containerd[1501]: time="2025-05-14T23:59:25.074073238Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 14 23:59:25.074111 containerd[1501]: time="2025-05-14T23:59:25.074100509Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 14 23:59:25.074111 containerd[1501]: time="2025-05-14T23:59:25.074117191Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 14 23:59:25.074340 containerd[1501]: time="2025-05-14T23:59:25.074320262Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 14 23:59:25.074373 containerd[1501]: time="2025-05-14T23:59:25.074342393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 14 23:59:25.074470 containerd[1501]: time="2025-05-14T23:59:25.074416041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:59:25.074470 containerd[1501]: time="2025-05-14T23:59:25.074433634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 14 23:59:25.074651 containerd[1501]: time="2025-05-14T23:59:25.074630133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:59:25.074673 containerd[1501]: time="2025-05-14T23:59:25.074648968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 14 23:59:25.074697 containerd[1501]: time="2025-05-14T23:59:25.074665680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:59:25.074697 containerd[1501]: time="2025-05-14T23:59:25.074689324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 14 23:59:25.074845 containerd[1501]: time="2025-05-14T23:59:25.074805492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 14 23:59:25.075099 containerd[1501]: time="2025-05-14T23:59:25.075078364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 14 23:59:25.075228 containerd[1501]: time="2025-05-14T23:59:25.075206595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:59:25.075228 containerd[1501]: time="2025-05-14T23:59:25.075226242Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 14 23:59:25.075343 containerd[1501]: time="2025-05-14T23:59:25.075324726Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 14 23:59:25.075402 containerd[1501]: time="2025-05-14T23:59:25.075386452Z" level=info msg="metadata content store policy set" policy=shared
May 14 23:59:25.080458 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 23:59:25.099266 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 23:59:25.110712 systemd[1]: issuegen.service: Deactivated successfully.
May 14 23:59:25.111083 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 23:59:25.120012 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 23:59:25.134516 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 23:59:25.143256 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 23:59:25.145734 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 14 23:59:25.147440 systemd[1]: Reached target getty.target - Login Prompts.
May 14 23:59:25.166164 bash[1526]: Updated "/home/core/.ssh/authorized_keys"
May 14 23:59:25.168151 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 23:59:25.170332 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 14 23:59:25.202116 containerd[1501]: time="2025-05-14T23:59:25.202052883Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 14 23:59:25.202242 containerd[1501]: time="2025-05-14T23:59:25.202148552Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 14 23:59:25.202242 containerd[1501]: time="2025-05-14T23:59:25.202187666Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 14 23:59:25.202242 containerd[1501]: time="2025-05-14T23:59:25.202211370Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 14 23:59:25.202242 containerd[1501]: time="2025-05-14T23:59:25.202231859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 14 23:59:25.202480 containerd[1501]: time="2025-05-14T23:59:25.202455548Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 14 23:59:25.202775 containerd[1501]: time="2025-05-14T23:59:25.202742817Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 14 23:59:25.202932 containerd[1501]: time="2025-05-14T23:59:25.202902938Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 14 23:59:25.202932 containerd[1501]: time="2025-05-14T23:59:25.202928245Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 14 23:59:25.203000 containerd[1501]: time="2025-05-14T23:59:25.202945618Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 14 23:59:25.203000 containerd[1501]: time="2025-05-14T23:59:25.202961728Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 14 23:59:25.203000 containerd[1501]: time="2025-05-14T23:59:25.202978900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 14 23:59:25.203000 containerd[1501]: time="2025-05-14T23:59:25.202993768Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 14 23:59:25.203088 containerd[1501]: time="2025-05-14T23:59:25.203011381Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..."
type=io.containerd.service.v1 May 14 23:59:25.203088 containerd[1501]: time="2025-05-14T23:59:25.203028924Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 23:59:25.203088 containerd[1501]: time="2025-05-14T23:59:25.203043702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 23:59:25.203088 containerd[1501]: time="2025-05-14T23:59:25.203058720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 23:59:25.203088 containerd[1501]: time="2025-05-14T23:59:25.203073618Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 23:59:25.203203 containerd[1501]: time="2025-05-14T23:59:25.203098695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203203 containerd[1501]: time="2025-05-14T23:59:25.203115697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203203 containerd[1501]: time="2025-05-14T23:59:25.203130655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203203 containerd[1501]: time="2025-05-14T23:59:25.203146014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203203 containerd[1501]: time="2025-05-14T23:59:25.203161333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203203 containerd[1501]: time="2025-05-14T23:59:25.203179186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203203 containerd[1501]: time="2025-05-14T23:59:25.203193403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 14 23:59:25.203373 containerd[1501]: time="2025-05-14T23:59:25.203209363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203373 containerd[1501]: time="2025-05-14T23:59:25.203225984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203373 containerd[1501]: time="2025-05-14T23:59:25.203244408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203373 containerd[1501]: time="2025-05-14T23:59:25.203260318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203373 containerd[1501]: time="2025-05-14T23:59:25.203276298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203373 containerd[1501]: time="2025-05-14T23:59:25.203294102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203373 containerd[1501]: time="2025-05-14T23:59:25.203312827Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 14 23:59:25.203373 containerd[1501]: time="2025-05-14T23:59:25.203336411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203373 containerd[1501]: time="2025-05-14T23:59:25.203352982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203373 containerd[1501]: time="2025-05-14T23:59:25.203366548Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 23:59:25.203593 containerd[1501]: time="2025-05-14T23:59:25.203430538Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 14 23:59:25.203593 containerd[1501]: time="2025-05-14T23:59:25.203453431Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 14 23:59:25.203593 containerd[1501]: time="2025-05-14T23:59:25.203467667Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 23:59:25.203593 containerd[1501]: time="2025-05-14T23:59:25.203483507Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 14 23:59:25.203593 containerd[1501]: time="2025-05-14T23:59:25.203495770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 23:59:25.203593 containerd[1501]: time="2025-05-14T23:59:25.203511630Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 14 23:59:25.203593 containerd[1501]: time="2025-05-14T23:59:25.203524634Z" level=info msg="NRI interface is disabled by configuration." May 14 23:59:25.203593 containerd[1501]: time="2025-05-14T23:59:25.203537889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 14 23:59:25.204001 containerd[1501]: time="2025-05-14T23:59:25.203931738Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 23:59:25.204001 containerd[1501]: time="2025-05-14T23:59:25.203993885Z" level=info msg="Connect containerd service" May 14 23:59:25.204186 containerd[1501]: time="2025-05-14T23:59:25.204042105Z" level=info msg="using legacy CRI server" May 14 23:59:25.204186 containerd[1501]: time="2025-05-14T23:59:25.204051062Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:59:25.204186 containerd[1501]: time="2025-05-14T23:59:25.204166028Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 23:59:25.205091 containerd[1501]: time="2025-05-14T23:59:25.205058923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:59:25.206150 containerd[1501]: time="2025-05-14T23:59:25.205205358Z" level=info msg="Start subscribing containerd event" May 14 23:59:25.206150 containerd[1501]: time="2025-05-14T23:59:25.205262495Z" level=info msg="Start recovering state" May 14 23:59:25.206150 containerd[1501]: time="2025-05-14T23:59:25.205334490Z" level=info msg="Start event monitor" May 14 23:59:25.206150 containerd[1501]: time="2025-05-14T23:59:25.205360399Z" level=info msg="Start 
snapshots syncer" May 14 23:59:25.206150 containerd[1501]: time="2025-05-14T23:59:25.205371530Z" level=info msg="Start cni network conf syncer for default" May 14 23:59:25.206150 containerd[1501]: time="2025-05-14T23:59:25.205381599Z" level=info msg="Start streaming server" May 14 23:59:25.206150 containerd[1501]: time="2025-05-14T23:59:25.205437864Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:59:25.206150 containerd[1501]: time="2025-05-14T23:59:25.205491174Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 23:59:25.206150 containerd[1501]: time="2025-05-14T23:59:25.206121958Z" level=info msg="containerd successfully booted in 0.203443s" May 14 23:59:25.205636 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:59:25.408192 tar[1499]: linux-amd64/LICENSE May 14 23:59:25.408389 tar[1499]: linux-amd64/README.md May 14 23:59:25.423702 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 23:59:25.710024 systemd-networkd[1439]: eth0: Gained IPv6LL May 14 23:59:25.713648 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 23:59:25.715990 systemd[1]: Reached target network-online.target - Network is Online. May 14 23:59:25.727195 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 23:59:25.730130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:59:25.732608 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 23:59:25.755484 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 23:59:25.755987 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 23:59:25.758040 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:59:25.761044 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 14 23:59:26.357585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:59:26.359526 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 23:59:26.361084 systemd[1]: Startup finished in 785ms (kernel) + 6.643s (initrd) + 4.608s (userspace) = 12.037s. May 14 23:59:26.364431 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:59:26.771169 kubelet[1587]: E0514 23:59:26.771026 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:59:26.775470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:59:26.775688 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:59:28.405340 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 23:59:28.406704 systemd[1]: Started sshd@0-10.0.0.42:22-10.0.0.1:39290.service - OpenSSH per-connection server daemon (10.0.0.1:39290). May 14 23:59:28.464840 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 39290 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 14 23:59:28.466887 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:59:28.475673 systemd-logind[1492]: New session 1 of user core. May 14 23:59:28.476986 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 23:59:28.488964 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 23:59:28.499811 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
May 14 23:59:28.502404 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 23:59:28.509849 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 23:59:28.611843 systemd[1604]: Queued start job for default target default.target. May 14 23:59:28.626985 systemd[1604]: Created slice app.slice - User Application Slice. May 14 23:59:28.627012 systemd[1604]: Reached target paths.target - Paths. May 14 23:59:28.627026 systemd[1604]: Reached target timers.target - Timers. May 14 23:59:28.628438 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 23:59:28.639051 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 23:59:28.639208 systemd[1604]: Reached target sockets.target - Sockets. May 14 23:59:28.639229 systemd[1604]: Reached target basic.target - Basic System. May 14 23:59:28.639277 systemd[1604]: Reached target default.target - Main User Target. May 14 23:59:28.639314 systemd[1604]: Startup finished in 123ms. May 14 23:59:28.639492 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 23:59:28.640996 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 23:59:28.702547 systemd[1]: Started sshd@1-10.0.0.42:22-10.0.0.1:39302.service - OpenSSH per-connection server daemon (10.0.0.1:39302). May 14 23:59:28.745053 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 39302 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 14 23:59:28.746662 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:59:28.750703 systemd-logind[1492]: New session 2 of user core. May 14 23:59:28.756901 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 14 23:59:28.809693 sshd[1617]: Connection closed by 10.0.0.1 port 39302 May 14 23:59:28.810085 sshd-session[1615]: pam_unix(sshd:session): session closed for user core May 14 23:59:28.817516 systemd[1]: sshd@1-10.0.0.42:22-10.0.0.1:39302.service: Deactivated successfully. May 14 23:59:28.819232 systemd[1]: session-2.scope: Deactivated successfully. May 14 23:59:28.820854 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit. May 14 23:59:28.834051 systemd[1]: Started sshd@2-10.0.0.42:22-10.0.0.1:39314.service - OpenSSH per-connection server daemon (10.0.0.1:39314). May 14 23:59:28.834998 systemd-logind[1492]: Removed session 2. May 14 23:59:28.872455 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 39314 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 14 23:59:28.873876 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:59:28.877800 systemd-logind[1492]: New session 3 of user core. May 14 23:59:28.885891 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 23:59:28.934633 sshd[1624]: Connection closed by 10.0.0.1 port 39314 May 14 23:59:28.935034 sshd-session[1622]: pam_unix(sshd:session): session closed for user core May 14 23:59:28.950853 systemd[1]: sshd@2-10.0.0.42:22-10.0.0.1:39314.service: Deactivated successfully. May 14 23:59:28.952664 systemd[1]: session-3.scope: Deactivated successfully. May 14 23:59:28.954570 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit. May 14 23:59:28.956089 systemd[1]: Started sshd@3-10.0.0.42:22-10.0.0.1:39322.service - OpenSSH per-connection server daemon (10.0.0.1:39322). May 14 23:59:28.956929 systemd-logind[1492]: Removed session 3. 
May 14 23:59:29.011665 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 39322 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 14 23:59:29.013970 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:59:29.018519 systemd-logind[1492]: New session 4 of user core. May 14 23:59:29.028938 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 23:59:29.083053 sshd[1631]: Connection closed by 10.0.0.1 port 39322 May 14 23:59:29.083428 sshd-session[1629]: pam_unix(sshd:session): session closed for user core May 14 23:59:29.096555 systemd[1]: sshd@3-10.0.0.42:22-10.0.0.1:39322.service: Deactivated successfully. May 14 23:59:29.098414 systemd[1]: session-4.scope: Deactivated successfully. May 14 23:59:29.100119 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit. May 14 23:59:29.114039 systemd[1]: Started sshd@4-10.0.0.42:22-10.0.0.1:39330.service - OpenSSH per-connection server daemon (10.0.0.1:39330). May 14 23:59:29.114950 systemd-logind[1492]: Removed session 4. May 14 23:59:29.151291 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 39330 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 14 23:59:29.152731 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:59:29.156569 systemd-logind[1492]: New session 5 of user core. May 14 23:59:29.165935 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 23:59:29.223393 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 23:59:29.223739 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:59:29.498995 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 14 23:59:29.499259 (dockerd)[1659]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 23:59:29.751286 dockerd[1659]: time="2025-05-14T23:59:29.751135370Z" level=info msg="Starting up" May 14 23:59:31.651301 dockerd[1659]: time="2025-05-14T23:59:31.651251431Z" level=info msg="Loading containers: start." May 14 23:59:32.108793 kernel: Initializing XFRM netlink socket May 14 23:59:32.190581 systemd-networkd[1439]: docker0: Link UP May 14 23:59:32.363178 dockerd[1659]: time="2025-05-14T23:59:32.363125001Z" level=info msg="Loading containers: done." May 14 23:59:32.381075 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4233812471-merged.mount: Deactivated successfully. May 14 23:59:32.465158 dockerd[1659]: time="2025-05-14T23:59:32.465031110Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 23:59:32.465286 dockerd[1659]: time="2025-05-14T23:59:32.465156946Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 14 23:59:32.465349 dockerd[1659]: time="2025-05-14T23:59:32.465314993Z" level=info msg="Daemon has completed initialization" May 14 23:59:32.931254 dockerd[1659]: time="2025-05-14T23:59:32.931085558Z" level=info msg="API listen on /run/docker.sock" May 14 23:59:32.931321 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 23:59:33.756218 containerd[1501]: time="2025-05-14T23:59:33.756175610Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 23:59:35.989343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605121548.mount: Deactivated successfully. 
May 14 23:59:36.788965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 23:59:36.802910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:59:37.006993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:59:37.016368 (kubelet)[1870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:59:37.487819 kubelet[1870]: E0514 23:59:37.487747 1870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:59:37.493739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:59:37.493981 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 14 23:59:44.490259 containerd[1501]: time="2025-05-14T23:59:44.490198464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:59:44.544017 containerd[1501]: time="2025-05-14T23:59:44.543975703Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 14 23:59:44.591991 containerd[1501]: time="2025-05-14T23:59:44.591926379Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:59:44.615129 containerd[1501]: time="2025-05-14T23:59:44.615091487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:59:44.616130 containerd[1501]: time="2025-05-14T23:59:44.616081405Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 10.859864477s" May 14 23:59:44.616130 containerd[1501]: time="2025-05-14T23:59:44.616124105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 14 23:59:44.631222 containerd[1501]: time="2025-05-14T23:59:44.631197920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 23:59:47.539054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 14 23:59:47.547920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:59:47.684789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:59:47.689188 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:59:47.741500 kubelet[1939]: E0514 23:59:47.741441 1939 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:59:47.745422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:59:47.745654 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:59:48.913584 containerd[1501]: time="2025-05-14T23:59:48.913526272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:59:48.951192 containerd[1501]: time="2025-05-14T23:59:48.951141965Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 14 23:59:49.011257 containerd[1501]: time="2025-05-14T23:59:49.011181134Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:59:49.082310 containerd[1501]: time="2025-05-14T23:59:49.082247094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:59:49.083318 containerd[1501]: time="2025-05-14T23:59:49.083260165Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 4.452033521s" May 14 23:59:49.083318 containerd[1501]: time="2025-05-14T23:59:49.083300250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 14 23:59:49.083818 containerd[1501]: time="2025-05-14T23:59:49.083763680Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 23:59:53.795683 containerd[1501]: time="2025-05-14T23:59:53.795629128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:59:53.850032 containerd[1501]: time="2025-05-14T23:59:53.849941481Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 14 23:59:53.913788 containerd[1501]: time="2025-05-14T23:59:53.913693166Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:59:53.962526 containerd[1501]: time="2025-05-14T23:59:53.962463580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:59:53.963503 containerd[1501]: time="2025-05-14T23:59:53.963458728Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id 
\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 4.879649342s" May 14 23:59:53.963503 containerd[1501]: time="2025-05-14T23:59:53.963502520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 14 23:59:53.964012 containerd[1501]: time="2025-05-14T23:59:53.963990045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 23:59:57.433355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992635777.mount: Deactivated successfully. May 14 23:59:57.789107 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 23:59:57.796022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:59:57.938615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:59:57.944002 (kubelet)[1967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:59:57.987501 kubelet[1967]: E0514 23:59:57.987396 1967 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:59:57.991818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:59:57.992024 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 15 00:00:00.400735 containerd[1501]: time="2025-05-15T00:00:00.400650661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:00.446492 containerd[1501]: time="2025-05-15T00:00:00.446434659Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 15 00:00:00.493778 containerd[1501]: time="2025-05-15T00:00:00.493719414Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:00.504553 containerd[1501]: time="2025-05-15T00:00:00.504517981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:00.505138 containerd[1501]: time="2025-05-15T00:00:00.505098193Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 6.541083612s" May 15 00:00:00.505138 containerd[1501]: time="2025-05-15T00:00:00.505125094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 15 00:00:00.505757 containerd[1501]: time="2025-05-15T00:00:00.505586451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 00:00:03.690054 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 15 00:00:03.691314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545086855.mount: Deactivated successfully. 
May 15 00:00:03.791860 systemd[1]: logrotate.service: Deactivated successfully. May 15 00:00:08.039067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 15 00:00:08.048037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:00:08.191968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:00:08.197469 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:00:08.488668 kubelet[1995]: E0515 00:00:08.488511 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:00:08.492728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:00:08.492950 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:00:10.325080 update_engine[1494]: I20250515 00:00:10.324987 1494 update_attempter.cc:509] Updating boot flags... 
May 15 00:00:10.358949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2047) May 15 00:00:10.391259 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2050) May 15 00:00:10.426821 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2050) May 15 00:00:10.687148 containerd[1501]: time="2025-05-15T00:00:10.687009640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:10.688367 containerd[1501]: time="2025-05-15T00:00:10.688313552Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 00:00:10.690147 containerd[1501]: time="2025-05-15T00:00:10.690088425Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:10.693834 containerd[1501]: time="2025-05-15T00:00:10.693749540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:10.695599 containerd[1501]: time="2025-05-15T00:00:10.695537898Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 10.189917753s" May 15 00:00:10.695663 containerd[1501]: time="2025-05-15T00:00:10.695596568Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference 
\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 00:00:10.696390 containerd[1501]: time="2025-05-15T00:00:10.696338058Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 00:00:11.454192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075243264.mount: Deactivated successfully. May 15 00:00:11.461679 containerd[1501]: time="2025-05-15T00:00:11.461648974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:11.462519 containerd[1501]: time="2025-05-15T00:00:11.462471637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 00:00:11.463567 containerd[1501]: time="2025-05-15T00:00:11.463538963Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:11.465970 containerd[1501]: time="2025-05-15T00:00:11.465936819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:11.466586 containerd[1501]: time="2025-05-15T00:00:11.466552352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 770.177362ms" May 15 00:00:11.466586 containerd[1501]: time="2025-05-15T00:00:11.466577339Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 00:00:11.467064 containerd[1501]: time="2025-05-15T00:00:11.467005257Z" 
level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 00:00:12.876796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146001820.mount: Deactivated successfully. May 15 00:00:18.539002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 15 00:00:18.547932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:00:18.763677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:00:18.769505 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:00:18.813824 kubelet[2103]: E0515 00:00:18.812884 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:00:18.817210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:00:18.817455 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 15 00:00:22.595189 containerd[1501]: time="2025-05-15T00:00:22.595119366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:22.616138 containerd[1501]: time="2025-05-15T00:00:22.616058455Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 15 00:00:22.648984 containerd[1501]: time="2025-05-15T00:00:22.648945502Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:22.664898 containerd[1501]: time="2025-05-15T00:00:22.664820942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:00:22.665986 containerd[1501]: time="2025-05-15T00:00:22.665952942Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 11.198923449s" May 15 00:00:22.665986 containerd[1501]: time="2025-05-15T00:00:22.665983880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 15 00:00:25.152293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:00:25.162971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:00:25.188191 systemd[1]: Reloading requested from client PID 2159 ('systemctl') (unit session-5.scope)... May 15 00:00:25.188208 systemd[1]: Reloading... 
May 15 00:00:25.284805 zram_generator::config[2201]: No configuration found. May 15 00:00:26.254941 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:00:26.339006 systemd[1]: Reloading finished in 1150 ms. May 15 00:00:26.387483 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 00:00:26.387581 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 00:00:26.388016 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:00:26.389675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:00:26.550911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:00:26.556227 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:00:26.599517 kubelet[2246]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:00:26.599517 kubelet[2246]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:00:26.599517 kubelet[2246]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:00:26.600689 kubelet[2246]: I0515 00:00:26.600632 2246 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:00:26.712448 kubelet[2246]: I0515 00:00:26.712395 2246 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:00:26.712448 kubelet[2246]: I0515 00:00:26.712428 2246 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:00:26.712684 kubelet[2246]: I0515 00:00:26.712664 2246 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:00:26.734092 kubelet[2246]: I0515 00:00:26.734046 2246 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:00:26.734254 kubelet[2246]: E0515 00:00:26.734110 2246 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:26.742865 kubelet[2246]: E0515 00:00:26.742821 2246 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:00:26.742865 kubelet[2246]: I0515 00:00:26.742856 2246 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:00:26.748687 kubelet[2246]: I0515 00:00:26.748632 2246 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:00:26.749619 kubelet[2246]: I0515 00:00:26.749586 2246 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:00:26.749789 kubelet[2246]: I0515 00:00:26.749741 2246 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:00:26.749971 kubelet[2246]: I0515 00:00:26.749789 2246 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 15 00:00:26.749971 kubelet[2246]: I0515 00:00:26.749953 2246 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:00:26.749971 kubelet[2246]: I0515 00:00:26.749963 2246 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:00:26.750130 kubelet[2246]: I0515 00:00:26.750089 2246 state_mem.go:36] "Initialized new in-memory state store" May 15 00:00:26.751539 kubelet[2246]: I0515 00:00:26.751511 2246 kubelet.go:408] "Attempting to sync node with API server" May 15 00:00:26.751539 kubelet[2246]: I0515 00:00:26.751536 2246 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:00:26.751624 kubelet[2246]: I0515 00:00:26.751573 2246 kubelet.go:314] "Adding apiserver pod source" May 15 00:00:26.751624 kubelet[2246]: I0515 00:00:26.751595 2246 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:00:26.754738 kubelet[2246]: W0515 00:00:26.754687 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:26.754738 kubelet[2246]: E0515 00:00:26.754735 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:26.755829 kubelet[2246]: W0515 00:00:26.755723 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:26.755885 kubelet[2246]: E0515 
00:00:26.755848 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:26.757062 kubelet[2246]: I0515 00:00:26.757033 2246 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:00:26.758491 kubelet[2246]: I0515 00:00:26.758474 2246 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:00:26.759318 kubelet[2246]: W0515 00:00:26.759295 2246 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 00:00:26.760719 kubelet[2246]: I0515 00:00:26.760182 2246 server.go:1269] "Started kubelet" May 15 00:00:26.760719 kubelet[2246]: I0515 00:00:26.760263 2246 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:00:26.760719 kubelet[2246]: I0515 00:00:26.760656 2246 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:00:26.761251 kubelet[2246]: I0515 00:00:26.761223 2246 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:00:26.761343 kubelet[2246]: I0515 00:00:26.761307 2246 server.go:460] "Adding debug handlers to kubelet server" May 15 00:00:26.761564 kubelet[2246]: I0515 00:00:26.761535 2246 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:00:26.762200 kubelet[2246]: I0515 00:00:26.762178 2246 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:00:26.762825 kubelet[2246]: I0515 00:00:26.762299 2246 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 
00:00:26.762825 kubelet[2246]: I0515 00:00:26.762351 2246 reconciler.go:26] "Reconciler: start to sync state" May 15 00:00:26.762825 kubelet[2246]: W0515 00:00:26.762620 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:26.762825 kubelet[2246]: E0515 00:00:26.762664 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:26.762825 kubelet[2246]: E0515 00:00:26.762712 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:26.763200 kubelet[2246]: E0515 00:00:26.763143 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="200ms" May 15 00:00:26.766451 kubelet[2246]: I0515 00:00:26.764616 2246 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:00:26.766451 kubelet[2246]: I0515 00:00:26.764999 2246 factory.go:221] Registration of the systemd container factory successfully May 15 00:00:26.766451 kubelet[2246]: I0515 00:00:26.765079 2246 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:00:26.768564 kubelet[2246]: I0515 00:00:26.768155 2246 factory.go:221] Registration of the 
containerd container factory successfully May 15 00:00:26.771987 kubelet[2246]: E0515 00:00:26.766679 2246 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.42:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.42:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8a490ab89a08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:00:26.760157704 +0000 UTC m=+0.199895447,LastTimestamp:2025-05-15 00:00:26.760157704 +0000 UTC m=+0.199895447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:00:26.782067 kubelet[2246]: I0515 00:00:26.782022 2246 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:00:26.782067 kubelet[2246]: I0515 00:00:26.782043 2246 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:00:26.782067 kubelet[2246]: I0515 00:00:26.782069 2246 state_mem.go:36] "Initialized new in-memory state store" May 15 00:00:26.785982 kubelet[2246]: I0515 00:00:26.785915 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:00:26.787482 kubelet[2246]: I0515 00:00:26.787447 2246 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:00:26.787482 kubelet[2246]: I0515 00:00:26.787469 2246 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:00:26.787587 kubelet[2246]: I0515 00:00:26.787491 2246 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:00:26.787587 kubelet[2246]: E0515 00:00:26.787532 2246 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:00:26.789539 kubelet[2246]: W0515 00:00:26.788984 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:26.789539 kubelet[2246]: E0515 00:00:26.789057 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:26.863270 kubelet[2246]: E0515 00:00:26.863079 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:26.888482 kubelet[2246]: E0515 00:00:26.888410 2246 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 00:00:26.964197 kubelet[2246]: E0515 00:00:26.964143 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:26.964582 kubelet[2246]: E0515 00:00:26.964543 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: 
connection refused" interval="400ms" May 15 00:00:27.065121 kubelet[2246]: E0515 00:00:27.065063 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:27.088715 kubelet[2246]: E0515 00:00:27.088642 2246 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 00:00:27.165353 kubelet[2246]: E0515 00:00:27.165181 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:27.265373 kubelet[2246]: E0515 00:00:27.265306 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:27.356854 kubelet[2246]: I0515 00:00:27.356763 2246 policy_none.go:49] "None policy: Start" May 15 00:00:27.357734 kubelet[2246]: I0515 00:00:27.357712 2246 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:00:27.357819 kubelet[2246]: I0515 00:00:27.357743 2246 state_mem.go:35] "Initializing new in-memory state store" May 15 00:00:27.365404 kubelet[2246]: E0515 00:00:27.365369 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:27.365563 kubelet[2246]: E0515 00:00:27.365463 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="800ms" May 15 00:00:27.410493 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 00:00:27.421159 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 00:00:27.424742 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 15 00:00:27.437884 kubelet[2246]: I0515 00:00:27.437845 2246 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:00:27.438109 kubelet[2246]: I0515 00:00:27.438095 2246 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:00:27.438148 kubelet[2246]: I0515 00:00:27.438109 2246 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:00:27.438681 kubelet[2246]: I0515 00:00:27.438314 2246 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:00:27.439469 kubelet[2246]: E0515 00:00:27.439446 2246 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 00:00:27.469891 kubelet[2246]: E0515 00:00:27.469706 2246 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.42:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.42:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8a490ab89a08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:00:26.760157704 +0000 UTC m=+0.199895447,LastTimestamp:2025-05-15 00:00:26.760157704 +0000 UTC m=+0.199895447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:00:27.497249 systemd[1]: Created slice kubepods-burstable-pode4fc9344942f10a9f39031456cc2fc2c.slice - libcontainer container kubepods-burstable-pode4fc9344942f10a9f39031456cc2fc2c.slice. 
May 15 00:00:27.520089 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 15 00:00:27.530831 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 15 00:00:27.539547 kubelet[2246]: I0515 00:00:27.539510 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:00:27.539952 kubelet[2246]: E0515 00:00:27.539921 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" May 15 00:00:27.566639 kubelet[2246]: I0515 00:00:27.566571 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:00:27.566639 kubelet[2246]: I0515 00:00:27.566636 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 00:00:27.566639 kubelet[2246]: I0515 00:00:27.566660 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4fc9344942f10a9f39031456cc2fc2c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e4fc9344942f10a9f39031456cc2fc2c\") " 
pod="kube-system/kube-apiserver-localhost" May 15 00:00:27.566890 kubelet[2246]: I0515 00:00:27.566684 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:00:27.566890 kubelet[2246]: I0515 00:00:27.566702 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:00:27.566890 kubelet[2246]: I0515 00:00:27.566717 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:00:27.566890 kubelet[2246]: I0515 00:00:27.566734 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4fc9344942f10a9f39031456cc2fc2c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4fc9344942f10a9f39031456cc2fc2c\") " pod="kube-system/kube-apiserver-localhost" May 15 00:00:27.566890 kubelet[2246]: I0515 00:00:27.566751 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4fc9344942f10a9f39031456cc2fc2c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4fc9344942f10a9f39031456cc2fc2c\") " 
pod="kube-system/kube-apiserver-localhost" May 15 00:00:27.567024 kubelet[2246]: I0515 00:00:27.566802 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:00:27.742670 kubelet[2246]: I0515 00:00:27.742029 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:00:27.743218 kubelet[2246]: E0515 00:00:27.743167 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" May 15 00:00:27.818418 kubelet[2246]: E0515 00:00:27.818303 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:27.819111 containerd[1501]: time="2025-05-15T00:00:27.819072217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e4fc9344942f10a9f39031456cc2fc2c,Namespace:kube-system,Attempt:0,}" May 15 00:00:27.829519 kubelet[2246]: E0515 00:00:27.829463 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:27.830189 containerd[1501]: time="2025-05-15T00:00:27.830123549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 15 00:00:27.833676 kubelet[2246]: E0515 00:00:27.833633 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:27.834213 containerd[1501]: time="2025-05-15T00:00:27.834029604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 15 00:00:27.846570 kubelet[2246]: W0515 00:00:27.846528 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:27.846710 kubelet[2246]: E0515 00:00:27.846586 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:27.978586 kubelet[2246]: W0515 00:00:27.978497 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:27.978586 kubelet[2246]: E0515 00:00:27.978588 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:28.010867 kubelet[2246]: W0515 00:00:28.008511 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 
00:00:28.010867 kubelet[2246]: E0515 00:00:28.008599 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:28.144923 kubelet[2246]: I0515 00:00:28.144888 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:00:28.145391 kubelet[2246]: E0515 00:00:28.145336 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" May 15 00:00:28.166050 kubelet[2246]: E0515 00:00:28.165987 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="1.6s" May 15 00:00:28.194971 kubelet[2246]: W0515 00:00:28.194881 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:28.194971 kubelet[2246]: E0515 00:00:28.194977 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:28.849065 kubelet[2246]: E0515 00:00:28.849002 2246 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from 
the control plane: cannot create certificate signing request: Post \"https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:28.947008 kubelet[2246]: I0515 00:00:28.946955 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:00:28.947376 kubelet[2246]: E0515 00:00:28.947352 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" May 15 00:00:29.699426 kubelet[2246]: W0515 00:00:29.699354 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:29.699426 kubelet[2246]: E0515 00:00:29.699415 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:29.766723 kubelet[2246]: E0515 00:00:29.766655 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="3.2s" May 15 00:00:30.545860 kubelet[2246]: W0515 00:00:30.545805 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:30.545860 
kubelet[2246]: E0515 00:00:30.545857 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:30.549156 kubelet[2246]: I0515 00:00:30.549121 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:00:30.549578 kubelet[2246]: E0515 00:00:30.549542 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" May 15 00:00:30.557246 kubelet[2246]: W0515 00:00:30.557210 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:30.557287 kubelet[2246]: E0515 00:00:30.557263 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:30.841302 kubelet[2246]: W0515 00:00:30.841162 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:30.841302 kubelet[2246]: E0515 00:00:30.841218 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:32.676360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2562741138.mount: Deactivated successfully. May 15 00:00:32.934566 kubelet[2246]: E0515 00:00:32.934411 2246 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:32.967568 kubelet[2246]: E0515 00:00:32.967493 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="6.4s" May 15 00:00:33.321059 containerd[1501]: time="2025-05-15T00:00:33.320981498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:00:33.392437 containerd[1501]: time="2025-05-15T00:00:33.392343752Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:00:33.487009 containerd[1501]: time="2025-05-15T00:00:33.486932715Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 15 00:00:33.542386 containerd[1501]: time="2025-05-15T00:00:33.542290976Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:00:33.713340 containerd[1501]: 
time="2025-05-15T00:00:33.713163002Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:00:33.751689 kubelet[2246]: I0515 00:00:33.751661 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:00:33.752153 kubelet[2246]: E0515 00:00:33.752102 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" May 15 00:00:33.815255 containerd[1501]: time="2025-05-15T00:00:33.815166921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:00:33.922530 containerd[1501]: time="2025-05-15T00:00:33.922439149Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:00:34.017855 containerd[1501]: time="2025-05-15T00:00:34.017692118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:00:34.018724 containerd[1501]: time="2025-05-15T00:00:34.018673843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.184564507s" May 15 00:00:34.019466 containerd[1501]: time="2025-05-15T00:00:34.019440042Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with 
image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.200250436s" May 15 00:00:34.209175 containerd[1501]: time="2025-05-15T00:00:34.209104211Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.378856959s" May 15 00:00:34.246316 kubelet[2246]: W0515 00:00:34.246255 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:34.246734 kubelet[2246]: E0515 00:00:34.246327 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:34.526933 kubelet[2246]: W0515 00:00:34.526865 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:34.526933 kubelet[2246]: E0515 00:00:34.526926 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:34.635005 kubelet[2246]: W0515 00:00:34.634948 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:34.635075 kubelet[2246]: E0515 00:00:34.635009 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:34.832142 kubelet[2246]: W0515 00:00:34.832007 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 15 00:00:34.832142 kubelet[2246]: E0515 00:00:34.832079 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" May 15 00:00:35.438426 containerd[1501]: time="2025-05-15T00:00:35.437652818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:00:35.438426 containerd[1501]: time="2025-05-15T00:00:35.438404300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:00:35.438426 containerd[1501]: time="2025-05-15T00:00:35.438418567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:00:35.438895 containerd[1501]: time="2025-05-15T00:00:35.438500891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:00:35.485996 systemd[1]: Started cri-containerd-a07509f333ffabc81f49edc3dd5a635e316fb518e9adc18518a6077a022e30ee.scope - libcontainer container a07509f333ffabc81f49edc3dd5a635e316fb518e9adc18518a6077a022e30ee. May 15 00:00:35.496620 containerd[1501]: time="2025-05-15T00:00:35.496414174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:00:35.496620 containerd[1501]: time="2025-05-15T00:00:35.496476391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:00:35.496620 containerd[1501]: time="2025-05-15T00:00:35.496490097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:00:35.496620 containerd[1501]: time="2025-05-15T00:00:35.496567282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:00:35.518981 systemd[1]: Started cri-containerd-028713bdcccda64857613c41a3abb4087a26a2a1c7d45c56318cf34627efff23.scope - libcontainer container 028713bdcccda64857613c41a3abb4087a26a2a1c7d45c56318cf34627efff23. 
May 15 00:00:35.561528 containerd[1501]: time="2025-05-15T00:00:35.561448897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a07509f333ffabc81f49edc3dd5a635e316fb518e9adc18518a6077a022e30ee\"" May 15 00:00:35.562868 kubelet[2246]: E0515 00:00:35.562745 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:35.564918 containerd[1501]: time="2025-05-15T00:00:35.564795525Z" level=info msg="CreateContainer within sandbox \"a07509f333ffabc81f49edc3dd5a635e316fb518e9adc18518a6077a022e30ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:00:35.571751 containerd[1501]: time="2025-05-15T00:00:35.571623875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:00:35.571751 containerd[1501]: time="2025-05-15T00:00:35.571675572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:00:35.571751 containerd[1501]: time="2025-05-15T00:00:35.571686202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:00:35.571930 containerd[1501]: time="2025-05-15T00:00:35.571843316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:00:35.586059 containerd[1501]: time="2025-05-15T00:00:35.586008852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e4fc9344942f10a9f39031456cc2fc2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"028713bdcccda64857613c41a3abb4087a26a2a1c7d45c56318cf34627efff23\"" May 15 00:00:35.587397 kubelet[2246]: E0515 00:00:35.586899 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:35.588303 containerd[1501]: time="2025-05-15T00:00:35.588272556Z" level=info msg="CreateContainer within sandbox \"028713bdcccda64857613c41a3abb4087a26a2a1c7d45c56318cf34627efff23\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:00:35.602007 systemd[1]: Started cri-containerd-c73f29705123dec9db1913de438b789e7dd7b4d4073af2fab044e1c51492d07a.scope - libcontainer container c73f29705123dec9db1913de438b789e7dd7b4d4073af2fab044e1c51492d07a. 
May 15 00:00:35.645683 containerd[1501]: time="2025-05-15T00:00:35.645628420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c73f29705123dec9db1913de438b789e7dd7b4d4073af2fab044e1c51492d07a\"" May 15 00:00:35.646678 kubelet[2246]: E0515 00:00:35.646637 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:35.648380 containerd[1501]: time="2025-05-15T00:00:35.648347669Z" level=info msg="CreateContainer within sandbox \"c73f29705123dec9db1913de438b789e7dd7b4d4073af2fab044e1c51492d07a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:00:37.439753 kubelet[2246]: E0515 00:00:37.439696 2246 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 00:00:37.471516 kubelet[2246]: E0515 00:00:37.471389 2246 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.42:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.42:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8a490ab89a08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:00:26.760157704 +0000 UTC m=+0.199895447,LastTimestamp:2025-05-15 00:00:26.760157704 +0000 UTC m=+0.199895447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:00:37.547388 containerd[1501]: time="2025-05-15T00:00:37.547263477Z" 
level=info msg="CreateContainer within sandbox \"a07509f333ffabc81f49edc3dd5a635e316fb518e9adc18518a6077a022e30ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0000748e3788268b369b800f0884ef3e04092d289b293a8c095914302357b70\"" May 15 00:00:37.571682 containerd[1501]: time="2025-05-15T00:00:37.548283072Z" level=info msg="StartContainer for \"f0000748e3788268b369b800f0884ef3e04092d289b293a8c095914302357b70\"" May 15 00:00:37.578026 systemd[1]: Started cri-containerd-f0000748e3788268b369b800f0884ef3e04092d289b293a8c095914302357b70.scope - libcontainer container f0000748e3788268b369b800f0884ef3e04092d289b293a8c095914302357b70. May 15 00:00:37.698392 containerd[1501]: time="2025-05-15T00:00:37.698151723Z" level=info msg="StartContainer for \"f0000748e3788268b369b800f0884ef3e04092d289b293a8c095914302357b70\" returns successfully" May 15 00:00:37.790325 containerd[1501]: time="2025-05-15T00:00:37.790247784Z" level=info msg="CreateContainer within sandbox \"c73f29705123dec9db1913de438b789e7dd7b4d4073af2fab044e1c51492d07a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f3e74a7ba0eeec043a4bf7f0ac078ade90868462ee2875b14a090833395c12f2\"" May 15 00:00:37.790848 containerd[1501]: time="2025-05-15T00:00:37.790808978Z" level=info msg="StartContainer for \"f3e74a7ba0eeec043a4bf7f0ac078ade90868462ee2875b14a090833395c12f2\"" May 15 00:00:37.811911 kubelet[2246]: E0515 00:00:37.811413 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:37.829062 systemd[1]: Started cri-containerd-f3e74a7ba0eeec043a4bf7f0ac078ade90868462ee2875b14a090833395c12f2.scope - libcontainer container f3e74a7ba0eeec043a4bf7f0ac078ade90868462ee2875b14a090833395c12f2. 
May 15 00:00:38.092858 containerd[1501]: time="2025-05-15T00:00:38.092458029Z" level=info msg="StartContainer for \"f3e74a7ba0eeec043a4bf7f0ac078ade90868462ee2875b14a090833395c12f2\" returns successfully" May 15 00:00:38.092858 containerd[1501]: time="2025-05-15T00:00:38.092493105Z" level=info msg="CreateContainer within sandbox \"028713bdcccda64857613c41a3abb4087a26a2a1c7d45c56318cf34627efff23\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"89c3e77b6613024aca2af02c3fd15a2eaab8665b9888aa3bf095195fd71f26dc\"" May 15 00:00:38.094198 containerd[1501]: time="2025-05-15T00:00:38.093128779Z" level=info msg="StartContainer for \"89c3e77b6613024aca2af02c3fd15a2eaab8665b9888aa3bf095195fd71f26dc\"" May 15 00:00:38.132950 systemd[1]: Started cri-containerd-89c3e77b6613024aca2af02c3fd15a2eaab8665b9888aa3bf095195fd71f26dc.scope - libcontainer container 89c3e77b6613024aca2af02c3fd15a2eaab8665b9888aa3bf095195fd71f26dc. May 15 00:00:38.338846 containerd[1501]: time="2025-05-15T00:00:38.338244317Z" level=info msg="StartContainer for \"89c3e77b6613024aca2af02c3fd15a2eaab8665b9888aa3bf095195fd71f26dc\" returns successfully" May 15 00:00:38.814532 kubelet[2246]: E0515 00:00:38.814386 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:38.815894 kubelet[2246]: E0515 00:00:38.815875 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:38.816409 kubelet[2246]: E0515 00:00:38.816381 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:39.817992 kubelet[2246]: E0515 00:00:39.817957 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:39.817992 kubelet[2246]: E0515 00:00:39.817984 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:40.154328 kubelet[2246]: I0515 00:00:40.154225 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:00:40.166815 kubelet[2246]: E0515 00:00:40.166554 2246 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 00:00:40.521957 kubelet[2246]: E0515 00:00:40.521817 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:40.540246 kubelet[2246]: I0515 00:00:40.540183 2246 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 00:00:40.540246 kubelet[2246]: E0515 00:00:40.540232 2246 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 00:00:40.784843 kubelet[2246]: E0515 00:00:40.784605 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:40.819327 kubelet[2246]: E0515 00:00:40.819292 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:40.884841 kubelet[2246]: E0515 00:00:40.884791 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:40.985460 kubelet[2246]: E0515 00:00:40.985395 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" May 15 00:00:41.086121 kubelet[2246]: E0515 00:00:41.086046 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:41.186675 kubelet[2246]: E0515 00:00:41.186624 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:41.287685 kubelet[2246]: E0515 00:00:41.287640 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:41.388447 kubelet[2246]: E0515 00:00:41.388301 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:41.489253 kubelet[2246]: E0515 00:00:41.489193 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:41.589941 kubelet[2246]: E0515 00:00:41.589851 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:41.691122 kubelet[2246]: E0515 00:00:41.690975 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:41.791762 kubelet[2246]: E0515 00:00:41.791692 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:41.892107 kubelet[2246]: E0515 00:00:41.892049 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:41.993080 kubelet[2246]: E0515 00:00:41.992855 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:42.094059 kubelet[2246]: E0515 00:00:42.093986 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:42.194672 kubelet[2246]: E0515 00:00:42.194604 2246 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:42.295332 kubelet[2246]: E0515 00:00:42.295260 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:42.396100 kubelet[2246]: E0515 00:00:42.396018 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:42.497062 kubelet[2246]: E0515 00:00:42.496987 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:42.598095 kubelet[2246]: E0515 00:00:42.597935 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:42.698741 kubelet[2246]: E0515 00:00:42.698649 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:42.799365 kubelet[2246]: E0515 00:00:42.799314 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:42.900297 kubelet[2246]: E0515 00:00:42.900168 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:43.000745 kubelet[2246]: E0515 00:00:43.000661 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:43.101190 kubelet[2246]: E0515 00:00:43.101129 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:43.201729 kubelet[2246]: E0515 00:00:43.201577 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:43.302794 kubelet[2246]: E0515 00:00:43.302695 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" 
not found" May 15 00:00:43.403368 kubelet[2246]: E0515 00:00:43.403315 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:43.503809 kubelet[2246]: E0515 00:00:43.503493 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:43.604397 kubelet[2246]: E0515 00:00:43.604223 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:43.704999 kubelet[2246]: E0515 00:00:43.704939 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:43.805626 kubelet[2246]: E0515 00:00:43.805556 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:43.906249 kubelet[2246]: E0515 00:00:43.906169 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:44.006855 kubelet[2246]: E0515 00:00:44.006799 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:44.107641 kubelet[2246]: E0515 00:00:44.107463 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:44.208295 kubelet[2246]: E0515 00:00:44.208232 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:44.309113 kubelet[2246]: E0515 00:00:44.309065 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:44.410084 kubelet[2246]: E0515 00:00:44.409864 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:44.510759 kubelet[2246]: E0515 00:00:44.510679 2246 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:44.611081 kubelet[2246]: E0515 00:00:44.611005 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:44.712338 kubelet[2246]: E0515 00:00:44.712163 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:44.813049 kubelet[2246]: E0515 00:00:44.812936 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:44.913743 kubelet[2246]: E0515 00:00:44.913663 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:45.014157 kubelet[2246]: E0515 00:00:45.013847 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:45.021146 systemd[1]: Reloading requested from client PID 2529 ('systemctl') (unit session-5.scope)... May 15 00:00:45.021165 systemd[1]: Reloading... May 15 00:00:45.116803 kubelet[2246]: E0515 00:00:45.115204 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:45.121719 zram_generator::config[2568]: No configuration found. May 15 00:00:45.215584 kubelet[2246]: E0515 00:00:45.215527 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:45.255209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 15 00:00:45.260527 kubelet[2246]: E0515 00:00:45.260494 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:45.316288 kubelet[2246]: E0515 00:00:45.316235 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:45.352952 systemd[1]: Reloading finished in 331 ms. May 15 00:00:45.401261 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:00:45.422558 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:00:45.422963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:00:45.439382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:00:45.595177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:00:45.600584 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:00:45.652467 kubelet[2613]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:00:45.652467 kubelet[2613]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:00:45.652467 kubelet[2613]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:00:45.652905 kubelet[2613]: I0515 00:00:45.652523 2613 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:00:45.659914 kubelet[2613]: I0515 00:00:45.659872 2613 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:00:45.659914 kubelet[2613]: I0515 00:00:45.659902 2613 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:00:45.660951 kubelet[2613]: I0515 00:00:45.660910 2613 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:00:45.662656 kubelet[2613]: I0515 00:00:45.662620 2613 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 00:00:45.664965 kubelet[2613]: I0515 00:00:45.664923 2613 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:00:45.668800 kubelet[2613]: E0515 00:00:45.668745 2613 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:00:45.668800 kubelet[2613]: I0515 00:00:45.668794 2613 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:00:45.674153 kubelet[2613]: I0515 00:00:45.674108 2613 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:00:45.674538 kubelet[2613]: I0515 00:00:45.674509 2613 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:00:45.674752 kubelet[2613]: I0515 00:00:45.674699 2613 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:00:45.674939 kubelet[2613]: I0515 00:00:45.674736 2613 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 15 00:00:45.675023 kubelet[2613]: I0515 00:00:45.674945 2613 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:00:45.675023 kubelet[2613]: I0515 00:00:45.674956 2613 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:00:45.675023 kubelet[2613]: I0515 00:00:45.674990 2613 state_mem.go:36] "Initialized new in-memory state store" May 15 00:00:45.675124 kubelet[2613]: I0515 00:00:45.675109 2613 kubelet.go:408] "Attempting to sync node with API server" May 15 00:00:45.675124 kubelet[2613]: I0515 00:00:45.675123 2613 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:00:45.675176 kubelet[2613]: I0515 00:00:45.675153 2613 kubelet.go:314] "Adding apiserver pod source" May 15 00:00:45.675176 kubelet[2613]: I0515 00:00:45.675168 2613 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:00:45.675874 kubelet[2613]: I0515 00:00:45.675734 2613 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:00:45.678787 kubelet[2613]: I0515 00:00:45.676892 2613 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:00:45.678787 kubelet[2613]: I0515 00:00:45.677401 2613 server.go:1269] "Started kubelet" May 15 00:00:45.678787 kubelet[2613]: I0515 00:00:45.677921 2613 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:00:45.678787 kubelet[2613]: I0515 00:00:45.678036 2613 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:00:45.678787 kubelet[2613]: I0515 00:00:45.678340 2613 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:00:45.679536 kubelet[2613]: I0515 00:00:45.679519 2613 server.go:460] "Adding debug handlers to kubelet server" May 15 00:00:45.683981 
kubelet[2613]: I0515 00:00:45.683956 2613 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:00:45.685056 kubelet[2613]: I0515 00:00:45.685023 2613 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:00:45.687091 kubelet[2613]: I0515 00:00:45.687063 2613 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:00:45.687287 kubelet[2613]: E0515 00:00:45.687261 2613 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:00:45.688789 kubelet[2613]: I0515 00:00:45.688744 2613 factory.go:221] Registration of the systemd container factory successfully May 15 00:00:45.688879 kubelet[2613]: I0515 00:00:45.688852 2613 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:00:45.689188 kubelet[2613]: E0515 00:00:45.689166 2613 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:00:45.689227 kubelet[2613]: I0515 00:00:45.689190 2613 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 00:00:45.691175 kubelet[2613]: I0515 00:00:45.691154 2613 reconciler.go:26] "Reconciler: start to sync state" May 15 00:00:45.694652 kubelet[2613]: I0515 00:00:45.694617 2613 factory.go:221] Registration of the containerd container factory successfully May 15 00:00:45.702048 kubelet[2613]: I0515 00:00:45.702002 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:00:45.703452 kubelet[2613]: I0515 00:00:45.703413 2613 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:00:45.703512 kubelet[2613]: I0515 00:00:45.703475 2613 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:00:45.703512 kubelet[2613]: I0515 00:00:45.703499 2613 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:00:45.703583 kubelet[2613]: E0515 00:00:45.703553 2613 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:00:45.736202 kubelet[2613]: I0515 00:00:45.736155 2613 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:00:45.736202 kubelet[2613]: I0515 00:00:45.736181 2613 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:00:45.736202 kubelet[2613]: I0515 00:00:45.736207 2613 state_mem.go:36] "Initialized new in-memory state store" May 15 00:00:45.736384 kubelet[2613]: I0515 00:00:45.736357 2613 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:00:45.736384 kubelet[2613]: I0515 00:00:45.736369 2613 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:00:45.736432 kubelet[2613]: I0515 00:00:45.736388 2613 policy_none.go:49] "None policy: Start" May 15 00:00:45.737226 kubelet[2613]: I0515 00:00:45.737196 2613 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:00:45.737226 kubelet[2613]: I0515 00:00:45.737226 2613 state_mem.go:35] "Initializing new in-memory state store" May 15 00:00:45.737446 kubelet[2613]: I0515 00:00:45.737422 2613 state_mem.go:75] "Updated machine memory state" May 15 00:00:45.742157 kubelet[2613]: I0515 00:00:45.742120 2613 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:00:45.742395 kubelet[2613]: I0515 00:00:45.742343 2613 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:00:45.742395 kubelet[2613]: I0515 00:00:45.742364 2613 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:00:45.742630 kubelet[2613]: I0515 00:00:45.742601 2613 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:00:45.847356 kubelet[2613]: I0515 00:00:45.847240 2613 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:00:45.891994 kubelet[2613]: I0515 00:00:45.891924 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:00:45.891994 kubelet[2613]: I0515 00:00:45.891985 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4fc9344942f10a9f39031456cc2fc2c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4fc9344942f10a9f39031456cc2fc2c\") " pod="kube-system/kube-apiserver-localhost" May 15 00:00:45.891994 kubelet[2613]: I0515 00:00:45.892013 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4fc9344942f10a9f39031456cc2fc2c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4fc9344942f10a9f39031456cc2fc2c\") " pod="kube-system/kube-apiserver-localhost" May 15 00:00:45.892225 kubelet[2613]: I0515 00:00:45.892029 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:00:45.892225 kubelet[2613]: I0515 00:00:45.892063 2613 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:00:45.892225 kubelet[2613]: I0515 00:00:45.892082 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:00:45.892225 kubelet[2613]: I0515 00:00:45.892102 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 00:00:45.892225 kubelet[2613]: I0515 00:00:45.892118 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4fc9344942f10a9f39031456cc2fc2c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e4fc9344942f10a9f39031456cc2fc2c\") " pod="kube-system/kube-apiserver-localhost" May 15 00:00:45.892387 kubelet[2613]: I0515 00:00:45.892134 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:00:45.960229 kubelet[2613]: I0515 
00:00:45.960188 2613 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 15 00:00:45.960375 kubelet[2613]: I0515 00:00:45.960308 2613 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 00:00:46.129746 kubelet[2613]: E0515 00:00:46.129583 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:46.129746 kubelet[2613]: E0515 00:00:46.129595 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:46.145207 kubelet[2613]: E0515 00:00:46.145143 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:46.676250 kubelet[2613]: I0515 00:00:46.676186 2613 apiserver.go:52] "Watching apiserver" May 15 00:00:46.689974 kubelet[2613]: I0515 00:00:46.689889 2613 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:00:46.713719 kubelet[2613]: E0515 00:00:46.713577 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:46.714435 kubelet[2613]: E0515 00:00:46.714409 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:46.800497 kubelet[2613]: E0515 00:00:46.800241 2613 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 00:00:46.800497 kubelet[2613]: I0515 00:00:46.800338 2613 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.800310429 podStartE2EDuration="1.800310429s" podCreationTimestamp="2025-05-15 00:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:00:46.800300701 +0000 UTC m=+1.191126915" watchObservedRunningTime="2025-05-15 00:00:46.800310429 +0000 UTC m=+1.191136643" May 15 00:00:46.800497 kubelet[2613]: E0515 00:00:46.800422 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:46.860841 kubelet[2613]: I0515 00:00:46.857787 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.857743738 podStartE2EDuration="1.857743738s" podCreationTimestamp="2025-05-15 00:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:00:46.845795744 +0000 UTC m=+1.236621958" watchObservedRunningTime="2025-05-15 00:00:46.857743738 +0000 UTC m=+1.248569952" May 15 00:00:46.875246 kubelet[2613]: I0515 00:00:46.875167 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.875147726 podStartE2EDuration="1.875147726s" podCreationTimestamp="2025-05-15 00:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:00:46.857876457 +0000 UTC m=+1.248702691" watchObservedRunningTime="2025-05-15 00:00:46.875147726 +0000 UTC m=+1.265973940" May 15 00:00:47.165691 sudo[1639]: pam_unix(sudo:session): session closed for user root May 15 00:00:47.167119 sshd[1638]: Connection closed 
by 10.0.0.1 port 39330 May 15 00:00:47.167458 sshd-session[1636]: pam_unix(sshd:session): session closed for user core May 15 00:00:47.171215 systemd[1]: sshd@4-10.0.0.42:22-10.0.0.1:39330.service: Deactivated successfully. May 15 00:00:47.173275 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:00:47.173498 systemd[1]: session-5.scope: Consumed 4.113s CPU time, 154.1M memory peak, 0B memory swap peak. May 15 00:00:47.174126 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit. May 15 00:00:47.175474 systemd-logind[1492]: Removed session 5. May 15 00:00:47.715126 kubelet[2613]: E0515 00:00:47.715077 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:49.852693 kubelet[2613]: E0515 00:00:49.851966 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:50.719108 kubelet[2613]: E0515 00:00:50.719075 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:51.662705 kubelet[2613]: E0515 00:00:51.662656 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:51.719795 kubelet[2613]: E0515 00:00:51.719744 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:52.168901 kubelet[2613]: E0515 00:00:52.168866 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 15 00:00:52.720951 kubelet[2613]: E0515 00:00:52.720868 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:53.485224 kubelet[2613]: I0515 00:00:53.485193 2613 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:00:53.485508 containerd[1501]: time="2025-05-15T00:00:53.485477080Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 00:00:53.485929 kubelet[2613]: I0515 00:00:53.485701 2613 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:00:54.996303 systemd[1]: Created slice kubepods-besteffort-pod3ddd49ac_f322_40ac_af44_a237d242a164.slice - libcontainer container kubepods-besteffort-pod3ddd49ac_f322_40ac_af44_a237d242a164.slice. May 15 00:00:55.041296 kubelet[2613]: I0515 00:00:55.041230 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ddd49ac-f322-40ac-af44-a237d242a164-xtables-lock\") pod \"kube-proxy-x5jcp\" (UID: \"3ddd49ac-f322-40ac-af44-a237d242a164\") " pod="kube-system/kube-proxy-x5jcp" May 15 00:00:55.041296 kubelet[2613]: I0515 00:00:55.041291 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsqm\" (UniqueName: \"kubernetes.io/projected/3ddd49ac-f322-40ac-af44-a237d242a164-kube-api-access-5xsqm\") pod \"kube-proxy-x5jcp\" (UID: \"3ddd49ac-f322-40ac-af44-a237d242a164\") " pod="kube-system/kube-proxy-x5jcp" May 15 00:00:55.041296 kubelet[2613]: I0515 00:00:55.041316 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/3ddd49ac-f322-40ac-af44-a237d242a164-kube-proxy\") pod \"kube-proxy-x5jcp\" (UID: \"3ddd49ac-f322-40ac-af44-a237d242a164\") " pod="kube-system/kube-proxy-x5jcp" May 15 00:00:55.041963 kubelet[2613]: I0515 00:00:55.041335 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ddd49ac-f322-40ac-af44-a237d242a164-lib-modules\") pod \"kube-proxy-x5jcp\" (UID: \"3ddd49ac-f322-40ac-af44-a237d242a164\") " pod="kube-system/kube-proxy-x5jcp" May 15 00:00:55.598966 systemd[1]: Created slice kubepods-burstable-pod0a2b08de_67b7_4549_ab65_d8c36798d790.slice - libcontainer container kubepods-burstable-pod0a2b08de_67b7_4549_ab65_d8c36798d790.slice. May 15 00:00:55.646318 kubelet[2613]: I0515 00:00:55.646251 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/0a2b08de-67b7-4549-ab65-d8c36798d790-flannel-cfg\") pod \"kube-flannel-ds-njg5j\" (UID: \"0a2b08de-67b7-4549-ab65-d8c36798d790\") " pod="kube-flannel/kube-flannel-ds-njg5j" May 15 00:00:55.646318 kubelet[2613]: I0515 00:00:55.646301 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a2b08de-67b7-4549-ab65-d8c36798d790-xtables-lock\") pod \"kube-flannel-ds-njg5j\" (UID: \"0a2b08de-67b7-4549-ab65-d8c36798d790\") " pod="kube-flannel/kube-flannel-ds-njg5j" May 15 00:00:55.646542 kubelet[2613]: I0515 00:00:55.646341 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0a2b08de-67b7-4549-ab65-d8c36798d790-run\") pod \"kube-flannel-ds-njg5j\" (UID: \"0a2b08de-67b7-4549-ab65-d8c36798d790\") " pod="kube-flannel/kube-flannel-ds-njg5j" May 15 00:00:55.646542 kubelet[2613]: I0515 00:00:55.646365 2613 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/0a2b08de-67b7-4549-ab65-d8c36798d790-cni-plugin\") pod \"kube-flannel-ds-njg5j\" (UID: \"0a2b08de-67b7-4549-ab65-d8c36798d790\") " pod="kube-flannel/kube-flannel-ds-njg5j" May 15 00:00:55.646542 kubelet[2613]: I0515 00:00:55.646388 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/0a2b08de-67b7-4549-ab65-d8c36798d790-cni\") pod \"kube-flannel-ds-njg5j\" (UID: \"0a2b08de-67b7-4549-ab65-d8c36798d790\") " pod="kube-flannel/kube-flannel-ds-njg5j" May 15 00:00:55.646542 kubelet[2613]: I0515 00:00:55.646412 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jd7v\" (UniqueName: \"kubernetes.io/projected/0a2b08de-67b7-4549-ab65-d8c36798d790-kube-api-access-2jd7v\") pod \"kube-flannel-ds-njg5j\" (UID: \"0a2b08de-67b7-4549-ab65-d8c36798d790\") " pod="kube-flannel/kube-flannel-ds-njg5j" May 15 00:00:56.202575 kubelet[2613]: E0515 00:00:56.202513 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:56.203304 containerd[1501]: time="2025-05-15T00:00:56.203219269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-njg5j,Uid:0a2b08de-67b7-4549-ab65-d8c36798d790,Namespace:kube-flannel,Attempt:0,}" May 15 00:00:56.207704 kubelet[2613]: E0515 00:00:56.207661 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:56.208211 containerd[1501]: time="2025-05-15T00:00:56.208176414Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-x5jcp,Uid:3ddd49ac-f322-40ac-af44-a237d242a164,Namespace:kube-system,Attempt:0,}" May 15 00:00:57.459602 containerd[1501]: time="2025-05-15T00:00:57.459467765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:00:57.459602 containerd[1501]: time="2025-05-15T00:00:57.459538588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:00:57.459602 containerd[1501]: time="2025-05-15T00:00:57.459552674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:00:57.460138 containerd[1501]: time="2025-05-15T00:00:57.459642012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:00:57.491941 systemd[1]: Started cri-containerd-d8c36df79a6d53ccdd087e3dd44b3e0d7b841fbf321d65173d87f65c31383154.scope - libcontainer container d8c36df79a6d53ccdd087e3dd44b3e0d7b841fbf321d65173d87f65c31383154. 
May 15 00:00:57.548379 containerd[1501]: time="2025-05-15T00:00:57.548334269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-njg5j,Uid:0a2b08de-67b7-4549-ab65-d8c36798d790,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"d8c36df79a6d53ccdd087e3dd44b3e0d7b841fbf321d65173d87f65c31383154\"" May 15 00:00:57.549129 kubelet[2613]: E0515 00:00:57.549100 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:57.550051 containerd[1501]: time="2025-05-15T00:00:57.550017589Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 15 00:00:57.615197 containerd[1501]: time="2025-05-15T00:00:57.615091063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:00:57.615197 containerd[1501]: time="2025-05-15T00:00:57.615166143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:00:57.615197 containerd[1501]: time="2025-05-15T00:00:57.615177745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:00:57.615391 containerd[1501]: time="2025-05-15T00:00:57.615261482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:00:57.641939 systemd[1]: Started cri-containerd-5f7ceec533d5b2f4b1666256e202ed6dcfc88a7ab38c72aeaf5dd88d6ef23d48.scope - libcontainer container 5f7ceec533d5b2f4b1666256e202ed6dcfc88a7ab38c72aeaf5dd88d6ef23d48. 
May 15 00:00:57.665269 containerd[1501]: time="2025-05-15T00:00:57.665226498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x5jcp,Uid:3ddd49ac-f322-40ac-af44-a237d242a164,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f7ceec533d5b2f4b1666256e202ed6dcfc88a7ab38c72aeaf5dd88d6ef23d48\"" May 15 00:00:57.666073 kubelet[2613]: E0515 00:00:57.666037 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:57.667910 containerd[1501]: time="2025-05-15T00:00:57.667874399Z" level=info msg="CreateContainer within sandbox \"5f7ceec533d5b2f4b1666256e202ed6dcfc88a7ab38c72aeaf5dd88d6ef23d48\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:00:58.353514 containerd[1501]: time="2025-05-15T00:00:58.353401474Z" level=info msg="CreateContainer within sandbox \"5f7ceec533d5b2f4b1666256e202ed6dcfc88a7ab38c72aeaf5dd88d6ef23d48\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e77857fb39fd167751a52d2c6d2d57135d3ecc247098a679baa1b590f72a6afe\"" May 15 00:00:58.354229 containerd[1501]: time="2025-05-15T00:00:58.354157163Z" level=info msg="StartContainer for \"e77857fb39fd167751a52d2c6d2d57135d3ecc247098a679baa1b590f72a6afe\"" May 15 00:00:58.387954 systemd[1]: Started cri-containerd-e77857fb39fd167751a52d2c6d2d57135d3ecc247098a679baa1b590f72a6afe.scope - libcontainer container e77857fb39fd167751a52d2c6d2d57135d3ecc247098a679baa1b590f72a6afe. 
May 15 00:00:58.633253 containerd[1501]: time="2025-05-15T00:00:58.633135441Z" level=info msg="StartContainer for \"e77857fb39fd167751a52d2c6d2d57135d3ecc247098a679baa1b590f72a6afe\" returns successfully" May 15 00:00:58.730830 kubelet[2613]: E0515 00:00:58.730797 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:00:59.061730 kubelet[2613]: I0515 00:00:59.061650 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x5jcp" podStartSLOduration=5.061630758 podStartE2EDuration="5.061630758s" podCreationTimestamp="2025-05-15 00:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:00:59.061392461 +0000 UTC m=+13.452218675" watchObservedRunningTime="2025-05-15 00:00:59.061630758 +0000 UTC m=+13.452456972" May 15 00:00:59.732224 kubelet[2613]: E0515 00:00:59.732182 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:02.740924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2623666496.mount: Deactivated successfully. 
May 15 00:01:03.699612 containerd[1501]: time="2025-05-15T00:01:03.699540068Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:01:03.730126 containerd[1501]: time="2025-05-15T00:01:03.730066495Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" May 15 00:01:03.758841 containerd[1501]: time="2025-05-15T00:01:03.758794008Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:01:03.796891 containerd[1501]: time="2025-05-15T00:01:03.796839679Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:01:03.797908 containerd[1501]: time="2025-05-15T00:01:03.797871240Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 6.247811371s" May 15 00:01:03.797957 containerd[1501]: time="2025-05-15T00:01:03.797909153Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" May 15 00:01:03.800339 containerd[1501]: time="2025-05-15T00:01:03.799828991Z" level=info msg="CreateContainer within sandbox \"d8c36df79a6d53ccdd087e3dd44b3e0d7b841fbf321d65173d87f65c31383154\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 15 00:01:04.239451 containerd[1501]: 
time="2025-05-15T00:01:04.239386563Z" level=info msg="CreateContainer within sandbox \"d8c36df79a6d53ccdd087e3dd44b3e0d7b841fbf321d65173d87f65c31383154\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"b0a510dc506b031cbfc60d45f9424b6ee49a83233d747384a4c8e2e379aa901e\"" May 15 00:01:04.239790 containerd[1501]: time="2025-05-15T00:01:04.239725123Z" level=info msg="StartContainer for \"b0a510dc506b031cbfc60d45f9424b6ee49a83233d747384a4c8e2e379aa901e\"" May 15 00:01:04.335942 systemd[1]: Started cri-containerd-b0a510dc506b031cbfc60d45f9424b6ee49a83233d747384a4c8e2e379aa901e.scope - libcontainer container b0a510dc506b031cbfc60d45f9424b6ee49a83233d747384a4c8e2e379aa901e. May 15 00:01:04.410234 systemd[1]: cri-containerd-b0a510dc506b031cbfc60d45f9424b6ee49a83233d747384a4c8e2e379aa901e.scope: Deactivated successfully. May 15 00:01:04.445916 containerd[1501]: time="2025-05-15T00:01:04.445843869Z" level=info msg="StartContainer for \"b0a510dc506b031cbfc60d45f9424b6ee49a83233d747384a4c8e2e379aa901e\" returns successfully" May 15 00:01:04.472293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0a510dc506b031cbfc60d45f9424b6ee49a83233d747384a4c8e2e379aa901e-rootfs.mount: Deactivated successfully. 
May 15 00:01:04.658912 containerd[1501]: time="2025-05-15T00:01:04.658819385Z" level=info msg="shim disconnected" id=b0a510dc506b031cbfc60d45f9424b6ee49a83233d747384a4c8e2e379aa901e namespace=k8s.io May 15 00:01:04.658912 containerd[1501]: time="2025-05-15T00:01:04.658898598Z" level=warning msg="cleaning up after shim disconnected" id=b0a510dc506b031cbfc60d45f9424b6ee49a83233d747384a4c8e2e379aa901e namespace=k8s.io May 15 00:01:04.658912 containerd[1501]: time="2025-05-15T00:01:04.658910661Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:01:04.747379 kubelet[2613]: E0515 00:01:04.747336 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:04.748610 containerd[1501]: time="2025-05-15T00:01:04.748207840Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 15 00:01:06.584308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1242043903.mount: Deactivated successfully. 
May 15 00:01:07.190881 containerd[1501]: time="2025-05-15T00:01:07.190800847Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:01:07.191829 containerd[1501]: time="2025-05-15T00:01:07.191788479Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" May 15 00:01:07.193877 containerd[1501]: time="2025-05-15T00:01:07.193797890Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:01:07.198784 containerd[1501]: time="2025-05-15T00:01:07.198690916Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:01:07.199816 containerd[1501]: time="2025-05-15T00:01:07.199739756Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.451491518s" May 15 00:01:07.199889 containerd[1501]: time="2025-05-15T00:01:07.199818557Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" May 15 00:01:07.202652 containerd[1501]: time="2025-05-15T00:01:07.202609885Z" level=info msg="CreateContainer within sandbox \"d8c36df79a6d53ccdd087e3dd44b3e0d7b841fbf321d65173d87f65c31383154\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 00:01:07.226194 containerd[1501]: time="2025-05-15T00:01:07.226128351Z" level=info msg="CreateContainer within 
sandbox \"d8c36df79a6d53ccdd087e3dd44b3e0d7b841fbf321d65173d87f65c31383154\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6361cd4b2aaf3d42889e1ef5f8272ed94937edccacfcece8b154730970633d40\"" May 15 00:01:07.226728 containerd[1501]: time="2025-05-15T00:01:07.226696901Z" level=info msg="StartContainer for \"6361cd4b2aaf3d42889e1ef5f8272ed94937edccacfcece8b154730970633d40\"" May 15 00:01:07.280005 systemd[1]: Started cri-containerd-6361cd4b2aaf3d42889e1ef5f8272ed94937edccacfcece8b154730970633d40.scope - libcontainer container 6361cd4b2aaf3d42889e1ef5f8272ed94937edccacfcece8b154730970633d40. May 15 00:01:07.312367 systemd[1]: cri-containerd-6361cd4b2aaf3d42889e1ef5f8272ed94937edccacfcece8b154730970633d40.scope: Deactivated successfully. May 15 00:01:07.316362 containerd[1501]: time="2025-05-15T00:01:07.316290440Z" level=info msg="StartContainer for \"6361cd4b2aaf3d42889e1ef5f8272ed94937edccacfcece8b154730970633d40\" returns successfully" May 15 00:01:07.321629 kubelet[2613]: I0515 00:01:07.321583 2613 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 00:01:07.358587 systemd[1]: Created slice kubepods-burstable-pod22f4ab3f_4f50_4b0b_b662_171861888d83.slice - libcontainer container kubepods-burstable-pod22f4ab3f_4f50_4b0b_b662_171861888d83.slice. May 15 00:01:07.369279 systemd[1]: Created slice kubepods-burstable-podea9d392a_d733_409a_9d35_c3be02a27d1a.slice - libcontainer container kubepods-burstable-podea9d392a_d733_409a_9d35_c3be02a27d1a.slice. May 15 00:01:07.492001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6361cd4b2aaf3d42889e1ef5f8272ed94937edccacfcece8b154730970633d40-rootfs.mount: Deactivated successfully. 
May 15 00:01:07.518935 kubelet[2613]: I0515 00:01:07.518866 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22f4ab3f-4f50-4b0b-b662-171861888d83-config-volume\") pod \"coredns-6f6b679f8f-wzppc\" (UID: \"22f4ab3f-4f50-4b0b-b662-171861888d83\") " pod="kube-system/coredns-6f6b679f8f-wzppc" May 15 00:01:07.518935 kubelet[2613]: I0515 00:01:07.518927 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9d392a-d733-409a-9d35-c3be02a27d1a-config-volume\") pod \"coredns-6f6b679f8f-x7kd4\" (UID: \"ea9d392a-d733-409a-9d35-c3be02a27d1a\") " pod="kube-system/coredns-6f6b679f8f-x7kd4" May 15 00:01:07.519164 kubelet[2613]: I0515 00:01:07.518955 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc7wn\" (UniqueName: \"kubernetes.io/projected/ea9d392a-d733-409a-9d35-c3be02a27d1a-kube-api-access-qc7wn\") pod \"coredns-6f6b679f8f-x7kd4\" (UID: \"ea9d392a-d733-409a-9d35-c3be02a27d1a\") " pod="kube-system/coredns-6f6b679f8f-x7kd4" May 15 00:01:07.519164 kubelet[2613]: I0515 00:01:07.518981 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcrgr\" (UniqueName: \"kubernetes.io/projected/22f4ab3f-4f50-4b0b-b662-171861888d83-kube-api-access-xcrgr\") pod \"coredns-6f6b679f8f-wzppc\" (UID: \"22f4ab3f-4f50-4b0b-b662-171861888d83\") " pod="kube-system/coredns-6f6b679f8f-wzppc" May 15 00:01:07.737844 containerd[1501]: time="2025-05-15T00:01:07.737762282Z" level=info msg="shim disconnected" id=6361cd4b2aaf3d42889e1ef5f8272ed94937edccacfcece8b154730970633d40 namespace=k8s.io May 15 00:01:07.737844 containerd[1501]: time="2025-05-15T00:01:07.737837496Z" level=warning msg="cleaning up after shim disconnected" 
id=6361cd4b2aaf3d42889e1ef5f8272ed94937edccacfcece8b154730970633d40 namespace=k8s.io May 15 00:01:07.737844 containerd[1501]: time="2025-05-15T00:01:07.737846704Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:01:07.756414 kubelet[2613]: E0515 00:01:07.755448 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:07.965302 kubelet[2613]: E0515 00:01:07.965248 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:07.965935 containerd[1501]: time="2025-05-15T00:01:07.965891252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzppc,Uid:22f4ab3f-4f50-4b0b-b662-171861888d83,Namespace:kube-system,Attempt:0,}" May 15 00:01:07.972795 kubelet[2613]: E0515 00:01:07.972721 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:07.973382 containerd[1501]: time="2025-05-15T00:01:07.973330107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x7kd4,Uid:ea9d392a-d733-409a-9d35-c3be02a27d1a,Namespace:kube-system,Attempt:0,}" May 15 00:01:08.019431 containerd[1501]: time="2025-05-15T00:01:08.019250690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x7kd4,Uid:ea9d392a-d733-409a-9d35-c3be02a27d1a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bea3325fd169eba624358b22eeec16a5a7c9a4bd1a2c5b574d59a1bd4dd0999b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 00:01:08.019572 kubelet[2613]: E0515 00:01:08.019518 2613 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea3325fd169eba624358b22eeec16a5a7c9a4bd1a2c5b574d59a1bd4dd0999b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 00:01:08.019637 kubelet[2613]: E0515 00:01:08.019604 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea3325fd169eba624358b22eeec16a5a7c9a4bd1a2c5b574d59a1bd4dd0999b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-x7kd4" May 15 00:01:08.019637 kubelet[2613]: E0515 00:01:08.019624 2613 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea3325fd169eba624358b22eeec16a5a7c9a4bd1a2c5b574d59a1bd4dd0999b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-x7kd4" May 15 00:01:08.019713 kubelet[2613]: E0515 00:01:08.019671 2613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-x7kd4_kube-system(ea9d392a-d733-409a-9d35-c3be02a27d1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-x7kd4_kube-system(ea9d392a-d733-409a-9d35-c3be02a27d1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bea3325fd169eba624358b22eeec16a5a7c9a4bd1a2c5b574d59a1bd4dd0999b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-x7kd4" podUID="ea9d392a-d733-409a-9d35-c3be02a27d1a" May 15 00:01:08.020354 containerd[1501]: time="2025-05-15T00:01:08.020309828Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzppc,Uid:22f4ab3f-4f50-4b0b-b662-171861888d83,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba8829231a3b3bf4531c4744ec496e96ab127a2955bdccd0c0b2d30e445c3318\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 00:01:08.020506 kubelet[2613]: E0515 00:01:08.020457 2613 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba8829231a3b3bf4531c4744ec496e96ab127a2955bdccd0c0b2d30e445c3318\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 00:01:08.020506 kubelet[2613]: E0515 00:01:08.020498 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba8829231a3b3bf4531c4744ec496e96ab127a2955bdccd0c0b2d30e445c3318\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-wzppc" May 15 00:01:08.020622 kubelet[2613]: E0515 00:01:08.020517 2613 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba8829231a3b3bf4531c4744ec496e96ab127a2955bdccd0c0b2d30e445c3318\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-wzppc" May 15 00:01:08.020622 kubelet[2613]: E0515 00:01:08.020562 2613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wzppc_kube-system(22f4ab3f-4f50-4b0b-b662-171861888d83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-wzppc_kube-system(22f4ab3f-4f50-4b0b-b662-171861888d83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba8829231a3b3bf4531c4744ec496e96ab127a2955bdccd0c0b2d30e445c3318\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-wzppc" podUID="22f4ab3f-4f50-4b0b-b662-171861888d83" May 15 00:01:08.491725 systemd[1]: run-netns-cni\x2d2c6bcf94\x2d2d25\x2d6a2a\x2df931\x2db3131ba4a15e.mount: Deactivated successfully. May 15 00:01:08.491848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba8829231a3b3bf4531c4744ec496e96ab127a2955bdccd0c0b2d30e445c3318-shm.mount: Deactivated successfully. May 15 00:01:08.759678 kubelet[2613]: E0515 00:01:08.759532 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:08.761116 containerd[1501]: time="2025-05-15T00:01:08.761086653Z" level=info msg="CreateContainer within sandbox \"d8c36df79a6d53ccdd087e3dd44b3e0d7b841fbf321d65173d87f65c31383154\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 15 00:01:09.029016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1487039995.mount: Deactivated successfully. 
May 15 00:01:09.483069 containerd[1501]: time="2025-05-15T00:01:09.482997462Z" level=info msg="CreateContainer within sandbox \"d8c36df79a6d53ccdd087e3dd44b3e0d7b841fbf321d65173d87f65c31383154\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e0ad3029454a9c5e68347762b8d373e7aa646df47c852ec994a955b42e6f5e52\"" May 15 00:01:09.483632 containerd[1501]: time="2025-05-15T00:01:09.483521095Z" level=info msg="StartContainer for \"e0ad3029454a9c5e68347762b8d373e7aa646df47c852ec994a955b42e6f5e52\"" May 15 00:01:09.513944 systemd[1]: Started cri-containerd-e0ad3029454a9c5e68347762b8d373e7aa646df47c852ec994a955b42e6f5e52.scope - libcontainer container e0ad3029454a9c5e68347762b8d373e7aa646df47c852ec994a955b42e6f5e52. May 15 00:01:09.824792 containerd[1501]: time="2025-05-15T00:01:09.824716227Z" level=info msg="StartContainer for \"e0ad3029454a9c5e68347762b8d373e7aa646df47c852ec994a955b42e6f5e52\" returns successfully" May 15 00:01:09.828351 kubelet[2613]: E0515 00:01:09.828328 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:10.219595 kubelet[2613]: I0515 00:01:10.219424 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-njg5j" podStartSLOduration=6.5679838539999995 podStartE2EDuration="16.219403206s" podCreationTimestamp="2025-05-15 00:00:54 +0000 UTC" firstStartedPulling="2025-05-15 00:00:57.549603372 +0000 UTC m=+11.940429586" lastFinishedPulling="2025-05-15 00:01:07.201022724 +0000 UTC m=+21.591848938" observedRunningTime="2025-05-15 00:01:10.219023009 +0000 UTC m=+24.609849223" watchObservedRunningTime="2025-05-15 00:01:10.219403206 +0000 UTC m=+24.610229420" May 15 00:01:10.830330 kubelet[2613]: E0515 00:01:10.830282 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:10.916101 systemd-networkd[1439]: flannel.1: Link UP May 15 00:01:10.916166 systemd-networkd[1439]: flannel.1: Gained carrier May 15 00:01:12.590057 systemd-networkd[1439]: flannel.1: Gained IPv6LL May 15 00:01:22.704487 kubelet[2613]: E0515 00:01:22.704413 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:22.705140 kubelet[2613]: E0515 00:01:22.704566 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:22.705183 containerd[1501]: time="2025-05-15T00:01:22.705031175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzppc,Uid:22f4ab3f-4f50-4b0b-b662-171861888d83,Namespace:kube-system,Attempt:0,}" May 15 00:01:22.705520 containerd[1501]: time="2025-05-15T00:01:22.705483325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x7kd4,Uid:ea9d392a-d733-409a-9d35-c3be02a27d1a,Namespace:kube-system,Attempt:0,}" May 15 00:01:24.525200 systemd-networkd[1439]: cni0: Link UP May 15 00:01:24.525214 systemd-networkd[1439]: cni0: Gained carrier May 15 00:01:24.530143 systemd-networkd[1439]: cni0: Lost carrier May 15 00:01:24.536036 systemd-networkd[1439]: veth844bde02: Link UP May 15 00:01:24.538685 kernel: cni0: port 1(veth844bde02) entered blocking state May 15 00:01:24.538808 kernel: cni0: port 1(veth844bde02) entered disabled state May 15 00:01:24.540956 kernel: veth844bde02: entered allmulticast mode May 15 00:01:24.541015 kernel: veth844bde02: entered promiscuous mode May 15 00:01:24.542569 kernel: cni0: port 1(veth844bde02) entered blocking state May 15 00:01:24.542628 kernel: cni0: port 1(veth844bde02) entered forwarding state May 15 00:01:24.546791 kernel: cni0: port 1(veth844bde02) 
entered disabled state May 15 00:01:24.547467 systemd-networkd[1439]: vethb03c785b: Link UP May 15 00:01:24.559868 kernel: cni0: port 1(veth844bde02) entered blocking state May 15 00:01:24.559959 kernel: cni0: port 1(veth844bde02) entered forwarding state May 15 00:01:24.562882 kernel: cni0: port 2(vethb03c785b) entered blocking state May 15 00:01:24.562945 kernel: cni0: port 2(vethb03c785b) entered disabled state May 15 00:01:24.563885 kernel: vethb03c785b: entered allmulticast mode May 15 00:01:24.565286 kernel: vethb03c785b: entered promiscuous mode May 15 00:01:24.566060 systemd-networkd[1439]: veth844bde02: Gained carrier May 15 00:01:24.566543 systemd-networkd[1439]: cni0: Gained carrier May 15 00:01:24.570027 containerd[1501]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e938), "name":"cbr0", "type":"bridge"} May 15 00:01:24.570027 containerd[1501]: delegateAdd: netconf sent to delegate plugin: May 15 00:01:24.575565 kernel: cni0: port 2(vethb03c785b) entered blocking state May 15 00:01:24.575672 kernel: cni0: port 2(vethb03c785b) entered forwarding state May 15 00:01:24.575218 systemd-networkd[1439]: vethb03c785b: Gained carrier May 15 00:01:24.577999 containerd[1501]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} May 15 00:01:24.577999 containerd[1501]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, 
"ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000ae8e8), "name":"cbr0", "type":"bridge"} May 15 00:01:24.577999 containerd[1501]: delegateAdd: netconf sent to delegate plugin: May 15 00:01:24.596829 containerd[1501]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-15T00:01:24.596464161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:01:24.596829 containerd[1501]: time="2025-05-15T00:01:24.596594079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:01:24.596829 containerd[1501]: time="2025-05-15T00:01:24.596628164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:01:24.599167 containerd[1501]: time="2025-05-15T00:01:24.599082521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:01:24.608193 containerd[1501]: time="2025-05-15T00:01:24.600827559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:01:24.608193 containerd[1501]: time="2025-05-15T00:01:24.600891501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:01:24.608193 containerd[1501]: time="2025-05-15T00:01:24.600920166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:01:24.608193 containerd[1501]: time="2025-05-15T00:01:24.601010598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:01:24.621937 systemd[1]: Started cri-containerd-13f8cd7d0e9a854c382884ea48aa3a3246fc30430df185733ef61c8a92f606eb.scope - libcontainer container 13f8cd7d0e9a854c382884ea48aa3a3246fc30430df185733ef61c8a92f606eb. May 15 00:01:24.626807 systemd[1]: Started cri-containerd-0663fe60565ebc1a7aa9fec93ad117b42fb6ba3cadfad411f9836b0523368481.scope - libcontainer container 0663fe60565ebc1a7aa9fec93ad117b42fb6ba3cadfad411f9836b0523368481. May 15 00:01:24.640508 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:01:24.643334 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:01:24.671933 containerd[1501]: time="2025-05-15T00:01:24.671871109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x7kd4,Uid:ea9d392a-d733-409a-9d35-c3be02a27d1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"13f8cd7d0e9a854c382884ea48aa3a3246fc30430df185733ef61c8a92f606eb\"" May 15 00:01:24.672864 kubelet[2613]: E0515 00:01:24.672839 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:24.674007 containerd[1501]: time="2025-05-15T00:01:24.673781733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzppc,Uid:22f4ab3f-4f50-4b0b-b662-171861888d83,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"0663fe60565ebc1a7aa9fec93ad117b42fb6ba3cadfad411f9836b0523368481\"" May 15 00:01:24.675192 containerd[1501]: time="2025-05-15T00:01:24.675168921Z" level=info msg="CreateContainer within sandbox \"13f8cd7d0e9a854c382884ea48aa3a3246fc30430df185733ef61c8a92f606eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:01:24.675240 kubelet[2613]: E0515 00:01:24.675209 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:24.677942 containerd[1501]: time="2025-05-15T00:01:24.677702349Z" level=info msg="CreateContainer within sandbox \"0663fe60565ebc1a7aa9fec93ad117b42fb6ba3cadfad411f9836b0523368481\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:01:25.074554 containerd[1501]: time="2025-05-15T00:01:25.074498666Z" level=info msg="CreateContainer within sandbox \"13f8cd7d0e9a854c382884ea48aa3a3246fc30430df185733ef61c8a92f606eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c47df16a4e6ea9adf8e6e6cbbeeeb01b3cedd31b12637cef967964e139475aa\"" May 15 00:01:25.075380 containerd[1501]: time="2025-05-15T00:01:25.075309177Z" level=info msg="StartContainer for \"5c47df16a4e6ea9adf8e6e6cbbeeeb01b3cedd31b12637cef967964e139475aa\"" May 15 00:01:25.076801 containerd[1501]: time="2025-05-15T00:01:25.076708036Z" level=info msg="CreateContainer within sandbox \"0663fe60565ebc1a7aa9fec93ad117b42fb6ba3cadfad411f9836b0523368481\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6baf32ba07d29a0610bcafe2b94a46df45962dfb0b70b72dfba6be82ec99f27a\"" May 15 00:01:25.077844 containerd[1501]: time="2025-05-15T00:01:25.077210933Z" level=info msg="StartContainer for \"6baf32ba07d29a0610bcafe2b94a46df45962dfb0b70b72dfba6be82ec99f27a\"" May 15 00:01:25.106911 systemd[1]: Started cri-containerd-5c47df16a4e6ea9adf8e6e6cbbeeeb01b3cedd31b12637cef967964e139475aa.scope 
- libcontainer container 5c47df16a4e6ea9adf8e6e6cbbeeeb01b3cedd31b12637cef967964e139475aa. May 15 00:01:25.109311 systemd[1]: Started cri-containerd-6baf32ba07d29a0610bcafe2b94a46df45962dfb0b70b72dfba6be82ec99f27a.scope - libcontainer container 6baf32ba07d29a0610bcafe2b94a46df45962dfb0b70b72dfba6be82ec99f27a. May 15 00:01:25.501093 systemd[1]: Started sshd@5-10.0.0.42:22-10.0.0.1:54442.service - OpenSSH per-connection server daemon (10.0.0.1:54442). May 15 00:01:25.582196 containerd[1501]: time="2025-05-15T00:01:25.582049740Z" level=info msg="StartContainer for \"6baf32ba07d29a0610bcafe2b94a46df45962dfb0b70b72dfba6be82ec99f27a\" returns successfully" May 15 00:01:25.582196 containerd[1501]: time="2025-05-15T00:01:25.582051864Z" level=info msg="StartContainer for \"5c47df16a4e6ea9adf8e6e6cbbeeeb01b3cedd31b12637cef967964e139475aa\" returns successfully" May 15 00:01:25.725239 sshd[3524]: Accepted publickey for core from 10.0.0.1 port 54442 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:25.727565 sshd-session[3524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:25.732007 systemd-logind[1492]: New session 6 of user core. May 15 00:01:25.745005 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 15 00:01:25.862053 kubelet[2613]: E0515 00:01:25.860732 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:25.866308 kubelet[2613]: E0515 00:01:25.866251 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:25.901969 systemd-networkd[1439]: cni0: Gained IPv6LL May 15 00:01:25.949991 kubelet[2613]: I0515 00:01:25.949903 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wzppc" podStartSLOduration=30.949883322 podStartE2EDuration="30.949883322s" podCreationTimestamp="2025-05-15 00:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:01:25.949815092 +0000 UTC m=+40.340641326" watchObservedRunningTime="2025-05-15 00:01:25.949883322 +0000 UTC m=+40.340709536" May 15 00:01:25.960516 sshd[3526]: Connection closed by 10.0.0.1 port 54442 May 15 00:01:25.960991 sshd-session[3524]: pam_unix(sshd:session): session closed for user core May 15 00:01:25.967973 systemd[1]: sshd@5-10.0.0.42:22-10.0.0.1:54442.service: Deactivated successfully. May 15 00:01:25.968898 systemd-networkd[1439]: vethb03c785b: Gained IPv6LL May 15 00:01:25.971313 systemd[1]: session-6.scope: Deactivated successfully. May 15 00:01:25.972270 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit. May 15 00:01:25.973585 systemd-logind[1492]: Removed session 6. 
May 15 00:01:26.221968 systemd-networkd[1439]: veth844bde02: Gained IPv6LL May 15 00:01:26.867832 kubelet[2613]: E0515 00:01:26.867788 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:26.867832 kubelet[2613]: E0515 00:01:26.867762 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:27.869322 kubelet[2613]: E0515 00:01:27.869276 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:27.869811 kubelet[2613]: E0515 00:01:27.869348 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:01:30.973999 systemd[1]: Started sshd@6-10.0.0.42:22-10.0.0.1:50548.service - OpenSSH per-connection server daemon (10.0.0.1:50548). May 15 00:01:31.020613 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 50548 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:31.022585 sshd-session[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:31.026745 systemd-logind[1492]: New session 7 of user core. May 15 00:01:31.042092 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 00:01:31.155058 sshd[3579]: Connection closed by 10.0.0.1 port 50548 May 15 00:01:31.155462 sshd-session[3571]: pam_unix(sshd:session): session closed for user core May 15 00:01:31.159502 systemd[1]: sshd@6-10.0.0.42:22-10.0.0.1:50548.service: Deactivated successfully. May 15 00:01:31.161435 systemd[1]: session-7.scope: Deactivated successfully. 
May 15 00:01:31.162136 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit. May 15 00:01:31.163119 systemd-logind[1492]: Removed session 7. May 15 00:01:36.173418 systemd[1]: Started sshd@7-10.0.0.42:22-10.0.0.1:50562.service - OpenSSH per-connection server daemon (10.0.0.1:50562). May 15 00:01:36.223361 sshd[3628]: Accepted publickey for core from 10.0.0.1 port 50562 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:36.225616 sshd-session[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:36.231255 systemd-logind[1492]: New session 8 of user core. May 15 00:01:36.241030 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 00:01:36.367987 sshd[3630]: Connection closed by 10.0.0.1 port 50562 May 15 00:01:36.368474 sshd-session[3628]: pam_unix(sshd:session): session closed for user core May 15 00:01:36.373186 systemd[1]: sshd@7-10.0.0.42:22-10.0.0.1:50562.service: Deactivated successfully. May 15 00:01:36.375275 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:01:36.376005 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit. May 15 00:01:36.377103 systemd-logind[1492]: Removed session 8. May 15 00:01:41.380946 systemd[1]: Started sshd@8-10.0.0.42:22-10.0.0.1:47358.service - OpenSSH per-connection server daemon (10.0.0.1:47358). May 15 00:01:41.433997 sshd[3665]: Accepted publickey for core from 10.0.0.1 port 47358 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:41.436514 sshd-session[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:41.445197 systemd-logind[1492]: New session 9 of user core. May 15 00:01:41.455010 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 15 00:01:41.580759 sshd[3667]: Connection closed by 10.0.0.1 port 47358 May 15 00:01:41.581214 sshd-session[3665]: pam_unix(sshd:session): session closed for user core May 15 00:01:41.589369 systemd[1]: sshd@8-10.0.0.42:22-10.0.0.1:47358.service: Deactivated successfully. May 15 00:01:41.591387 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:01:41.593173 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit. May 15 00:01:41.605182 systemd[1]: Started sshd@9-10.0.0.42:22-10.0.0.1:47366.service - OpenSSH per-connection server daemon (10.0.0.1:47366). May 15 00:01:41.606425 systemd-logind[1492]: Removed session 9. May 15 00:01:41.650657 sshd[3680]: Accepted publickey for core from 10.0.0.1 port 47366 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:41.652756 sshd-session[3680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:41.657489 systemd-logind[1492]: New session 10 of user core. May 15 00:01:41.668074 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 00:01:41.830961 sshd[3682]: Connection closed by 10.0.0.1 port 47366 May 15 00:01:41.831494 sshd-session[3680]: pam_unix(sshd:session): session closed for user core May 15 00:01:41.846344 systemd[1]: sshd@9-10.0.0.42:22-10.0.0.1:47366.service: Deactivated successfully. May 15 00:01:41.849368 systemd[1]: session-10.scope: Deactivated successfully. May 15 00:01:41.851647 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit. May 15 00:01:41.859323 systemd[1]: Started sshd@10-10.0.0.42:22-10.0.0.1:47368.service - OpenSSH per-connection server daemon (10.0.0.1:47368). May 15 00:01:41.860650 systemd-logind[1492]: Removed session 10. 
May 15 00:01:41.902619 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 47368 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:41.904722 sshd-session[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:41.909675 systemd-logind[1492]: New session 11 of user core. May 15 00:01:41.921968 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 00:01:42.042535 sshd[3696]: Connection closed by 10.0.0.1 port 47368 May 15 00:01:42.043200 sshd-session[3694]: pam_unix(sshd:session): session closed for user core May 15 00:01:42.047953 systemd[1]: sshd@10-10.0.0.42:22-10.0.0.1:47368.service: Deactivated successfully. May 15 00:01:42.049966 systemd[1]: session-11.scope: Deactivated successfully. May 15 00:01:42.050752 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit. May 15 00:01:42.051982 systemd-logind[1492]: Removed session 11. May 15 00:01:47.056454 systemd[1]: Started sshd@11-10.0.0.42:22-10.0.0.1:51678.service - OpenSSH per-connection server daemon (10.0.0.1:51678). May 15 00:01:47.102290 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 51678 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:47.104532 sshd-session[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:47.110011 systemd-logind[1492]: New session 12 of user core. May 15 00:01:47.127117 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 00:01:47.249111 sshd[3734]: Connection closed by 10.0.0.1 port 51678 May 15 00:01:47.249542 sshd-session[3732]: pam_unix(sshd:session): session closed for user core May 15 00:01:47.254888 systemd[1]: sshd@11-10.0.0.42:22-10.0.0.1:51678.service: Deactivated successfully. May 15 00:01:47.257658 systemd[1]: session-12.scope: Deactivated successfully. May 15 00:01:47.258471 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit. 
May 15 00:01:47.260003 systemd-logind[1492]: Removed session 12. May 15 00:01:52.261151 systemd[1]: Started sshd@12-10.0.0.42:22-10.0.0.1:51694.service - OpenSSH per-connection server daemon (10.0.0.1:51694). May 15 00:01:52.307569 sshd[3767]: Accepted publickey for core from 10.0.0.1 port 51694 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:52.309444 sshd-session[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:52.314139 systemd-logind[1492]: New session 13 of user core. May 15 00:01:52.324991 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 00:01:52.437970 sshd[3769]: Connection closed by 10.0.0.1 port 51694 May 15 00:01:52.438442 sshd-session[3767]: pam_unix(sshd:session): session closed for user core May 15 00:01:52.442874 systemd[1]: sshd@12-10.0.0.42:22-10.0.0.1:51694.service: Deactivated successfully. May 15 00:01:52.444861 systemd[1]: session-13.scope: Deactivated successfully. May 15 00:01:52.445487 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit. May 15 00:01:52.446411 systemd-logind[1492]: Removed session 13. May 15 00:01:57.450944 systemd[1]: Started sshd@13-10.0.0.42:22-10.0.0.1:57860.service - OpenSSH per-connection server daemon (10.0.0.1:57860). May 15 00:01:57.498088 sshd[3804]: Accepted publickey for core from 10.0.0.1 port 57860 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:57.499809 sshd-session[3804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:57.503864 systemd-logind[1492]: New session 14 of user core. May 15 00:01:57.511908 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 15 00:01:57.626571 sshd[3806]: Connection closed by 10.0.0.1 port 57860 May 15 00:01:57.627090 sshd-session[3804]: pam_unix(sshd:session): session closed for user core May 15 00:01:57.639052 systemd[1]: sshd@13-10.0.0.42:22-10.0.0.1:57860.service: Deactivated successfully. May 15 00:01:57.641066 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:01:57.643245 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit. May 15 00:01:57.649066 systemd[1]: Started sshd@14-10.0.0.42:22-10.0.0.1:57874.service - OpenSSH per-connection server daemon (10.0.0.1:57874). May 15 00:01:57.650079 systemd-logind[1492]: Removed session 14. May 15 00:01:57.692406 sshd[3819]: Accepted publickey for core from 10.0.0.1 port 57874 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:57.694396 sshd-session[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:57.699142 systemd-logind[1492]: New session 15 of user core. May 15 00:01:57.708041 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 00:01:58.028973 sshd[3821]: Connection closed by 10.0.0.1 port 57874 May 15 00:01:58.029278 sshd-session[3819]: pam_unix(sshd:session): session closed for user core May 15 00:01:58.041938 systemd[1]: sshd@14-10.0.0.42:22-10.0.0.1:57874.service: Deactivated successfully. May 15 00:01:58.044413 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:01:58.046484 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit. May 15 00:01:58.060400 systemd[1]: Started sshd@15-10.0.0.42:22-10.0.0.1:57888.service - OpenSSH per-connection server daemon (10.0.0.1:57888). May 15 00:01:58.061881 systemd-logind[1492]: Removed session 15. 
May 15 00:01:58.110830 sshd[3831]: Accepted publickey for core from 10.0.0.1 port 57888 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:01:58.112890 sshd-session[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:01:58.118388 systemd-logind[1492]: New session 16 of user core. May 15 00:01:58.133015 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 00:02:00.694694 sshd[3833]: Connection closed by 10.0.0.1 port 57888 May 15 00:02:00.696105 sshd-session[3831]: pam_unix(sshd:session): session closed for user core May 15 00:02:00.704741 systemd[1]: sshd@15-10.0.0.42:22-10.0.0.1:57888.service: Deactivated successfully. May 15 00:02:00.707240 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:02:00.708819 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit. May 15 00:02:00.715221 systemd[1]: Started sshd@16-10.0.0.42:22-10.0.0.1:57896.service - OpenSSH per-connection server daemon (10.0.0.1:57896). May 15 00:02:00.716171 systemd-logind[1492]: Removed session 16. May 15 00:02:00.755445 sshd[3853]: Accepted publickey for core from 10.0.0.1 port 57896 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:02:00.757069 sshd-session[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:00.760705 systemd-logind[1492]: New session 17 of user core. May 15 00:02:00.766938 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 00:02:01.172198 sshd[3855]: Connection closed by 10.0.0.1 port 57896 May 15 00:02:01.174333 sshd-session[3853]: pam_unix(sshd:session): session closed for user core May 15 00:02:01.183751 systemd[1]: sshd@16-10.0.0.42:22-10.0.0.1:57896.service: Deactivated successfully. May 15 00:02:01.186118 systemd[1]: session-17.scope: Deactivated successfully. May 15 00:02:01.187989 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit. 
May 15 00:02:01.195291 systemd[1]: Started sshd@17-10.0.0.42:22-10.0.0.1:57908.service - OpenSSH per-connection server daemon (10.0.0.1:57908). May 15 00:02:01.196700 systemd-logind[1492]: Removed session 17. May 15 00:02:01.235795 sshd[3872]: Accepted publickey for core from 10.0.0.1 port 57908 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:02:01.237674 sshd-session[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:01.242692 systemd-logind[1492]: New session 18 of user core. May 15 00:02:01.250986 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 00:02:01.360944 sshd[3886]: Connection closed by 10.0.0.1 port 57908 May 15 00:02:01.361310 sshd-session[3872]: pam_unix(sshd:session): session closed for user core May 15 00:02:01.364898 systemd[1]: sshd@17-10.0.0.42:22-10.0.0.1:57908.service: Deactivated successfully. May 15 00:02:01.366685 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:02:01.367385 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit. May 15 00:02:01.368303 systemd-logind[1492]: Removed session 18. 
May 15 00:02:03.704212 kubelet[2613]: E0515 00:02:03.704128 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:02:05.325990 update_engine[1494]: I20250515 00:02:05.325911 1494 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 15 00:02:05.325990 update_engine[1494]: I20250515 00:02:05.325969 1494 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 15 00:02:05.326489 update_engine[1494]: I20250515 00:02:05.326200 1494 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 15 00:02:05.326846 update_engine[1494]: I20250515 00:02:05.326809 1494 omaha_request_params.cc:62] Current group set to stable May 15 00:02:05.327468 update_engine[1494]: I20250515 00:02:05.327409 1494 update_attempter.cc:499] Already updated boot flags. Skipping. May 15 00:02:05.327468 update_engine[1494]: I20250515 00:02:05.327445 1494 update_attempter.cc:643] Scheduling an action processor start. 
May 15 00:02:05.327468 update_engine[1494]: I20250515 00:02:05.327464 1494 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 00:02:05.327684 update_engine[1494]: I20250515 00:02:05.327505 1494 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 15 00:02:05.327684 update_engine[1494]: I20250515 00:02:05.327591 1494 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 00:02:05.327684 update_engine[1494]: I20250515 00:02:05.327603 1494 omaha_request_action.cc:272] Request: May 15 00:02:05.327684 update_engine[1494]: May 15 00:02:05.327684 update_engine[1494]: May 15 00:02:05.327684 update_engine[1494]: May 15 00:02:05.327684 update_engine[1494]: May 15 00:02:05.327684 update_engine[1494]: May 15 00:02:05.327684 update_engine[1494]: May 15 00:02:05.327684 update_engine[1494]: May 15 00:02:05.327684 update_engine[1494]: May 15 00:02:05.327684 update_engine[1494]: I20250515 00:02:05.327612 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 00:02:05.327981 locksmithd[1527]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 15 00:02:05.329679 update_engine[1494]: I20250515 00:02:05.329645 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 00:02:05.329991 update_engine[1494]: I20250515 00:02:05.329957 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 00:02:05.356725 update_engine[1494]: E20250515 00:02:05.356637 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 00:02:05.356884 update_engine[1494]: I20250515 00:02:05.356742 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 15 00:02:06.380031 systemd[1]: Started sshd@18-10.0.0.42:22-10.0.0.1:57916.service - OpenSSH per-connection server daemon (10.0.0.1:57916). 
May 15 00:02:06.423022 sshd[3922]: Accepted publickey for core from 10.0.0.1 port 57916 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:02:06.424983 sshd-session[3922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:06.429063 systemd-logind[1492]: New session 19 of user core. May 15 00:02:06.440991 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 00:02:06.559085 sshd[3924]: Connection closed by 10.0.0.1 port 57916 May 15 00:02:06.559603 sshd-session[3922]: pam_unix(sshd:session): session closed for user core May 15 00:02:06.565894 systemd[1]: sshd@18-10.0.0.42:22-10.0.0.1:57916.service: Deactivated successfully. May 15 00:02:06.568998 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:02:06.570032 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit. May 15 00:02:06.571138 systemd-logind[1492]: Removed session 19. May 15 00:02:11.570961 systemd[1]: Started sshd@19-10.0.0.42:22-10.0.0.1:58894.service - OpenSSH per-connection server daemon (10.0.0.1:58894). May 15 00:02:11.615801 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 58894 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:02:11.617684 sshd-session[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:11.622157 systemd-logind[1492]: New session 20 of user core. May 15 00:02:11.631907 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 00:02:11.745664 sshd[3962]: Connection closed by 10.0.0.1 port 58894 May 15 00:02:11.746096 sshd-session[3960]: pam_unix(sshd:session): session closed for user core May 15 00:02:11.750318 systemd[1]: sshd@19-10.0.0.42:22-10.0.0.1:58894.service: Deactivated successfully. May 15 00:02:11.752486 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:02:11.753160 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit. 
May 15 00:02:11.754388 systemd-logind[1492]: Removed session 20. May 15 00:02:13.704662 kubelet[2613]: E0515 00:02:13.704584 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:02:15.327802 update_engine[1494]: I20250515 00:02:15.327704 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 00:02:15.328305 update_engine[1494]: I20250515 00:02:15.328114 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 00:02:15.328396 update_engine[1494]: I20250515 00:02:15.328363 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 00:02:15.356893 update_engine[1494]: E20250515 00:02:15.356807 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 00:02:15.357032 update_engine[1494]: I20250515 00:02:15.356925 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 15 00:02:16.759067 systemd[1]: Started sshd@20-10.0.0.42:22-10.0.0.1:34286.service - OpenSSH per-connection server daemon (10.0.0.1:34286). May 15 00:02:16.803600 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 34286 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:02:16.805592 sshd-session[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:16.810856 systemd-logind[1492]: New session 21 of user core. May 15 00:02:16.818169 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 00:02:16.932738 sshd[3997]: Connection closed by 10.0.0.1 port 34286 May 15 00:02:16.933198 sshd-session[3995]: pam_unix(sshd:session): session closed for user core May 15 00:02:16.938162 systemd[1]: sshd@20-10.0.0.42:22-10.0.0.1:34286.service: Deactivated successfully. May 15 00:02:16.940892 systemd[1]: session-21.scope: Deactivated successfully. 
May 15 00:02:16.941675 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit. May 15 00:02:16.942648 systemd-logind[1492]: Removed session 21. May 15 00:02:17.704671 kubelet[2613]: E0515 00:02:17.704624 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:02:17.704671 kubelet[2613]: E0515 00:02:17.704654 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:02:21.945056 systemd[1]: Started sshd@21-10.0.0.42:22-10.0.0.1:34288.service - OpenSSH per-connection server daemon (10.0.0.1:34288). May 15 00:02:21.993626 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 34288 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:02:21.995476 sshd-session[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:02:22.000094 systemd-logind[1492]: New session 22 of user core. May 15 00:02:22.013044 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 00:02:22.136045 sshd[4032]: Connection closed by 10.0.0.1 port 34288 May 15 00:02:22.136546 sshd-session[4030]: pam_unix(sshd:session): session closed for user core May 15 00:02:22.141475 systemd[1]: sshd@21-10.0.0.42:22-10.0.0.1:34288.service: Deactivated successfully. May 15 00:02:22.144076 systemd[1]: session-22.scope: Deactivated successfully. May 15 00:02:22.144730 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit. May 15 00:02:22.145682 systemd-logind[1492]: Removed session 22.