May 16 00:19:45.010244 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:19:35 -00 2025 May 16 00:19:45.010267 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 00:19:45.010278 kernel: BIOS-provided physical RAM map: May 16 00:19:45.010284 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 16 00:19:45.010290 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 16 00:19:45.010296 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 16 00:19:45.010303 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 16 00:19:45.010310 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 16 00:19:45.010316 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable May 16 00:19:45.010322 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 16 00:19:45.010330 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable May 16 00:19:45.010336 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 16 00:19:45.010342 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 16 00:19:45.010349 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 16 00:19:45.010356 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 16 00:19:45.010363 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 16 00:19:45.010372 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable May 16 00:19:45.010379 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved May 16 00:19:45.010385 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS May 16 00:19:45.010392 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable May 16 00:19:45.010398 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 16 00:19:45.010405 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 16 00:19:45.010411 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 16 00:19:45.010418 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 16 00:19:45.010424 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 16 00:19:45.010431 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 16 00:19:45.010437 kernel: NX (Execute Disable) protection: active May 16 00:19:45.010447 kernel: APIC: Static calls initialized May 16 00:19:45.010453 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable May 16 00:19:45.010460 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable May 16 00:19:45.010466 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable May 16 00:19:45.010473 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable May 16 00:19:45.010479 kernel: extended physical RAM map: May 16 00:19:45.010486 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 16 00:19:45.010492 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable May 16 00:19:45.010499 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 16 00:19:45.010506 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 16 00:19:45.010512 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 16 00:19:45.010522 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable May 16 00:19:45.010528 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 16 00:19:45.010539 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable May 16 00:19:45.010546 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable May 16 00:19:45.010552 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable May 16 00:19:45.010559 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable May 16 00:19:45.010566 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable May 16 00:19:45.010576 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 16 00:19:45.010583 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 16 00:19:45.010590 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 16 00:19:45.010596 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 16 00:19:45.010603 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 16 00:19:45.010610 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable May 16 00:19:45.010617 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved May 16 00:19:45.010624 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS May 16 00:19:45.010631 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable May 16 00:19:45.010640 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 16 00:19:45.010647 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 16 00:19:45.010654 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 16 00:19:45.010661 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 16 00:19:45.010668 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 16 00:19:45.010675 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 16 00:19:45.010682 kernel: efi: EFI v2.7 by EDK II May 16 00:19:45.010701 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 May 16 00:19:45.010708 kernel: random: crng init done May 16 00:19:45.010715 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map May 16 00:19:45.010722 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved May 16 00:19:45.010731 kernel: secureboot: Secure boot disabled May 16 00:19:45.010738 kernel: SMBIOS 2.8 present. 
May 16 00:19:45.010745 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 May 16 00:19:45.010752 kernel: Hypervisor detected: KVM May 16 00:19:45.010759 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 16 00:19:45.010766 kernel: kvm-clock: using sched offset of 2950474807 cycles May 16 00:19:45.010773 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 16 00:19:45.010781 kernel: tsc: Detected 2794.748 MHz processor May 16 00:19:45.010788 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 16 00:19:45.010795 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 16 00:19:45.010802 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 May 16 00:19:45.010812 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 16 00:19:45.010819 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 16 00:19:45.010826 kernel: Using GB pages for direct mapping May 16 00:19:45.010833 kernel: ACPI: Early table checksum verification disabled May 16 00:19:45.010840 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 16 00:19:45.010848 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 16 00:19:45.010858 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:19:45.010868 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:19:45.010877 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 16 00:19:45.010887 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:19:45.010894 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:19:45.010901 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:19:45.010909 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:19:45.010916 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 16 00:19:45.010923 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 16 00:19:45.010930 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 16 00:19:45.010937 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 16 00:19:45.010946 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 16 00:19:45.010953 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 16 00:19:45.010960 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 16 00:19:45.010967 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 16 00:19:45.010974 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 16 00:19:45.010981 kernel: No NUMA configuration found May 16 00:19:45.010988 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] May 16 00:19:45.010995 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] May 16 00:19:45.011002 kernel: Zone ranges: May 16 00:19:45.011009 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 16 00:19:45.011019 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] May 16 00:19:45.011026 kernel: Normal empty May 16 00:19:45.011033 kernel: Movable zone start for each node May 16 00:19:45.011040 kernel: Early memory node ranges May 16 00:19:45.011047 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] May 16 00:19:45.011053 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 16 00:19:45.011060 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 16 00:19:45.011067 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] May 16 00:19:45.011074 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] May 16 00:19:45.011084 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] May 16 00:19:45.011091 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] May 16 00:19:45.011098 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] May 16 00:19:45.011105 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] May 16 00:19:45.011112 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 16 00:19:45.011120 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 16 00:19:45.011134 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 16 00:19:45.011144 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 16 00:19:45.011151 kernel: On node 0, zone DMA: 239 pages in unavailable ranges May 16 00:19:45.011158 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges May 16 00:19:45.011166 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 16 00:19:45.011287 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges May 16 00:19:45.011297 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges May 16 00:19:45.011305 kernel: ACPI: PM-Timer IO Port: 0x608 May 16 00:19:45.011313 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 16 00:19:45.011321 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 16 00:19:45.011328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 16 00:19:45.011338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 16 00:19:45.011345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 16 00:19:45.011353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 16 00:19:45.011360 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 16 00:19:45.011367 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 16 00:19:45.011375 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 16 00:19:45.011382 kernel: TSC deadline timer available May 16 00:19:45.011389 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 16 00:19:45.011397 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 16 00:19:45.011404 kernel: kvm-guest: KVM setup pv remote TLB flush May 16 00:19:45.011414 kernel: kvm-guest: setup PV sched yield May 16 00:19:45.011421 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices May 16 00:19:45.011428 kernel: Booting paravirtualized kernel on KVM May 16 00:19:45.011436 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 16 00:19:45.011443 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 16 00:19:45.011451 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 May 16 00:19:45.011458 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 May 16 00:19:45.011465 kernel: pcpu-alloc: [0] 0 1 2 3 May 16 00:19:45.011472 kernel: kvm-guest: PV spinlocks enabled May 16 00:19:45.011482 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 16 00:19:45.011491 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 00:19:45.011499 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 00:19:45.011506 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 00:19:45.011514 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:19:45.011521 kernel: Fallback order for Node 0: 0 May 16 00:19:45.011528 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 May 16 00:19:45.011536 kernel: Policy zone: DMA32 May 16 00:19:45.011545 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 00:19:45.011553 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2295K rwdata, 22752K rodata, 42988K init, 2204K bss, 175776K reserved, 0K cma-reserved) May 16 00:19:45.011561 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 16 00:19:45.011568 kernel: ftrace: allocating 37950 entries in 149 pages May 16 00:19:45.011575 kernel: ftrace: allocated 149 pages with 4 groups May 16 00:19:45.011583 kernel: Dynamic Preempt: voluntary May 16 00:19:45.011590 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 00:19:45.011598 kernel: rcu: RCU event tracing is enabled. May 16 00:19:45.011606 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 16 00:19:45.011616 kernel: Trampoline variant of Tasks RCU enabled. May 16 00:19:45.011623 kernel: Rude variant of Tasks RCU enabled. May 16 00:19:45.011631 kernel: Tracing variant of Tasks RCU enabled. May 16 00:19:45.011638 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 16 00:19:45.011646 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 16 00:19:45.011653 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 16 00:19:45.011661 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 16 00:19:45.011668 kernel: Console: colour dummy device 80x25 May 16 00:19:45.011676 kernel: printk: console [ttyS0] enabled May 16 00:19:45.011701 kernel: ACPI: Core revision 20230628 May 16 00:19:45.011709 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 16 00:19:45.011717 kernel: APIC: Switch to symmetric I/O mode setup May 16 00:19:45.011724 kernel: x2apic enabled May 16 00:19:45.011731 kernel: APIC: Switched APIC routing to: physical x2apic May 16 00:19:45.011739 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 16 00:19:45.011747 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 16 00:19:45.011754 kernel: kvm-guest: setup PV IPIs May 16 00:19:45.011761 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 16 00:19:45.011772 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 16 00:19:45.011779 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 16 00:19:45.011786 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 16 00:19:45.011794 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 16 00:19:45.011801 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 16 00:19:45.011809 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 16 00:19:45.011817 kernel: Spectre V2 : Mitigation: Retpolines May 16 00:19:45.011824 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 16 00:19:45.011831 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 16 00:19:45.011841 kernel: RETBleed: Mitigation: untrained return thunk May 16 00:19:45.011850 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 16 00:19:45.011861 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 16 00:19:45.011871 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 16 00:19:45.011886 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 16 00:19:45.011900 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 16 00:19:45.011908 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 16 00:19:45.011915 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 16 00:19:45.011927 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 16 00:19:45.011934 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 16 00:19:45.011941 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 16 00:19:45.011949 kernel: Freeing SMP alternatives memory: 32K May 16 00:19:45.011956 kernel: pid_max: default: 32768 minimum: 301 May 16 00:19:45.011964 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 16 00:19:45.011971 kernel: landlock: Up and running. May 16 00:19:45.011978 kernel: SELinux: Initializing. May 16 00:19:45.011986 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 00:19:45.011995 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 00:19:45.012003 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 16 00:19:45.012010 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 16 00:19:45.012018 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 16 00:19:45.012026 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 16 00:19:45.012033 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 16 00:19:45.012040 kernel: ... version: 0 May 16 00:19:45.012048 kernel: ... bit width: 48 May 16 00:19:45.012055 kernel: ... generic registers: 6 May 16 00:19:45.012065 kernel: ... value mask: 0000ffffffffffff May 16 00:19:45.012073 kernel: ... max period: 00007fffffffffff May 16 00:19:45.012080 kernel: ... fixed-purpose events: 0 May 16 00:19:45.012087 kernel: ... event mask: 000000000000003f May 16 00:19:45.012095 kernel: signal: max sigframe size: 1776 May 16 00:19:45.012102 kernel: rcu: Hierarchical SRCU implementation. 
May 16 00:19:45.012110 kernel: rcu: Max phase no-delay instances is 400. May 16 00:19:45.012117 kernel: smp: Bringing up secondary CPUs ... May 16 00:19:45.012124 kernel: smpboot: x86: Booting SMP configuration: May 16 00:19:45.012134 kernel: .... node #0, CPUs: #1 #2 #3 May 16 00:19:45.012141 kernel: smp: Brought up 1 node, 4 CPUs May 16 00:19:45.012149 kernel: smpboot: Max logical packages: 1 May 16 00:19:45.012156 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 16 00:19:45.012164 kernel: devtmpfs: initialized May 16 00:19:45.012171 kernel: x86/mm: Memory block size: 128MB May 16 00:19:45.012178 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 16 00:19:45.012186 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 16 00:19:45.012194 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) May 16 00:19:45.012203 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 16 00:19:45.012221 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) May 16 00:19:45.012228 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 16 00:19:45.012236 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 00:19:45.012244 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 16 00:19:45.012252 kernel: pinctrl core: initialized pinctrl subsystem May 16 00:19:45.012259 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 00:19:45.012266 kernel: audit: initializing netlink subsys (disabled) May 16 00:19:45.012274 kernel: audit: type=2000 audit(1747354784.133:1): state=initialized audit_enabled=0 res=1 May 16 00:19:45.012284 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 00:19:45.012291 kernel: thermal_sys: Registered thermal governor 'user_space' May 16 00:19:45.012299 kernel: cpuidle: using governor menu May 16 00:19:45.012306 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 00:19:45.012313 kernel: dca service started, version 1.12.1 May 16 00:19:45.012321 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) May 16 00:19:45.012328 kernel: PCI: Using configuration type 1 for base access May 16 00:19:45.012336 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 16 00:19:45.012343 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 16 00:19:45.012353 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 16 00:19:45.012361 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 16 00:19:45.012368 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 16 00:19:45.012376 kernel: ACPI: Added _OSI(Module Device) May 16 00:19:45.012383 kernel: ACPI: Added _OSI(Processor Device) May 16 00:19:45.012390 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 00:19:45.012398 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 00:19:45.012405 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 16 00:19:45.012413 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 16 00:19:45.012422 kernel: ACPI: Interpreter enabled May 16 00:19:45.012430 kernel: ACPI: PM: (supports S0 S3 S5) May 16 00:19:45.012437 kernel: ACPI: Using IOAPIC for interrupt routing May 16 00:19:45.012445 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 16 00:19:45.012452 kernel: PCI: Using E820 reservations for host bridge windows May 16 00:19:45.012459 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 16 00:19:45.012467 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 00:19:45.012670 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:19:45.012829 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 16 00:19:45.012975 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 16 00:19:45.012989 kernel: PCI host bridge to bus 0000:00 May 16 00:19:45.013118 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 16 00:19:45.013241 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 16 00:19:45.013351 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 16 00:19:45.013457 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] May 16 00:19:45.013571 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] May 16 00:19:45.013680 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] May 16 00:19:45.013806 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 00:19:45.013977 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 16 00:19:45.014112 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 16 00:19:45.014244 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 16 00:19:45.014370 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 16 00:19:45.014561 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 16 00:19:45.014714 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 16 00:19:45.014836 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 16 00:19:45.014997 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 16 00:19:45.015127 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 16 00:19:45.015262 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 16 00:19:45.015390 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] May 16 00:19:45.015556 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 16 00:19:45.015772 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 
16 00:19:45.015925 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 16 00:19:45.016078 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] May 16 00:19:45.016254 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 16 00:19:45.016408 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 16 00:19:45.016566 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 16 00:19:45.016736 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] May 16 00:19:45.016880 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 16 00:19:45.017033 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 16 00:19:45.017180 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 16 00:19:45.017357 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 16 00:19:45.017520 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 16 00:19:45.017681 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 16 00:19:45.017872 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 16 00:19:45.018029 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 16 00:19:45.018048 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 16 00:19:45.018061 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 16 00:19:45.018073 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 16 00:19:45.018084 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 16 00:19:45.018100 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 16 00:19:45.018110 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 16 00:19:45.018120 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 16 00:19:45.018131 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 16 00:19:45.018142 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 16 00:19:45.018152 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 16 00:19:45.018163 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 16 00:19:45.018173 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 16 00:19:45.018183 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 16 00:19:45.018197 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 16 00:19:45.018220 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 16 00:19:45.018347 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 16 00:19:45.018357 kernel: iommu: Default domain type: Translated May 16 00:19:45.018368 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 16 00:19:45.018378 kernel: efivars: Registered efivars operations May 16 00:19:45.018388 kernel: PCI: Using ACPI for IRQ routing May 16 00:19:45.018399 kernel: PCI: pci_cache_line_size set to 64 bytes May 16 00:19:45.018410 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 16 00:19:45.018423 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] May 16 00:19:45.018433 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] May 16 00:19:45.018443 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] May 16 00:19:45.018454 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] May 16 00:19:45.018465 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] May 16 00:19:45.018475 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] May 16 00:19:45.018485 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] May 16 00:19:45.018673 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 16 00:19:45.018872 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 16 00:19:45.019039 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 16 00:19:45.019055 kernel: vgaarb: loaded May 16 00:19:45.019065 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 16 00:19:45.019077 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 16 00:19:45.019087 kernel: clocksource: Switched to clocksource kvm-clock May 16 00:19:45.019097 kernel: VFS: Disk quotas dquot_6.6.0 May 16 00:19:45.019108 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 00:19:45.019119 kernel: pnp: PnP ACPI init May 16 00:19:45.019311 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved May 16 00:19:45.019330 kernel: pnp: PnP ACPI: found 6 devices May 16 00:19:45.019340 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 16 00:19:45.019351 kernel: NET: Registered PF_INET protocol family May 16 00:19:45.019362 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 16 00:19:45.019397 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 16 00:19:45.019411 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 00:19:45.019422 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 00:19:45.019435 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 16 00:19:45.019446 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 16 00:19:45.019457 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 00:19:45.019468 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 00:19:45.019478 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 00:19:45.019489 kernel: NET: Registered PF_XDP protocol family May 16 00:19:45.019655 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 16 00:19:45.019839 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 16 00:19:45.020003 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 16 00:19:45.020156 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 16 00:19:45.020316 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 16 00:19:45.020466 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] May 16 00:19:45.020614 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] May 16 00:19:45.020863 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] May 16 00:19:45.020880 kernel: PCI: CLS 0 bytes, default 64 May 16 00:19:45.020892 kernel: Initialise system trusted keyrings May 16 00:19:45.020903 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 16 00:19:45.020919 kernel: Key type asymmetric registered May 16 00:19:45.020930 kernel: Asymmetric key parser 'x509' registered May 16 00:19:45.020941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 16 00:19:45.020952 kernel: io scheduler mq-deadline registered May 16 00:19:45.020963 kernel: io scheduler kyber registered May 16 00:19:45.020974 kernel: io scheduler bfq registered May 16 
00:19:45.020985 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 16 00:19:45.020997 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 16 00:19:45.021008 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 16 00:19:45.021023 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 16 00:19:45.021038 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 00:19:45.021050 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 16 00:19:45.021064 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 16 00:19:45.021075 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 16 00:19:45.021086 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 16 00:19:45.021260 kernel: rtc_cmos 00:04: RTC can wake from S4 May 16 00:19:45.021278 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 16 00:19:45.021419 kernel: rtc_cmos 00:04: registered as rtc0 May 16 00:19:45.021560 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T00:19:44 UTC (1747354784) May 16 00:19:45.021726 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 16 00:19:45.021743 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 16 00:19:45.021755 kernel: efifb: probing for efifb May 16 00:19:45.021770 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 16 00:19:45.021782 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 16 00:19:45.021793 kernel: efifb: scrolling: redraw May 16 00:19:45.021804 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 16 00:19:45.021815 kernel: Console: switching to colour frame buffer device 160x50 May 16 00:19:45.021826 kernel: fb0: EFI VGA frame buffer device May 16 00:19:45.021837 kernel: pstore: Using crash dump compression: deflate May 16 00:19:45.021848 kernel: pstore: Registered efi_pstore as persistent store backend May 16 00:19:45.021859 kernel: NET: Registered PF_INET6 protocol family May 16 00:19:45.021870 kernel: Segment Routing with IPv6 May 16 00:19:45.021883 kernel: In-situ OAM (IOAM) with IPv6 May 16 00:19:45.021894 kernel: NET: Registered PF_PACKET protocol family May 16 00:19:45.021905 kernel: Key type dns_resolver registered May 16 00:19:45.021916 kernel: IPI shorthand broadcast: enabled May 16 00:19:45.021926 kernel: sched_clock: Marking stable (694003996, 184505473)->(901994774, -23485305) May 16 00:19:45.021937 kernel: registered taskstats version 1 May 16 00:19:45.021948 kernel: Loading compiled-in X.509 certificates May 16 00:19:45.021958 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 563478d245b598189519397611f5bddee97f3fc1' May 16 00:19:45.021969 kernel: Key type .fscrypt registered May 16 00:19:45.021983 kernel: Key type fscrypt-provisioning registered May 16 00:19:45.021994 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 16 00:19:45.022005 kernel: ima: Allocated hash algorithm: sha1 May 16 00:19:45.022016 kernel: ima: No architecture policies found May 16 00:19:45.022027 kernel: clk: Disabling unused clocks May 16 00:19:45.022039 kernel: Freeing unused kernel image (initmem) memory: 42988K May 16 00:19:45.022050 kernel: Write protecting the kernel read-only data: 36864k May 16 00:19:45.022062 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K May 16 00:19:45.022076 kernel: Run /init as init process May 16 00:19:45.022087 kernel: with arguments: May 16 00:19:45.022098 kernel: /init May 16 00:19:45.022109 kernel: with environment: May 16 00:19:45.022121 kernel: HOME=/ May 16 00:19:45.022131 kernel: TERM=linux May 16 00:19:45.022142 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:19:45.022156 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 16 00:19:45.022174 systemd[1]: Detected virtualization kvm. May 16 00:19:45.022186 systemd[1]: Detected architecture x86-64. May 16 00:19:45.022198 systemd[1]: Running in initrd. May 16 00:19:45.022220 systemd[1]: No hostname configured, using default hostname. May 16 00:19:45.022231 systemd[1]: Hostname set to . May 16 00:19:45.022244 systemd[1]: Initializing machine ID from VM UUID. May 16 00:19:45.022256 systemd[1]: Queued start job for default target initrd.target. May 16 00:19:45.022268 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:19:45.022283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:19:45.022297 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 00:19:45.022309 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:19:45.022321 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 00:19:45.022334 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 00:19:45.022348 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 00:19:45.022360 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 00:19:45.022391 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:19:45.022404 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:19:45.022415 systemd[1]: Reached target paths.target - Path Units. May 16 00:19:45.022444 systemd[1]: Reached target slices.target - Slice Units. May 16 00:19:45.022465 systemd[1]: Reached target swap.target - Swaps. May 16 00:19:45.022478 systemd[1]: Reached target timers.target - Timer Units. May 16 00:19:45.022506 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:19:45.022527 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:19:45.022562 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 00:19:45.022594 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
May 16 00:19:45.022608 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:19:45.022620 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:19:45.022632 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:19:45.022644 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:19:45.022657 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 00:19:45.022669 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:19:45.022682 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 00:19:45.022800 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:19:45.022813 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:19:45.022825 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:19:45.022838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:45.022849 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 00:19:45.022861 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:19:45.022872 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:19:45.022888 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:19:45.022933 systemd-journald[194]: Collecting audit messages is disabled. May 16 00:19:45.022964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:45.022976 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:19:45.022988 systemd-journald[194]: Journal started May 16 00:19:45.023011 systemd-journald[194]: Runtime Journal (/run/log/journal/486a00b44de04558ac63e481bd8098c2) is 6.0M, max 48.3M, 42.2M free. May 16 00:19:45.027147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:19:45.024572 systemd-modules-load[195]: Inserted module 'overlay' May 16 00:19:45.031726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:19:45.034707 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:19:45.037437 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:19:45.043161 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:19:45.044828 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 00:19:45.048709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:19:45.058323 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:19:45.065799 dracut-cmdline[220]: dracut-dracut-053 May 16 00:19:45.069610 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 00:19:45.081712 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. May 16 00:19:45.083665 systemd-modules-load[195]: Inserted module 'br_netfilter' May 16 00:19:45.084600 kernel: Bridge firewalling registered May 16 00:19:45.086104 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:19:45.092883 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:19:45.103131 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:19:45.105289 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:19:45.155768 systemd-resolved[266]: Positive Trust Anchors: May 16 00:19:45.155788 systemd-resolved[266]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:19:45.155825 systemd-resolved[266]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:19:45.158822 systemd-resolved[266]: Defaulting to hostname 'linux'. May 16 00:19:45.166459 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:19:45.170116 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:19:45.173350 kernel: SCSI subsystem initialized May 16 00:19:45.181747 kernel: Loading iSCSI transport class v2.0-870. May 16 00:19:45.193752 kernel: iscsi: registered transport (tcp) May 16 00:19:45.214989 kernel: iscsi: registered transport (qla4xxx) May 16 00:19:45.215071 kernel: QLogic iSCSI HBA Driver May 16 00:19:45.273234 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 00:19:45.287912 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 00:19:45.316969 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 00:19:45.317058 kernel: device-mapper: uevent: version 1.0.3 May 16 00:19:45.317076 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 00:19:45.366753 kernel: raid6: avx2x4 gen() 22558 MB/s May 16 00:19:45.383742 kernel: raid6: avx2x2 gen() 23441 MB/s May 16 00:19:45.401043 kernel: raid6: avx2x1 gen() 18987 MB/s May 16 00:19:45.401129 kernel: raid6: using algorithm avx2x2 gen() 23441 MB/s May 16 00:19:45.419113 kernel: raid6: .... xor() 15615 MB/s, rmw enabled May 16 00:19:45.419222 kernel: raid6: using avx2x2 recovery algorithm May 16 00:19:45.441748 kernel: xor: automatically using best checksumming function avx May 16 00:19:45.638729 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 00:19:45.654790 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 00:19:45.662952 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:19:45.679145 systemd-udevd[413]: Using default interface naming scheme 'v255'. May 16 00:19:45.683954 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:19:45.698881 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 16 00:19:45.713686 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation May 16 00:19:45.753954 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:19:45.764872 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:19:45.835968 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:19:45.846895 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 00:19:45.858570 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 00:19:45.861137 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:19:45.861733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:19:45.862120 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:19:45.873003 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 00:19:45.888702 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 00:19:45.896718 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 16 00:19:45.897728 kernel: cryptd: max_cpu_qlen set to 1000 May 16 00:19:45.905721 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 00:19:45.911717 kernel: libata version 3.00 loaded. May 16 00:19:45.913452 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:19:45.913583 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:19:45.915535 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:19:45.929910 kernel: AVX2 version of gcm_enc/dec engaged. May 16 00:19:45.930652 kernel: ahci 0000:00:1f.2: version 3.0 May 16 00:19:45.930934 kernel: AES CTR mode by8 optimization enabled May 16 00:19:45.930952 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 16 00:19:45.930969 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 16 00:19:45.931166 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 16 00:19:45.931380 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:19:45.920097 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:19:45.943808 kernel: GPT:9289727 != 19775487 May 16 00:19:45.943844 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:19:45.943865 kernel: GPT:9289727 != 19775487 May 16 00:19:45.943880 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:19:45.943896 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:19:45.943911 kernel: scsi host0: ahci May 16 00:19:45.944149 kernel: scsi host1: ahci May 16 00:19:45.944374 kernel: scsi host2: ahci May 16 00:19:45.944562 kernel: scsi host3: ahci May 16 00:19:45.944797 kernel: scsi host4: ahci May 16 00:19:45.944997 kernel: scsi host5: ahci May 16 00:19:45.945205 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 May 16 00:19:45.945223 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 May 16 00:19:45.945238 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 May 16 00:19:45.920299 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 16 00:19:45.950990 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 May 16 00:19:45.951019 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 May 16 00:19:45.951035 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 May 16 00:19:45.922421 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:45.934576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:45.963068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:45.967729 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (472) May 16 00:19:45.968188 kernel: BTRFS: device fsid da1480a3-a7d8-4e12-bbe1-1257540eb9ae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (474) May 16 00:19:45.991602 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 00:19:45.998603 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 00:19:46.011219 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 00:19:46.016931 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 00:19:46.017249 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 00:19:46.037877 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 00:19:46.038231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:19:46.038303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:46.041564 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:46.045968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:46.061886 disk-uuid[558]: Primary Header is updated. May 16 00:19:46.061886 disk-uuid[558]: Secondary Entries is updated. May 16 00:19:46.061886 disk-uuid[558]: Secondary Header is updated. May 16 00:19:46.066743 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:19:46.068748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:46.074710 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:19:46.080722 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:19:46.110391 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 00:19:46.258892 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 16 00:19:46.258987 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 16 00:19:46.259728 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 16 00:19:46.260724 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 16 00:19:46.261739 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 16 00:19:46.262729 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 16 00:19:46.263734 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 16 00:19:46.264901 kernel: ata3.00: applying bridge limits May 16 00:19:46.265726 kernel: ata3.00: configured for UDMA/100 May 16 00:19:46.267724 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 16 00:19:46.323318 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 16 00:19:46.323757 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 00:19:46.335733 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 16 00:19:47.093717 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:19:47.093976 disk-uuid[560]: The operation has completed successfully. May 16 00:19:47.125575 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:19:47.125746 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 00:19:47.149909 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 00:19:47.166762 sh[598]: Success May 16 00:19:47.178727 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 16 00:19:47.210994 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 00:19:47.220221 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 00:19:47.222558 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 00:19:47.264343 kernel: BTRFS info (device dm-0): first mount of filesystem da1480a3-a7d8-4e12-bbe1-1257540eb9ae May 16 00:19:47.264391 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:47.264414 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 00:19:47.265403 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 00:19:47.266165 kernel: BTRFS info (device dm-0): using free space tree May 16 00:19:47.271508 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 00:19:47.288324 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 00:19:47.302863 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 00:19:47.304563 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 00:19:47.314812 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:19:47.314855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:47.314866 kernel: BTRFS info (device vda6): using free space tree May 16 00:19:47.318737 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:19:47.327096 systemd[1]: mnt-oem.mount: Deactivated successfully. May 16 00:19:47.344024 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:19:47.404865 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 16 00:19:47.416836 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:19:47.438553 systemd-networkd[776]: lo: Link UP May 16 00:19:47.438564 systemd-networkd[776]: lo: Gained carrier May 16 00:19:47.440442 systemd-networkd[776]: Enumeration completed May 16 00:19:47.440923 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:19:47.440928 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:19:47.442225 systemd-networkd[776]: eth0: Link UP May 16 00:19:47.442229 systemd-networkd[776]: eth0: Gained carrier May 16 00:19:47.442237 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:19:47.453090 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:19:47.453638 systemd[1]: Reached target network.target - Network. May 16 00:19:47.477736 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:19:47.572979 systemd-resolved[266]: Detected conflict on linux IN A 10.0.0.15 May 16 00:19:47.573000 systemd-resolved[266]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. May 16 00:19:47.616352 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 00:19:47.625851 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 00:19:47.675229 ignition[781]: Ignition 2.20.0 May 16 00:19:47.675242 ignition[781]: Stage: fetch-offline May 16 00:19:47.675289 ignition[781]: no configs at "/usr/lib/ignition/base.d" May 16 00:19:47.675298 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:19:47.675407 ignition[781]: parsed url from cmdline: "" May 16 00:19:47.675412 ignition[781]: no config URL provided May 16 00:19:47.675419 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:19:47.675437 ignition[781]: no config at "/usr/lib/ignition/user.ign" May 16 00:19:47.675470 ignition[781]: op(1): [started] loading QEMU firmware config module May 16 00:19:47.675475 ignition[781]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 00:19:47.684106 ignition[781]: op(1): [finished] loading QEMU firmware config module May 16 00:19:47.728962 ignition[781]: parsing config with SHA512: 5f624fa8913f81013ae4f295d412329f00fb0d3f726b028a8f3084c17831d9d4adf77a078d5a6dd952772923d8a45419ee0ad6f4a69fc59d0fd48f35dc0ead81 May 16 00:19:47.733393 unknown[781]: fetched base config from "system" May 16 00:19:47.733548 unknown[781]: fetched user config from "qemu" May 16 00:19:47.752731 ignition[781]: fetch-offline: fetch-offline passed May 16 00:19:47.752866 ignition[781]: Ignition finished successfully May 16 00:19:47.757216 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:19:47.760236 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:19:47.774898 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
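The fetch-offline stage above reports the digest of the config it parsed ("parsing config with SHA512: 5f624fa8..."). A minimal sketch, assuming the digest is taken over the raw config bytes, for reproducing that value from a local copy of the config; the path "user.ign" is only a placeholder, not taken from this log:

```python
import hashlib

def ignition_config_sha512(path: str) -> str:
    """SHA512 hex digest of a config file, read as raw bytes."""
    with open(path, "rb") as fh:
        return hashlib.sha512(fh.read()).hexdigest()

if __name__ == "__main__":
    # "user.ign" is a placeholder; point this at the config actually supplied to QEMU.
    print(ignition_config_sha512("user.ign"))
```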
May 16 00:19:47.791210 ignition[792]: Ignition 2.20.0 May 16 00:19:47.791222 ignition[792]: Stage: kargs May 16 00:19:47.791419 ignition[792]: no configs at "/usr/lib/ignition/base.d" May 16 00:19:47.791433 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:19:47.792401 ignition[792]: kargs: kargs passed May 16 00:19:47.792452 ignition[792]: Ignition finished successfully May 16 00:19:47.800912 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 00:19:47.811957 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 00:19:47.823724 ignition[801]: Ignition 2.20.0 May 16 00:19:47.823735 ignition[801]: Stage: disks May 16 00:19:47.823903 ignition[801]: no configs at "/usr/lib/ignition/base.d" May 16 00:19:47.823913 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:19:47.824681 ignition[801]: disks: disks passed May 16 00:19:47.824738 ignition[801]: Ignition finished successfully May 16 00:19:47.871475 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 00:19:47.874153 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 00:19:47.874456 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 00:19:47.877130 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:19:47.880292 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:19:47.880967 systemd[1]: Reached target basic.target - Basic System. May 16 00:19:47.897897 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 00:19:47.926162 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 16 00:19:48.287371 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 00:19:48.303968 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 00:19:48.473737 kernel: EXT4-fs (vda9): mounted filesystem 13a141f5-2ff0-46d9-bee3-974c86536128 r/w with ordered data mode. Quota mode: none. May 16 00:19:48.474548 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 00:19:48.475835 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 00:19:48.486800 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:19:48.489253 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 00:19:48.522175 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 00:19:48.522222 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:19:48.522248 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:19:48.525745 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 00:19:48.528499 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 16 00:19:48.558723 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (820) May 16 00:19:48.561333 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:19:48.561363 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:48.561375 kernel: BTRFS info (device vda6): using free space tree May 16 00:19:48.564713 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:19:48.566730 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 00:19:48.584728 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:19:48.588529 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory May 16 00:19:48.593167 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:19:48.596408 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:19:48.643901 systemd-networkd[776]: eth0: Gained IPv6LL May 16 00:19:48.690504 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 00:19:48.698823 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 00:19:48.701944 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 00:19:48.710553 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 00:19:48.712306 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:19:48.731411 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 00:19:48.842729 ignition[938]: INFO : Ignition 2.20.0 May 16 00:19:48.842729 ignition[938]: INFO : Stage: mount May 16 00:19:48.844621 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:19:48.844621 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:19:48.844621 ignition[938]: INFO : mount: mount passed May 16 00:19:48.844621 ignition[938]: INFO : Ignition finished successfully May 16 00:19:48.846464 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 00:19:48.857900 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 00:19:49.483969 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:19:49.491732 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (949) May 16 00:19:49.494494 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:19:49.494524 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:49.494539 kernel: BTRFS info (device vda6): using free space tree May 16 00:19:49.499712 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:19:49.500738 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 00:19:49.521393 ignition[966]: INFO : Ignition 2.20.0 May 16 00:19:49.521393 ignition[966]: INFO : Stage: files May 16 00:19:49.523540 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:19:49.523540 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:19:49.523540 ignition[966]: DEBUG : files: compiled without relabeling support, skipping May 16 00:19:49.527384 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:19:49.527384 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:19:49.530150 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:19:49.531488 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:19:49.533402 unknown[966]: wrote ssh authorized keys file for user: core May 16 00:19:49.534780 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:19:49.537710 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 16 00:19:49.540016 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 16 00:19:49.582543 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:19:49.702646 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 16 00:19:49.702646 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:19:49.706884 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 16 00:19:50.554262 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 16 00:19:51.134779 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:19:51.134779 ignition[966]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 16 00:19:51.139357 ignition[966]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:19:51.141945 ignition[966]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:19:51.141945 ignition[966]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 16 00:19:51.141945 ignition[966]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 16 00:19:51.146888 ignition[966]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:19:51.148852 ignition[966]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:19:51.148852 ignition[966]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 16 00:19:51.152243 ignition[966]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:19:51.185211 ignition[966]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:19:51.208884 ignition[966]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:19:51.208884 ignition[966]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:19:51.208884 ignition[966]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 16 00:19:51.208884 ignition[966]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:19:51.208884 ignition[966]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:19:51.208884 ignition[966]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:19:51.208884 ignition[966]: INFO : files: files passed May 16 00:19:51.208884 ignition[966]: INFO : Ignition finished successfully May 16 00:19:51.244923 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 00:19:51.257943 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 00:19:51.260057 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 00:19:51.262204 systemd[1]: ignition-quench.service: Deactivated successfully. 
May 16 00:19:51.262314 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 16 00:19:51.270787 initrd-setup-root-after-ignition[995]: grep: /sysroot/oem/oem-release: No such file or directory May 16 00:19:51.273564 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:19:51.273564 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 00:19:51.277221 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:19:51.278518 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:19:51.302431 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 00:19:51.310981 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 00:19:51.336556 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:19:51.336706 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 00:19:51.339684 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 00:19:51.341705 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 00:19:51.342965 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 00:19:51.352914 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 00:19:51.366185 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:19:51.369164 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 00:19:51.383508 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 00:19:51.400642 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:19:51.403333 systemd[1]: Stopped target timers.target - Timer Units. May 16 00:19:51.405669 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:19:51.405847 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:19:51.408393 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 00:19:51.410296 systemd[1]: Stopped target basic.target - Basic System. May 16 00:19:51.412890 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 00:19:51.415328 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:19:51.417770 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 00:19:51.439950 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 00:19:51.442363 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:19:51.444967 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 00:19:51.447277 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 00:19:51.449867 systemd[1]: Stopped target swap.target - Swaps. May 16 00:19:51.451970 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:19:51.452152 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 00:19:51.454613 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
May 16 00:19:51.456487 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:19:51.458978 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 00:19:51.459146 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:19:51.461634 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:19:51.461812 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 00:19:51.464423 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:19:51.464540 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:19:51.466798 systemd[1]: Stopped target paths.target - Path Units. May 16 00:19:51.468794 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:19:51.468922 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:19:51.471801 systemd[1]: Stopped target slices.target - Slice Units. May 16 00:19:51.473924 systemd[1]: Stopped target sockets.target - Socket Units. May 16 00:19:51.476126 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:19:51.476226 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:19:51.489171 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:19:51.489309 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:19:51.491561 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:19:51.491747 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:19:51.493915 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:19:51.494069 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 00:19:51.507957 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 00:19:51.509495 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:19:51.509629 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:19:51.512965 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 00:19:51.513998 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:19:51.514139 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:19:51.516999 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:19:51.517211 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:19:51.528512 ignition[1021]: INFO : Ignition 2.20.0 May 16 00:19:51.528512 ignition[1021]: INFO : Stage: umount May 16 00:19:51.528512 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:19:51.528512 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:19:51.528512 ignition[1021]: INFO : umount: umount passed May 16 00:19:51.528512 ignition[1021]: INFO : Ignition finished successfully May 16 00:19:51.522807 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:19:51.522940 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 00:19:51.526618 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:19:51.526768 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 00:19:51.529086 systemd[1]: Stopped target network.target - Network. 
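The umount stage above closes out the Ignition run; across this boot the journal records the stages fetch-offline, kargs, disks, mount, files and umount, each ending with "Ignition finished successfully". A small sketch (not part of Ignition itself) that pulls that stage sequence out of journal text; the journalctl pipeline in the comment is just one assumed way of capturing the log:

```python
import re
import sys

# Matches the stage banner Ignition writes to the journal, e.g.
# "ignition[792]: Stage: kargs" or "ignition[966]: INFO : Stage: files".
STAGE_RE = re.compile(r"ignition\[\d+\]:\s+(?:INFO : )?Stage: ([\w-]+)")

def ignition_stages(journal_text: str) -> list[str]:
    """Return Ignition stage names in the order they appear in the journal text."""
    return STAGE_RE.findall(journal_text)

if __name__ == "__main__":
    # e.g.:  journalctl -b | python3 this_script.py
    print(" -> ".join(ignition_stages(sys.stdin.read())))
```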
May 16 00:19:51.530429 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:19:51.530495 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 00:19:51.532450 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:19:51.532497 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 00:19:51.534662 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:19:51.534726 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 00:19:51.537271 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 00:19:51.537335 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 00:19:51.539727 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 00:19:51.542049 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 00:19:51.548750 systemd-networkd[776]: eth0: DHCPv6 lease lost May 16 00:19:51.551101 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:19:51.551872 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:19:51.552068 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 00:19:51.554748 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:19:51.554948 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 00:19:51.558656 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:19:51.558747 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 00:19:51.597977 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 00:19:51.611133 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:19:51.611257 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:19:51.614059 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:19:51.614110 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:19:51.616640 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:19:51.616707 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 00:19:51.617211 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 00:19:51.617257 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:19:51.617762 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:19:51.627431 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:19:51.627646 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 00:19:51.640054 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:19:51.640299 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:19:51.649283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:19:51.649350 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 00:19:51.651613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:19:51.651655 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:19:51.654099 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:19:51.654154 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
May 16 00:19:51.656931 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:19:51.657005 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 00:19:51.659168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:19:51.659230 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:19:51.670996 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 00:19:51.697320 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:19:51.698660 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:19:51.701880 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:19:51.703228 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:51.706386 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:19:51.707747 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 00:19:52.101461 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:19:52.102618 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 00:19:52.105396 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 00:19:52.107621 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:19:52.108813 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 00:19:52.126992 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 00:19:52.138883 systemd[1]: Switching root. May 16 00:19:52.169303 systemd-journald[194]: Journal stopped May 16 00:19:54.182993 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). May 16 00:19:54.183060 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:19:54.183079 kernel: SELinux: policy capability open_perms=1 May 16 00:19:54.183091 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:19:54.183102 kernel: SELinux: policy capability always_check_network=0 May 16 00:19:54.183114 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:19:54.183134 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:19:54.183148 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:19:54.183159 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:19:54.183175 kernel: audit: type=1403 audit(1747354793.268:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:19:54.183190 systemd[1]: Successfully loaded SELinux policy in 45.699ms. May 16 00:19:54.183209 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.820ms. May 16 00:19:54.183223 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 16 00:19:54.183235 systemd[1]: Detected virtualization kvm. May 16 00:19:54.183247 systemd[1]: Detected architecture x86-64. May 16 00:19:54.183259 systemd[1]: Detected first boot. May 16 00:19:54.183276 systemd[1]: Initializing machine ID from VM UUID. May 16 00:19:54.183288 zram_generator::config[1066]: No configuration found. May 16 00:19:54.183302 systemd[1]: Populated /etc with preset unit settings. 
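The "systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR ...)" banner above lists compile-time options, "+" for enabled and "-" for disabled. A small helper (hypothetical, not part of systemd) to split such a banner; the string passed below is a shortened excerpt of the line above:

```python
def split_features(banner: str) -> tuple[set[str], set[str]]:
    """Split a systemd feature banner into (enabled, disabled) option names."""
    enabled, disabled = set(), set()
    for token in banner.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

# Shortened excerpt of the banner logged above:
enabled, disabled = split_features("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SECCOMP +GCRYPT -GNUTLS")
print(sorted(disabled))  # ['APPARMOR', 'GNUTLS']
```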
May 16 00:19:54.183318 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 00:19:54.183330 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 00:19:54.183347 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 00:19:54.183359 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 00:19:54.183372 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 00:19:54.183386 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 00:19:54.183398 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 00:19:54.183411 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 00:19:54.183423 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 00:19:54.183435 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 00:19:54.183447 systemd[1]: Created slice user.slice - User and Session Slice. May 16 00:19:54.183459 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:19:54.183471 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:19:54.183484 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 00:19:54.183498 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 00:19:54.183510 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 00:19:54.183523 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:19:54.183535 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 00:19:54.183548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:19:54.183560 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 00:19:54.183572 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 00:19:54.183584 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 00:19:54.183599 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 00:19:54.183611 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:19:54.183624 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:19:54.183636 systemd[1]: Reached target slices.target - Slice Units. May 16 00:19:54.183648 systemd[1]: Reached target swap.target - Swaps. May 16 00:19:54.183660 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 00:19:54.183671 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 00:19:54.184534 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:19:54.184557 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:19:54.184573 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:19:54.184585 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 00:19:54.184597 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
May 16 00:19:54.184609 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 00:19:54.184621 systemd[1]: Mounting media.mount - External Media Directory... May 16 00:19:54.184636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:54.184652 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 00:19:54.184666 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 00:19:54.184682 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 00:19:54.184707 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:19:54.184720 systemd[1]: Reached target machines.target - Containers. May 16 00:19:54.184732 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 00:19:54.184744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:19:54.184757 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:19:54.184769 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 00:19:54.184781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:19:54.184793 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:19:54.184808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:19:54.184821 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 00:19:54.184834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:19:54.184847 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:19:54.184859 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 00:19:54.184871 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 00:19:54.184883 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 00:19:54.184895 systemd[1]: Stopped systemd-fsck-usr.service. May 16 00:19:54.184923 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:19:54.184939 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:19:54.184951 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 00:19:54.184982 systemd-journald[1129]: Collecting audit messages is disabled. May 16 00:19:54.185009 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 00:19:54.185024 systemd-journald[1129]: Journal started May 16 00:19:54.185046 systemd-journald[1129]: Runtime Journal (/run/log/journal/486a00b44de04558ac63e481bd8098c2) is 6.0M, max 48.3M, 42.2M free. May 16 00:19:53.952402 systemd[1]: Queued start job for default target multi-user.target. May 16 00:19:53.973166 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 16 00:19:53.973718 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 16 00:19:54.199727 kernel: loop: module loaded May 16 00:19:54.202726 kernel: fuse: init (API version 7.39) May 16 00:19:54.202831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:19:54.205161 systemd[1]: verity-setup.service: Deactivated successfully. May 16 00:19:54.205215 systemd[1]: Stopped verity-setup.service. May 16 00:19:54.208309 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:54.214437 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:19:54.215879 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 00:19:54.217449 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 00:19:54.218863 systemd[1]: Mounted media.mount - External Media Directory. May 16 00:19:54.220127 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 00:19:54.221525 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 00:19:54.224908 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 00:19:54.226603 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:19:54.228589 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:19:54.228997 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 00:19:54.230929 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:19:54.231254 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:19:54.233321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:19:54.233704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:19:54.235776 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:19:54.236027 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 00:19:54.237959 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:19:54.238311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:19:54.240401 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 00:19:54.242320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:19:54.244123 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 00:19:54.246172 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 00:19:54.266584 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 00:19:54.266787 kernel: ACPI: bus type drm_connector registered May 16 00:19:54.275836 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 00:19:54.278576 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 00:19:54.279869 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:19:54.279921 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:19:54.282375 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 16 00:19:54.284995 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
May 16 00:19:54.296075 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 16 00:19:54.297832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:19:54.302322 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 16 00:19:54.305179 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 16 00:19:54.306822 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:19:54.311894 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 16 00:19:54.313467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:19:54.317055 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:19:54.321144 systemd-journald[1129]: Time spent on flushing to /var/log/journal/486a00b44de04558ac63e481bd8098c2 is 29.505ms for 1040 entries. May 16 00:19:54.321144 systemd-journald[1129]: System Journal (/var/log/journal/486a00b44de04558ac63e481bd8098c2) is 8.0M, max 195.6M, 187.6M free. May 16 00:19:54.409954 systemd-journald[1129]: Received client request to flush runtime journal. May 16 00:19:54.323924 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 16 00:19:54.382515 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 16 00:19:54.386212 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:19:54.386435 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:19:54.388177 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:19:54.389883 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 16 00:19:54.391455 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 16 00:19:54.393232 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 00:19:54.395346 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 16 00:19:54.426150 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 16 00:19:54.431198 kernel: loop0: detected capacity change from 0 to 140992 May 16 00:19:54.431617 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 16 00:19:54.443125 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 16 00:19:54.448857 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 16 00:19:54.451831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:19:54.487924 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 16 00:19:54.517798 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 00:19:54.520725 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 16 00:19:54.579167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:19:54.594720 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
May 16 00:19:54.597292 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 16 00:19:54.598757 kernel: loop1: detected capacity change from 0 to 224512 May 16 00:19:54.618186 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. May 16 00:19:54.618736 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. May 16 00:19:54.627048 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:19:54.649738 kernel: loop2: detected capacity change from 0 to 138184 May 16 00:19:54.693748 kernel: loop3: detected capacity change from 0 to 140992 May 16 00:19:54.755785 kernel: loop4: detected capacity change from 0 to 224512 May 16 00:19:54.774729 kernel: loop5: detected capacity change from 0 to 138184 May 16 00:19:54.825927 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 16 00:19:54.826792 (sd-merge)[1205]: Merged extensions into '/usr'. May 16 00:19:54.846589 systemd[1]: Reloading requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)... May 16 00:19:54.846612 systemd[1]: Reloading... May 16 00:19:54.902752 zram_generator::config[1230]: No configuration found. May 16 00:19:54.958726 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 00:19:55.072219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:19:55.136554 systemd[1]: Reloading finished in 289 ms. May 16 00:19:55.181742 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 00:19:55.183583 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 16 00:19:55.197101 systemd[1]: Starting ensure-sysext.service... May 16 00:19:55.200172 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:19:55.213206 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... May 16 00:19:55.213231 systemd[1]: Reloading... May 16 00:19:55.241761 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 00:19:55.242164 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 00:19:55.243272 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 00:19:55.243834 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. May 16 00:19:55.244020 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. May 16 00:19:55.247671 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. May 16 00:19:55.247828 systemd-tmpfiles[1269]: Skipping /boot May 16 00:19:55.271618 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. May 16 00:19:55.271639 systemd-tmpfiles[1269]: Skipping /boot May 16 00:19:55.314714 zram_generator::config[1296]: No configuration found. May 16 00:19:55.429221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:19:55.480263 systemd[1]: Reloading finished in 266 ms. 
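For the first reload above (the systemd-sysext one, reported as "Reloading finished in 289 ms"), the journal timestamps bracketing it give roughly the same figure. A quick sketch of that cross-check using the timestamp format these lines use; journal stamps only mark when each message was written, so treat the result as an approximation:

```python
from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"

def elapsed_ms(start: str, end: str) -> float:
    """Milliseconds between two same-day journal timestamps like 'May 16 00:19:54.846612'."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() * 1000.0

# Timestamps of "Reloading..." and "Reloading finished in 289 ms." above:
print(elapsed_ms("May 16 00:19:54.846612", "May 16 00:19:55.136554"))  # ~289.9
```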
May 16 00:19:55.540944 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 00:19:55.572633 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:19:55.583761 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:19:55.587011 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 00:19:55.590092 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 00:19:55.596898 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:19:55.603056 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:19:55.607180 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 00:19:55.611622 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:55.611894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:19:55.614146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:19:55.619948 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:19:55.624039 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:19:55.626178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:19:55.629967 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 00:19:55.631273 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:55.632817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:19:55.633095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:19:55.635875 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 00:19:55.637944 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:19:55.638132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:19:55.640103 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:19:55.640338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:19:55.651321 systemd-udevd[1340]: Using default interface naming scheme 'v255'. May 16 00:19:55.653625 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:55.653952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:19:55.663677 augenrules[1369]: No rules May 16 00:19:55.667497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:19:55.671247 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:19:55.674411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:19:55.675774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:19:55.678011 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 16 00:19:55.679469 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:55.681252 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:19:55.681548 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:19:55.683421 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 00:19:55.685616 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:19:55.685994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:19:55.688281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:19:55.688473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:19:55.698395 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 00:19:55.700446 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 00:19:55.702372 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:19:55.702607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:19:55.712866 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:19:55.715466 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 00:19:55.734997 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:55.747170 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:19:55.758920 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:19:55.762902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:19:55.766082 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:19:55.774838 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:19:55.777613 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:19:55.779999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:19:55.807327 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:19:55.817774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:19:55.817832 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:19:55.818789 systemd[1]: Finished ensure-sysext.service. May 16 00:19:55.819665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:19:55.819879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:19:55.820470 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:19:55.820644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:19:55.823981 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1388) May 16 00:19:55.824252 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 16 00:19:55.824483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:19:55.829987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:19:55.830510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:19:55.836362 augenrules[1403]: /sbin/augenrules: No change May 16 00:19:55.893170 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 16 00:19:55.898176 augenrules[1439]: No rules May 16 00:19:55.898559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:19:55.898723 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:19:55.913722 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 16 00:19:55.937535 systemd-resolved[1338]: Positive Trust Anchors: May 16 00:19:55.937563 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:19:55.937596 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:19:55.952237 systemd-resolved[1338]: Defaulting to hostname 'linux'. May 16 00:19:55.957340 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 00:19:55.959289 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:19:55.959573 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:19:55.972006 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:19:55.974664 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:19:55.982216 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 00:19:56.000117 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 00:19:56.013197 systemd-networkd[1419]: lo: Link UP May 16 00:19:56.013211 systemd-networkd[1419]: lo: Gained carrier May 16 00:19:56.016958 systemd-networkd[1419]: Enumeration completed May 16 00:19:56.017774 kernel: ACPI: button: Power Button [PWRF] May 16 00:19:56.017732 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:19:56.019413 systemd[1]: Reached target network.target - Network. May 16 00:19:56.021554 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:19:56.021569 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:19:56.023606 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 16 00:19:56.023652 systemd-networkd[1419]: eth0: Link UP May 16 00:19:56.023657 systemd-networkd[1419]: eth0: Gained carrier May 16 00:19:56.023670 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:19:56.026734 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 16 00:19:56.028870 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 00:19:56.038785 systemd-networkd[1419]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:19:56.047120 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 00:19:56.063419 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 16 00:19:56.066296 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 16 00:19:56.066533 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 16 00:19:56.066795 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 16 00:19:56.144344 systemd-timesyncd[1447]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 00:19:56.144404 systemd-timesyncd[1447]: Initial clock synchronization to Fri 2025-05-16 00:19:56.524593 UTC. May 16 00:19:56.146106 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 00:19:56.159653 systemd[1]: Reached target time-set.target - System Time Set. May 16 00:19:56.177095 kernel: mousedev: PS/2 mouse device common for all mice May 16 00:19:56.183629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:56.191258 kernel: kvm_amd: TSC scaling supported May 16 00:19:56.191351 kernel: kvm_amd: Nested Virtualization enabled May 16 00:19:56.191370 kernel: kvm_amd: Nested Paging enabled May 16 00:19:56.193043 kernel: kvm_amd: LBR virtualization supported May 16 00:19:56.193086 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 16 00:19:56.193103 kernel: kvm_amd: Virtual GIF supported May 16 00:19:56.219740 kernel: EDAC MC: Ver: 3.0.0 May 16 00:19:56.253217 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 16 00:19:56.271199 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 16 00:19:56.282286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:56.284897 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:19:56.325017 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 16 00:19:56.326860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:19:56.328157 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:19:56.329810 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 00:19:56.331290 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 00:19:56.333112 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 00:19:56.334500 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 00:19:56.335934 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
May 16 00:19:56.337562 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:19:56.337619 systemd[1]: Reached target paths.target - Path Units. May 16 00:19:56.338682 systemd[1]: Reached target timers.target - Timer Units. May 16 00:19:56.342809 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 00:19:56.346220 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 00:19:56.353533 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 00:19:56.358475 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 16 00:19:56.360457 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 00:19:56.361863 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:19:56.362958 systemd[1]: Reached target basic.target - Basic System. May 16 00:19:56.363460 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 00:19:56.363492 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 00:19:56.373956 systemd[1]: Starting containerd.service - containerd container runtime... May 16 00:19:56.377659 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 00:19:56.380324 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 00:19:56.382774 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:19:56.384128 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 00:19:56.385306 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 00:19:56.387896 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 00:19:56.393160 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 00:19:56.401076 jq[1475]: false May 16 00:19:56.409019 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 00:19:56.412817 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 00:19:56.416384 extend-filesystems[1476]: Found loop3 May 16 00:19:56.416384 extend-filesystems[1476]: Found loop4 May 16 00:19:56.416384 extend-filesystems[1476]: Found loop5 May 16 00:19:56.416384 extend-filesystems[1476]: Found sr0 May 16 00:19:56.416384 extend-filesystems[1476]: Found vda May 16 00:19:56.416384 extend-filesystems[1476]: Found vda1 May 16 00:19:56.416384 extend-filesystems[1476]: Found vda2 May 16 00:19:56.416384 extend-filesystems[1476]: Found vda3 May 16 00:19:56.416384 extend-filesystems[1476]: Found usr May 16 00:19:56.416384 extend-filesystems[1476]: Found vda4 May 16 00:19:56.416384 extend-filesystems[1476]: Found vda6 May 16 00:19:56.416384 extend-filesystems[1476]: Found vda7 May 16 00:19:56.416384 extend-filesystems[1476]: Found vda9 May 16 00:19:56.416384 extend-filesystems[1476]: Checking size of /dev/vda9 May 16 00:19:56.432738 dbus-daemon[1474]: [system] SELinux support is enabled May 16 00:19:56.425187 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 00:19:56.429377 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 16 00:19:56.430087 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:19:56.433046 systemd[1]: Starting update-engine.service - Update Engine... May 16 00:19:56.435488 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 00:19:56.437920 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 00:19:56.449107 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 16 00:19:56.452123 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:19:56.452441 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 00:19:56.452926 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:19:56.453760 jq[1492]: true May 16 00:19:56.453213 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 00:19:56.454601 extend-filesystems[1476]: Resized partition /dev/vda9 May 16 00:19:56.459550 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:19:56.459901 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 00:19:56.465726 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1388) May 16 00:19:56.471265 extend-filesystems[1498]: resize2fs 1.47.1 (20-May-2024) May 16 00:19:56.482574 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 00:19:56.491121 jq[1500]: true May 16 00:19:56.516078 update_engine[1491]: I20250516 00:19:56.514065 1491 main.cc:92] Flatcar Update Engine starting May 16 00:19:56.519947 update_engine[1491]: I20250516 00:19:56.517938 1491 update_check_scheduler.cc:74] Next update check in 5m39s May 16 00:19:56.520051 tar[1499]: linux-amd64/LICENSE May 16 00:19:56.520051 tar[1499]: linux-amd64/helm May 16 00:19:56.522320 systemd[1]: Started update-engine.service - Update Engine. May 16 00:19:56.524100 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:19:56.524142 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 00:19:56.526320 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:19:56.526349 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 00:19:56.530221 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 00:19:56.533118 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 00:19:56.535941 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 16 00:19:56.590142 systemd-logind[1486]: Watching system buttons on /dev/input/event1 (Power Button) May 16 00:19:56.590168 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 00:19:56.594741 extend-filesystems[1498]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 00:19:56.594741 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 00:19:56.594741 extend-filesystems[1498]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 00:19:56.606653 extend-filesystems[1476]: Resized filesystem in /dev/vda9 May 16 00:19:56.595343 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:19:56.595861 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 00:19:56.596191 systemd-logind[1486]: New seat seat0. May 16 00:19:56.607908 systemd[1]: Started systemd-logind.service - User Login Management. May 16 00:19:56.614955 bash[1528]: Updated "/home/core/.ssh/authorized_keys" May 16 00:19:56.617374 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 00:19:56.620214 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 16 00:19:56.627594 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:19:56.830197 sshd_keygen[1496]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:19:56.872665 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 00:19:56.896234 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 00:19:56.915829 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:19:56.916137 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 00:19:56.956292 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 00:19:57.015445 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 00:19:57.062436 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 00:19:57.071925 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 00:19:57.077031 systemd[1]: Reached target getty.target - Login Prompts. May 16 00:19:57.116531 containerd[1507]: time="2025-05-16T00:19:57.116163399Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 16 00:19:57.158042 containerd[1507]: time="2025-05-16T00:19:57.157936645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:19:57.165031 containerd[1507]: time="2025-05-16T00:19:57.164953634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:19:57.165031 containerd[1507]: time="2025-05-16T00:19:57.165007611Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:19:57.165031 containerd[1507]: time="2025-05-16T00:19:57.165028600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 00:19:57.165357 containerd[1507]: time="2025-05-16T00:19:57.165264657Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 May 16 00:19:57.165357 containerd[1507]: time="2025-05-16T00:19:57.165301495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 16 00:19:57.165422 containerd[1507]: time="2025-05-16T00:19:57.165385760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:19:57.165422 containerd[1507]: time="2025-05-16T00:19:57.165408828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:19:57.165730 containerd[1507]: time="2025-05-16T00:19:57.165690937Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:19:57.165730 containerd[1507]: time="2025-05-16T00:19:57.165719526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 16 00:19:57.165808 containerd[1507]: time="2025-05-16T00:19:57.165769724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:19:57.165808 containerd[1507]: time="2025-05-16T00:19:57.165784680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:19:57.165978 containerd[1507]: time="2025-05-16T00:19:57.165947868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 16 00:19:57.166339 containerd[1507]: time="2025-05-16T00:19:57.166286284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:19:57.166526 containerd[1507]: time="2025-05-16T00:19:57.166490036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:19:57.166526 containerd[1507]: time="2025-05-16T00:19:57.166516883Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 16 00:19:57.166675 containerd[1507]: time="2025-05-16T00:19:57.166644105Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 00:19:57.166758 containerd[1507]: time="2025-05-16T00:19:57.166720320Z" level=info msg="metadata content store policy set" policy=shared May 16 00:19:57.295020 tar[1499]: linux-amd64/README.md May 16 00:19:57.315904 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 00:19:57.503126 containerd[1507]: time="2025-05-16T00:19:57.502903247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:19:57.503126 containerd[1507]: time="2025-05-16T00:19:57.503013970Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:19:57.503126 containerd[1507]: time="2025-05-16T00:19:57.503054765Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 May 16 00:19:57.503126 containerd[1507]: time="2025-05-16T00:19:57.503079901Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 16 00:19:57.503126 containerd[1507]: time="2025-05-16T00:19:57.503098309Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:19:57.503461 containerd[1507]: time="2025-05-16T00:19:57.503416112Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:19:57.503855 containerd[1507]: time="2025-05-16T00:19:57.503820447Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:19:57.504003 containerd[1507]: time="2025-05-16T00:19:57.503969961Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 16 00:19:57.504003 containerd[1507]: time="2025-05-16T00:19:57.503997122Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 16 00:19:57.504043 containerd[1507]: time="2025-05-16T00:19:57.504016297Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 16 00:19:57.504043 containerd[1507]: time="2025-05-16T00:19:57.504033267Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:19:57.504081 containerd[1507]: time="2025-05-16T00:19:57.504049618Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:19:57.504081 containerd[1507]: time="2025-05-16T00:19:57.504067345Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 16 00:19:57.504133 containerd[1507]: time="2025-05-16T00:19:57.504092313Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:19:57.504133 containerd[1507]: time="2025-05-16T00:19:57.504111928Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:19:57.504173 containerd[1507]: time="2025-05-16T00:19:57.504130946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:19:57.504173 containerd[1507]: time="2025-05-16T00:19:57.504149785Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:19:57.504173 containerd[1507]: time="2025-05-16T00:19:57.504165306Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:19:57.504225 containerd[1507]: time="2025-05-16T00:19:57.504190663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504225 containerd[1507]: time="2025-05-16T00:19:57.504209879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504274 containerd[1507]: time="2025-05-16T00:19:57.504225559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504274 containerd[1507]: time="2025-05-16T00:19:57.504243044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 16 00:19:57.504274 containerd[1507]: time="2025-05-16T00:19:57.504258094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504333 containerd[1507]: time="2025-05-16T00:19:57.504274603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504333 containerd[1507]: time="2025-05-16T00:19:57.504290272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504333 containerd[1507]: time="2025-05-16T00:19:57.504306099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504333 containerd[1507]: time="2025-05-16T00:19:57.504322335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504403 containerd[1507]: time="2025-05-16T00:19:57.504341237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504403 containerd[1507]: time="2025-05-16T00:19:57.504357115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504403 containerd[1507]: time="2025-05-16T00:19:57.504372523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504403 containerd[1507]: time="2025-05-16T00:19:57.504387530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504486 containerd[1507]: time="2025-05-16T00:19:57.504405361Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 16 00:19:57.504486 containerd[1507]: time="2025-05-16T00:19:57.504429784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504486 containerd[1507]: time="2025-05-16T00:19:57.504447205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504486 containerd[1507]: time="2025-05-16T00:19:57.504462329Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:19:57.504556 containerd[1507]: time="2025-05-16T00:19:57.504521448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:19:57.504556 containerd[1507]: time="2025-05-16T00:19:57.504546164Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 16 00:19:57.504612 containerd[1507]: time="2025-05-16T00:19:57.504561015Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:19:57.504612 containerd[1507]: time="2025-05-16T00:19:57.504577817Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 16 00:19:57.504612 containerd[1507]: time="2025-05-16T00:19:57.504593781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:19:57.504693 containerd[1507]: time="2025-05-16T00:19:57.504615967Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 16 00:19:57.504693 containerd[1507]: time="2025-05-16T00:19:57.504631027Z" level=info msg="NRI interface is disabled by configuration." May 16 00:19:57.504693 containerd[1507]: time="2025-05-16T00:19:57.504644672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 16 00:19:57.505124 containerd[1507]: time="2025-05-16T00:19:57.505039603Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:19:57.505124 containerd[1507]: time="2025-05-16T00:19:57.505110801Z" level=info msg="Connect containerd service" May 16 00:19:57.505314 containerd[1507]: time="2025-05-16T00:19:57.505168987Z" level=info msg="using legacy CRI server" May 16 00:19:57.505314 containerd[1507]: time="2025-05-16T00:19:57.505180343Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 00:19:57.505353 containerd[1507]: time="2025-05-16T00:19:57.505324629Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:19:57.508849 
containerd[1507]: time="2025-05-16T00:19:57.508768899Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:19:57.509096 containerd[1507]: time="2025-05-16T00:19:57.508965126Z" level=info msg="Start subscribing containerd event" May 16 00:19:57.509181 containerd[1507]: time="2025-05-16T00:19:57.509125797Z" level=info msg="Start recovering state" May 16 00:19:57.509318 containerd[1507]: time="2025-05-16T00:19:57.509255989Z" level=info msg="Start event monitor" May 16 00:19:57.509318 containerd[1507]: time="2025-05-16T00:19:57.509291704Z" level=info msg="Start snapshots syncer" May 16 00:19:57.509318 containerd[1507]: time="2025-05-16T00:19:57.509311245Z" level=info msg="Start cni network conf syncer for default" May 16 00:19:57.509452 containerd[1507]: time="2025-05-16T00:19:57.509324311Z" level=info msg="Start streaming server" May 16 00:19:57.509452 containerd[1507]: time="2025-05-16T00:19:57.509365516Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:19:57.509452 containerd[1507]: time="2025-05-16T00:19:57.509442130Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:19:57.509688 systemd[1]: Started containerd.service - containerd container runtime. May 16 00:19:57.510168 containerd[1507]: time="2025-05-16T00:19:57.510114521Z" level=info msg="containerd successfully booted in 0.396954s" May 16 00:19:57.868856 systemd-networkd[1419]: eth0: Gained IPv6LL May 16 00:19:57.873163 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 00:19:57.876041 systemd[1]: Reached target network-online.target - Network is Online. May 16 00:19:57.886129 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 00:19:57.889718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:19:57.892711 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 00:19:57.918530 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 00:19:57.922604 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:42794.service - OpenSSH per-connection server daemon (10.0.0.1:42794). May 16 00:19:57.927322 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 00:19:57.927831 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 16 00:19:57.930167 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 00:19:57.935320 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 00:19:57.984588 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 42794 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:19:57.986900 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:19:58.001619 systemd-logind[1486]: New session 1 of user core. May 16 00:19:58.003452 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 00:19:58.049236 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 00:19:58.099337 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 00:19:58.134302 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 16 00:19:58.169383 (systemd)[1587]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:19:58.337879 systemd[1587]: Queued start job for default target default.target. May 16 00:19:58.354429 systemd[1587]: Created slice app.slice - User Application Slice. May 16 00:19:58.354462 systemd[1587]: Reached target paths.target - Paths. May 16 00:19:58.354479 systemd[1587]: Reached target timers.target - Timers. May 16 00:19:58.356301 systemd[1587]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 00:19:58.377570 systemd[1587]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 00:19:58.377749 systemd[1587]: Reached target sockets.target - Sockets. May 16 00:19:58.377777 systemd[1587]: Reached target basic.target - Basic System. May 16 00:19:58.377843 systemd[1587]: Reached target default.target - Main User Target. May 16 00:19:58.377884 systemd[1587]: Startup finished in 189ms. May 16 00:19:58.378171 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 00:19:58.411991 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 00:19:58.503219 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:42810.service - OpenSSH per-connection server daemon (10.0.0.1:42810). May 16 00:19:58.558315 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 42810 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:19:58.560496 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:19:58.565632 systemd-logind[1486]: New session 2 of user core. May 16 00:19:58.574897 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 00:19:58.657981 sshd[1600]: Connection closed by 10.0.0.1 port 42810 May 16 00:19:58.658859 sshd-session[1598]: pam_unix(sshd:session): session closed for user core May 16 00:19:58.675216 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:42810.service: Deactivated successfully. May 16 00:19:58.677213 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:19:58.679493 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. May 16 00:19:58.680352 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:42824.service - OpenSSH per-connection server daemon (10.0.0.1:42824). May 16 00:19:58.700539 systemd-logind[1486]: Removed session 2. May 16 00:19:58.738605 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 42824 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:19:58.740615 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:19:58.745350 systemd-logind[1486]: New session 3 of user core. May 16 00:19:58.755937 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 00:19:58.849273 sshd[1607]: Connection closed by 10.0.0.1 port 42824 May 16 00:19:58.849829 sshd-session[1605]: pam_unix(sshd:session): session closed for user core May 16 00:19:58.863885 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:42824.service: Deactivated successfully. May 16 00:19:58.867673 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:19:58.868518 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. May 16 00:19:58.869672 systemd-logind[1486]: Removed session 3. May 16 00:19:59.421077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:19:59.425422 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 16 00:19:59.426867 systemd[1]: Startup finished in 889ms (kernel) + 8.489s (initrd) + 6.203s (userspace) = 15.582s. May 16 00:19:59.432159 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:19:59.871079 kubelet[1616]: E0516 00:19:59.870902 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:19:59.875984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:19:59.876202 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:19:59.876597 systemd[1]: kubelet.service: Consumed 1.647s CPU time. May 16 00:20:09.078622 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:46950.service - OpenSSH per-connection server daemon (10.0.0.1:46950). May 16 00:20:09.118017 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 46950 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.119539 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.123656 systemd-logind[1486]: New session 4 of user core. May 16 00:20:09.138860 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 00:20:09.195986 sshd[1631]: Connection closed by 10.0.0.1 port 46950 May 16 00:20:09.196399 sshd-session[1629]: pam_unix(sshd:session): session closed for user core May 16 00:20:09.208976 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:46950.service: Deactivated successfully. May 16 00:20:09.211043 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:20:09.212576 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. May 16 00:20:09.223025 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:46966.service - OpenSSH per-connection server daemon (10.0.0.1:46966). May 16 00:20:09.224199 systemd-logind[1486]: Removed session 4. May 16 00:20:09.259318 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 46966 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.260646 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.265074 systemd-logind[1486]: New session 5 of user core. May 16 00:20:09.275943 systemd[1]: Started session-5.scope - Session 5 of User core. May 16 00:20:09.326791 sshd[1638]: Connection closed by 10.0.0.1 port 46966 May 16 00:20:09.327337 sshd-session[1636]: pam_unix(sshd:session): session closed for user core May 16 00:20:09.334614 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:46966.service: Deactivated successfully. May 16 00:20:09.336618 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:20:09.338409 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. May 16 00:20:09.339675 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:46972.service - OpenSSH per-connection server daemon (10.0.0.1:46972). May 16 00:20:09.340659 systemd-logind[1486]: Removed session 5. 
May 16 00:20:09.390487 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 46972 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.391925 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.396335 systemd-logind[1486]: New session 6 of user core. May 16 00:20:09.405822 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 00:20:09.460200 sshd[1645]: Connection closed by 10.0.0.1 port 46972 May 16 00:20:09.460512 sshd-session[1643]: pam_unix(sshd:session): session closed for user core May 16 00:20:09.479659 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:46972.service: Deactivated successfully. May 16 00:20:09.482221 systemd[1]: session-6.scope: Deactivated successfully. May 16 00:20:09.484308 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. May 16 00:20:09.496070 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:46980.service - OpenSSH per-connection server daemon (10.0.0.1:46980). May 16 00:20:09.497195 systemd-logind[1486]: Removed session 6. May 16 00:20:09.532238 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 46980 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.534174 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.538554 systemd-logind[1486]: New session 7 of user core. May 16 00:20:09.551850 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 00:20:09.615496 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 00:20:09.615883 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:20:09.633978 sudo[1653]: pam_unix(sudo:session): session closed for user root May 16 00:20:09.635868 sshd[1652]: Connection closed by 10.0.0.1 port 46980 May 16 00:20:09.636261 sshd-session[1650]: pam_unix(sshd:session): session closed for user core May 16 00:20:09.661443 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:46980.service: Deactivated successfully. May 16 00:20:09.663865 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:20:09.665501 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. May 16 00:20:09.679235 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:46982.service - OpenSSH per-connection server daemon (10.0.0.1:46982). May 16 00:20:09.680347 systemd-logind[1486]: Removed session 7. May 16 00:20:09.713387 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 46982 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.714881 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.719584 systemd-logind[1486]: New session 8 of user core. May 16 00:20:09.732882 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 16 00:20:09.790385 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 00:20:09.790813 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:20:09.794557 sudo[1662]: pam_unix(sudo:session): session closed for user root May 16 00:20:09.801343 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 00:20:09.801688 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:20:09.823112 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:20:09.855464 augenrules[1684]: No rules May 16 00:20:09.857880 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:20:09.858186 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:20:09.859470 sudo[1661]: pam_unix(sudo:session): session closed for user root May 16 00:20:09.861046 sshd[1660]: Connection closed by 10.0.0.1 port 46982 May 16 00:20:09.861413 sshd-session[1658]: pam_unix(sshd:session): session closed for user core May 16 00:20:09.894480 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:46982.service: Deactivated successfully. May 16 00:20:09.897173 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:20:09.897922 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. May 16 00:20:09.898385 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 00:20:09.901432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:20:09.902749 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:46984.service - OpenSSH per-connection server daemon (10.0.0.1:46984). May 16 00:20:09.903354 systemd-logind[1486]: Removed session 8. May 16 00:20:09.949990 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 46984 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.952111 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.957055 systemd-logind[1486]: New session 9 of user core. May 16 00:20:09.969048 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 00:20:10.029374 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:20:10.029905 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:20:10.103192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:20:10.109717 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:20:10.172457 kubelet[1714]: E0516 00:20:10.172126 1714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:20:10.179869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:20:10.180142 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:20:10.602168 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 16 00:20:10.602310 (dockerd)[1733]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 00:20:11.226815 dockerd[1733]: time="2025-05-16T00:20:11.226674934Z" level=info msg="Starting up" May 16 00:20:12.441821 dockerd[1733]: time="2025-05-16T00:20:12.441737906Z" level=info msg="Loading containers: start." May 16 00:20:12.873731 kernel: Initializing XFRM netlink socket May 16 00:20:12.987149 systemd-networkd[1419]: docker0: Link UP May 16 00:20:13.184418 dockerd[1733]: time="2025-05-16T00:20:13.184353095Z" level=info msg="Loading containers: done." May 16 00:20:13.229213 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1761914577-merged.mount: Deactivated successfully. May 16 00:20:13.234473 dockerd[1733]: time="2025-05-16T00:20:13.234407363Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 00:20:13.234622 dockerd[1733]: time="2025-05-16T00:20:13.234577722Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 16 00:20:13.234818 dockerd[1733]: time="2025-05-16T00:20:13.234779223Z" level=info msg="Daemon has completed initialization" May 16 00:20:13.521613 dockerd[1733]: time="2025-05-16T00:20:13.521534211Z" level=info msg="API listen on /run/docker.sock" May 16 00:20:13.521901 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 00:20:14.854545 containerd[1507]: time="2025-05-16T00:20:14.854494998Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 16 00:20:17.304874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2627290186.mount: Deactivated successfully. May 16 00:20:20.430339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 00:20:20.439917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:20:20.607832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:20:20.612569 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:20:20.928507 kubelet[1957]: E0516 00:20:20.928279 1957 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:20:20.933568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:20:20.933844 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 16 00:20:24.058452 containerd[1507]: time="2025-05-16T00:20:24.058386570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:24.128859 containerd[1507]: time="2025-05-16T00:20:24.128781110Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 16 00:20:24.163940 containerd[1507]: time="2025-05-16T00:20:24.163863012Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:24.196060 containerd[1507]: time="2025-05-16T00:20:24.196014675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:24.197224 containerd[1507]: time="2025-05-16T00:20:24.197201379Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 9.342663981s" May 16 00:20:24.197298 containerd[1507]: time="2025-05-16T00:20:24.197229037Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 16 00:20:24.197956 containerd[1507]: time="2025-05-16T00:20:24.197916882Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 16 00:20:28.489521 containerd[1507]: time="2025-05-16T00:20:28.489438014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:28.505201 containerd[1507]: time="2025-05-16T00:20:28.505093543Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 16 00:20:28.528571 containerd[1507]: time="2025-05-16T00:20:28.528498005Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:28.550209 containerd[1507]: time="2025-05-16T00:20:28.550117825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:28.551404 containerd[1507]: time="2025-05-16T00:20:28.551348243Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 4.353396785s" May 16 00:20:28.551404 containerd[1507]: time="2025-05-16T00:20:28.551391270Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 16 00:20:28.552096 
containerd[1507]: time="2025-05-16T00:20:28.552068154Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 16 00:20:30.937822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 16 00:20:30.946946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:20:31.190928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:20:31.196512 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:20:32.128089 kubelet[2014]: E0516 00:20:32.128002 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:20:32.132582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:20:32.132831 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:20:32.939125 containerd[1507]: time="2025-05-16T00:20:32.939032050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:33.049683 containerd[1507]: time="2025-05-16T00:20:33.049591960Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 16 00:20:33.095416 containerd[1507]: time="2025-05-16T00:20:33.095335170Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:33.145482 containerd[1507]: time="2025-05-16T00:20:33.145425527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:33.146730 containerd[1507]: time="2025-05-16T00:20:33.146677241Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 4.594576415s" May 16 00:20:33.146730 containerd[1507]: time="2025-05-16T00:20:33.146724420Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 16 00:20:33.147241 containerd[1507]: time="2025-05-16T00:20:33.147205654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 00:20:35.336936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484473468.mount: Deactivated successfully. 
May 16 00:20:38.023059 containerd[1507]: time="2025-05-16T00:20:38.022981492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:38.074820 containerd[1507]: time="2025-05-16T00:20:38.074732933Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 16 00:20:38.119314 containerd[1507]: time="2025-05-16T00:20:38.119227003Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:38.163206 containerd[1507]: time="2025-05-16T00:20:38.163148097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:38.164319 containerd[1507]: time="2025-05-16T00:20:38.164269740Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 5.017029634s" May 16 00:20:38.164319 containerd[1507]: time="2025-05-16T00:20:38.164307547Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 16 00:20:38.164995 containerd[1507]: time="2025-05-16T00:20:38.164963966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 00:20:39.733867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274439198.mount: Deactivated successfully. May 16 00:20:41.874882 update_engine[1491]: I20250516 00:20:41.874668 1491 update_attempter.cc:509] Updating boot flags... May 16 00:20:42.187792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 16 00:20:42.201916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:20:44.418728 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2055) May 16 00:20:44.948025 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2054) May 16 00:20:45.036543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:20:45.041333 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:20:45.293266 kubelet[2068]: E0516 00:20:45.293109 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:20:45.297124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:20:45.297375 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 16 00:20:45.692729 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2054) May 16 00:20:48.179121 containerd[1507]: time="2025-05-16T00:20:48.179045577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:48.216245 containerd[1507]: time="2025-05-16T00:20:48.216136474Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 16 00:20:48.249188 containerd[1507]: time="2025-05-16T00:20:48.249095334Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:48.281021 containerd[1507]: time="2025-05-16T00:20:48.280931661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:48.282582 containerd[1507]: time="2025-05-16T00:20:48.282522484Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 10.117526811s" May 16 00:20:48.282648 containerd[1507]: time="2025-05-16T00:20:48.282582791Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 00:20:48.283206 containerd[1507]: time="2025-05-16T00:20:48.283178142Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:20:49.568559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3816133576.mount: Deactivated successfully. 
May 16 00:20:49.851314 containerd[1507]: time="2025-05-16T00:20:49.851091719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:49.861325 containerd[1507]: time="2025-05-16T00:20:49.861244247Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 16 00:20:49.878102 containerd[1507]: time="2025-05-16T00:20:49.878036319Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:49.890099 containerd[1507]: time="2025-05-16T00:20:49.890021540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:49.890784 containerd[1507]: time="2025-05-16T00:20:49.890739167Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.607525915s" May 16 00:20:49.890784 containerd[1507]: time="2025-05-16T00:20:49.890770508Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 00:20:49.891266 containerd[1507]: time="2025-05-16T00:20:49.891243890Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 16 00:20:52.914648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168619490.mount: Deactivated successfully. May 16 00:20:55.437611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 16 00:20:55.450100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:20:55.663240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:20:55.682102 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:20:55.737411 kubelet[2162]: E0516 00:20:55.737193 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:20:55.741883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:20:55.742157 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 16 00:20:58.978896 containerd[1507]: time="2025-05-16T00:20:58.978837337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:59.035177 containerd[1507]: time="2025-05-16T00:20:59.035067333Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 16 00:20:59.100858 containerd[1507]: time="2025-05-16T00:20:59.100808324Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:59.206588 containerd[1507]: time="2025-05-16T00:20:59.206505890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:20:59.208017 containerd[1507]: time="2025-05-16T00:20:59.207989634Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 9.316714724s" May 16 00:20:59.208083 containerd[1507]: time="2025-05-16T00:20:59.208023125Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 16 00:21:01.621840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:01.632969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:01.658810 systemd[1]: Reloading requested from client PID 2222 ('systemctl') (unit session-9.scope)... May 16 00:21:01.658827 systemd[1]: Reloading... May 16 00:21:01.761718 zram_generator::config[2267]: No configuration found. May 16 00:21:02.543669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:21:02.627402 systemd[1]: Reloading finished in 968 ms. May 16 00:21:02.682523 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 00:21:02.682620 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 00:21:02.682938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:02.693159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:03.039654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:03.045752 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:21:03.095485 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:21:03.095485 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 16 00:21:03.095485 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:21:03.095927 kubelet[2309]: I0516 00:21:03.095536 2309 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:21:03.392214 kubelet[2309]: I0516 00:21:03.392095 2309 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:21:03.392214 kubelet[2309]: I0516 00:21:03.392130 2309 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:21:03.392457 kubelet[2309]: I0516 00:21:03.392435 2309 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:21:03.443950 kubelet[2309]: I0516 00:21:03.443864 2309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:21:03.449132 kubelet[2309]: E0516 00:21:03.449076 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:03.462212 kubelet[2309]: E0516 00:21:03.462134 2309 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:21:03.462212 kubelet[2309]: I0516 00:21:03.462200 2309 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:21:03.467665 kubelet[2309]: I0516 00:21:03.467634 2309 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:21:03.470639 kubelet[2309]: I0516 00:21:03.470587 2309 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:21:03.470817 kubelet[2309]: I0516 00:21:03.470624 2309 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:21:03.470817 kubelet[2309]: I0516 00:21:03.470814 2309 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:21:03.471084 kubelet[2309]: I0516 00:21:03.470824 2309 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:21:03.471117 kubelet[2309]: I0516 00:21:03.471090 2309 state_mem.go:36] "Initialized new in-memory state store" May 16 00:21:03.479874 kubelet[2309]: I0516 00:21:03.479830 2309 kubelet.go:446] "Attempting to sync node with API server" May 16 00:21:03.504232 kubelet[2309]: I0516 00:21:03.504168 2309 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:21:03.504232 kubelet[2309]: I0516 00:21:03.504236 2309 kubelet.go:352] "Adding apiserver pod source" May 16 00:21:03.504408 kubelet[2309]: I0516 00:21:03.504255 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:21:03.505233 kubelet[2309]: W0516 00:21:03.505166 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:03.505293 kubelet[2309]: E0516 00:21:03.505262 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:03.506362 kubelet[2309]: W0516 00:21:03.506302 2309 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:03.506362 kubelet[2309]: E0516 00:21:03.506348 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:03.507250 kubelet[2309]: I0516 00:21:03.507224 2309 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 00:21:03.507578 kubelet[2309]: I0516 00:21:03.507561 2309 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:21:03.510495 kubelet[2309]: W0516 00:21:03.510478 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 00:21:03.516710 kubelet[2309]: I0516 00:21:03.516665 2309 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:21:03.516767 kubelet[2309]: I0516 00:21:03.516732 2309 server.go:1287] "Started kubelet" May 16 00:21:03.517332 kubelet[2309]: I0516 00:21:03.516848 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:21:03.517332 kubelet[2309]: I0516 00:21:03.517243 2309 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:21:03.517332 kubelet[2309]: I0516 00:21:03.517301 2309 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:21:03.518195 kubelet[2309]: I0516 00:21:03.518151 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:21:03.518242 kubelet[2309]: I0516 00:21:03.518221 2309 server.go:479] "Adding debug handlers to kubelet server" May 16 00:21:03.519125 kubelet[2309]: I0516 00:21:03.519051 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:21:03.521316 kubelet[2309]: E0516 00:21:03.521261 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:03.521316 kubelet[2309]: I0516 00:21:03.521305 2309 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:21:03.521475 kubelet[2309]: I0516 00:21:03.521456 2309 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:21:03.521519 kubelet[2309]: I0516 00:21:03.521500 2309 reconciler.go:26] "Reconciler: start to sync state" May 16 00:21:03.522125 kubelet[2309]: W0516 00:21:03.521820 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:03.522125 kubelet[2309]: E0516 00:21:03.521858 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection 
refused" logger="UnhandledError" May 16 00:21:03.523745 kubelet[2309]: I0516 00:21:03.523561 2309 factory.go:221] Registration of the systemd container factory successfully May 16 00:21:03.523896 kubelet[2309]: I0516 00:21:03.523798 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:21:03.525082 kubelet[2309]: E0516 00:21:03.524855 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" May 16 00:21:03.525082 kubelet[2309]: E0516 00:21:03.524976 2309 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:21:03.525536 kubelet[2309]: I0516 00:21:03.525511 2309 factory.go:221] Registration of the containerd container factory successfully May 16 00:21:03.614819 kubelet[2309]: E0516 00:21:03.608180 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd9fd907692ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:21:03.516684972 +0000 UTC m=+0.466741747,LastTimestamp:2025-05-16 00:21:03.516684972 +0000 UTC m=+0.466741747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:21:03.622532 kubelet[2309]: E0516 00:21:03.621424 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:03.622532 kubelet[2309]: I0516 00:21:03.622334 2309 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:21:03.622532 kubelet[2309]: I0516 00:21:03.622358 2309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:21:03.622532 kubelet[2309]: I0516 00:21:03.622380 2309 state_mem.go:36] "Initialized new in-memory state store" May 16 00:21:03.623430 kubelet[2309]: I0516 00:21:03.623392 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:21:03.624910 kubelet[2309]: I0516 00:21:03.624874 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:21:03.624910 kubelet[2309]: I0516 00:21:03.624907 2309 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:21:03.726850 kubelet[2309]: I0516 00:21:03.624929 2309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 00:21:03.726850 kubelet[2309]: I0516 00:21:03.624938 2309 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:21:03.726850 kubelet[2309]: E0516 00:21:03.624983 2309 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:21:03.726850 kubelet[2309]: E0516 00:21:03.722246 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:03.726850 kubelet[2309]: E0516 00:21:03.725494 2309 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 00:21:03.726850 kubelet[2309]: E0516 00:21:03.725977 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" May 16 00:21:03.727436 kubelet[2309]: W0516 00:21:03.727265 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:03.727565 kubelet[2309]: E0516 00:21:03.727444 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:03.823045 kubelet[2309]: E0516 00:21:03.822988 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:03.923617 kubelet[2309]: E0516 00:21:03.923563 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:03.925895 kubelet[2309]: E0516 00:21:03.925854 2309 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 00:21:04.023968 kubelet[2309]: E0516 00:21:04.023845 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:04.124400 kubelet[2309]: E0516 00:21:04.124351 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:04.126972 kubelet[2309]: E0516 00:21:04.126919 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" May 16 00:21:04.225369 kubelet[2309]: E0516 00:21:04.225290 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:04.326095 kubelet[2309]: E0516 00:21:04.325923 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:04.326095 kubelet[2309]: E0516 00:21:04.325951 2309 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 00:21:04.426840 kubelet[2309]: E0516 00:21:04.426758 2309 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" May 16 00:21:04.526929 kubelet[2309]: E0516 00:21:04.526880 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:04.627821 kubelet[2309]: E0516 00:21:04.627636 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:04.636337 kubelet[2309]: W0516 00:21:04.636260 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:04.636470 kubelet[2309]: E0516 00:21:04.636338 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:04.728327 kubelet[2309]: E0516 00:21:04.728233 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:04.828972 kubelet[2309]: E0516 00:21:04.828922 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:04.858647 kubelet[2309]: W0516 00:21:04.858579 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:04.858647 kubelet[2309]: E0516 00:21:04.858650 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:04.927924 kubelet[2309]: E0516 00:21:04.927801 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" May 16 00:21:04.930107 kubelet[2309]: E0516 00:21:04.930058 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.008180 kubelet[2309]: W0516 00:21:05.008107 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:05.008180 kubelet[2309]: E0516 00:21:05.008181 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:05.031006 kubelet[2309]: E0516 00:21:05.030944 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.126636 kubelet[2309]: E0516 
00:21:05.126572 2309 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 00:21:05.131823 kubelet[2309]: E0516 00:21:05.131782 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.228838 kubelet[2309]: W0516 00:21:05.228787 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:05.228838 kubelet[2309]: E0516 00:21:05.228842 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:05.232221 kubelet[2309]: E0516 00:21:05.232189 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.332953 kubelet[2309]: E0516 00:21:05.332890 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.433804 kubelet[2309]: E0516 00:21:05.433747 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.484329 kubelet[2309]: E0516 00:21:05.484204 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:05.534825 kubelet[2309]: E0516 00:21:05.534759 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.635547 kubelet[2309]: E0516 00:21:05.635502 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.736181 kubelet[2309]: E0516 00:21:05.736039 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.811138 kubelet[2309]: I0516 00:21:05.811076 2309 policy_none.go:49] "None policy: Start" May 16 00:21:05.811138 kubelet[2309]: I0516 00:21:05.811132 2309 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:21:05.811138 kubelet[2309]: I0516 00:21:05.811151 2309 state_mem.go:35] "Initializing new in-memory state store" May 16 00:21:05.818762 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 00:21:05.836290 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 00:21:05.836476 kubelet[2309]: E0516 00:21:05.836431 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.839871 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 16 00:21:05.856197 kubelet[2309]: I0516 00:21:05.856139 2309 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:21:05.856490 kubelet[2309]: I0516 00:21:05.856448 2309 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:21:05.856611 kubelet[2309]: I0516 00:21:05.856467 2309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:21:05.857306 kubelet[2309]: I0516 00:21:05.856902 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:21:05.858131 kubelet[2309]: E0516 00:21:05.858106 2309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 00:21:05.858190 kubelet[2309]: E0516 00:21:05.858151 2309 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:21:05.959141 kubelet[2309]: I0516 00:21:05.959097 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:05.959678 kubelet[2309]: E0516 00:21:05.959612 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 16 00:21:06.161449 kubelet[2309]: I0516 00:21:06.161252 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:06.162213 kubelet[2309]: E0516 00:21:06.161860 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 16 00:21:06.528834 kubelet[2309]: E0516 00:21:06.528760 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="3.2s" May 16 00:21:06.563548 kubelet[2309]: I0516 00:21:06.563516 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:06.563990 kubelet[2309]: E0516 00:21:06.563944 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 16 00:21:06.738477 systemd[1]: Created slice kubepods-burstable-podaf2b8dd06322531559039d0386741490.slice - libcontainer container kubepods-burstable-podaf2b8dd06322531559039d0386741490.slice. May 16 00:21:06.754987 kubelet[2309]: E0516 00:21:06.754950 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:06.759410 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. 
May 16 00:21:06.771266 kubelet[2309]: E0516 00:21:06.771221 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:06.773113 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 16 00:21:06.775252 kubelet[2309]: E0516 00:21:06.775215 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:06.844113 kubelet[2309]: I0516 00:21:06.843949 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af2b8dd06322531559039d0386741490-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af2b8dd06322531559039d0386741490\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:06.844113 kubelet[2309]: I0516 00:21:06.844008 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:06.844113 kubelet[2309]: I0516 00:21:06.844039 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 00:21:06.844113 kubelet[2309]: I0516 00:21:06.844074 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af2b8dd06322531559039d0386741490-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af2b8dd06322531559039d0386741490\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:06.844113 kubelet[2309]: I0516 00:21:06.844102 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af2b8dd06322531559039d0386741490-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af2b8dd06322531559039d0386741490\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:06.844415 kubelet[2309]: I0516 00:21:06.844124 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:06.844415 kubelet[2309]: I0516 00:21:06.844143 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:06.844415 kubelet[2309]: I0516 00:21:06.844278 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:06.844415 kubelet[2309]: I0516 00:21:06.844305 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:07.001021 kubelet[2309]: W0516 00:21:07.000904 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:07.001021 kubelet[2309]: E0516 00:21:07.000988 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:07.056006 kubelet[2309]: E0516 00:21:07.055958 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:07.056757 containerd[1507]: time="2025-05-16T00:21:07.056709881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af2b8dd06322531559039d0386741490,Namespace:kube-system,Attempt:0,}" May 16 00:21:07.072136 kubelet[2309]: E0516 00:21:07.072082 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:07.072715 containerd[1507]: time="2025-05-16T00:21:07.072656116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 16 00:21:07.075991 kubelet[2309]: E0516 00:21:07.075940 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:07.076415 containerd[1507]: time="2025-05-16T00:21:07.076374116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 16 00:21:07.119336 kubelet[2309]: W0516 00:21:07.119180 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:07.119336 kubelet[2309]: E0516 00:21:07.119248 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:07.270708 kubelet[2309]: W0516 00:21:07.270642 2309 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:07.271191 kubelet[2309]: E0516 00:21:07.270732 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:07.366228 kubelet[2309]: I0516 00:21:07.366177 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:07.366728 kubelet[2309]: E0516 00:21:07.366653 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 16 00:21:07.618313 kubelet[2309]: W0516 00:21:07.618255 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 16 00:21:07.618449 kubelet[2309]: E0516 00:21:07.618330 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:07.763993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121319497.mount: Deactivated successfully. 
May 16 00:21:07.778140 containerd[1507]: time="2025-05-16T00:21:07.778037280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:07.781261 containerd[1507]: time="2025-05-16T00:21:07.781170343Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 16 00:21:07.782342 containerd[1507]: time="2025-05-16T00:21:07.782296107Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:07.784449 containerd[1507]: time="2025-05-16T00:21:07.784410739Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:07.785368 containerd[1507]: time="2025-05-16T00:21:07.785312145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 00:21:07.786863 containerd[1507]: time="2025-05-16T00:21:07.786830637Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:07.787520 containerd[1507]: time="2025-05-16T00:21:07.787474346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 00:21:07.792617 containerd[1507]: time="2025-05-16T00:21:07.792565715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:07.795313 containerd[1507]: time="2025-05-16T00:21:07.795222677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 722.428894ms" May 16 00:21:07.796137 containerd[1507]: time="2025-05-16T00:21:07.796101806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 739.266876ms" May 16 00:21:07.800002 containerd[1507]: time="2025-05-16T00:21:07.799915065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 723.423384ms" May 16 00:21:08.390979 containerd[1507]: time="2025-05-16T00:21:08.389423750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:08.390979 containerd[1507]: time="2025-05-16T00:21:08.389496681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:08.390979 containerd[1507]: time="2025-05-16T00:21:08.389510029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.390979 containerd[1507]: time="2025-05-16T00:21:08.389608915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.391755 containerd[1507]: time="2025-05-16T00:21:08.388731432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:08.391755 containerd[1507]: time="2025-05-16T00:21:08.391174225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:08.391755 containerd[1507]: time="2025-05-16T00:21:08.391217444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.391755 containerd[1507]: time="2025-05-16T00:21:08.391427370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.402848 containerd[1507]: time="2025-05-16T00:21:08.402624361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:08.402848 containerd[1507]: time="2025-05-16T00:21:08.402729720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:08.402848 containerd[1507]: time="2025-05-16T00:21:08.402770063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.413559 containerd[1507]: time="2025-05-16T00:21:08.413250907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.598784 systemd[1]: Started cri-containerd-4ce1e4bc4a69688baccca6722e195ce257c5ec12e54df94dd1b45b0227fa124c.scope - libcontainer container 4ce1e4bc4a69688baccca6722e195ce257c5ec12e54df94dd1b45b0227fa124c. May 16 00:21:08.602316 systemd[1]: Started cri-containerd-0f84ce1d446b137ec6f15e376a00cfd669db95a7bfc3667253bade65b5937fc0.scope - libcontainer container 0f84ce1d446b137ec6f15e376a00cfd669db95a7bfc3667253bade65b5937fc0. May 16 00:21:08.608542 systemd[1]: Started cri-containerd-1d273f76aec5f7b11e2efba74071b48dff382e6bb7e4316ee5c5bf59b91c882e.scope - libcontainer container 1d273f76aec5f7b11e2efba74071b48dff382e6bb7e4316ee5c5bf59b91c882e. 
May 16 00:21:08.663931 containerd[1507]: time="2025-05-16T00:21:08.663760714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ce1e4bc4a69688baccca6722e195ce257c5ec12e54df94dd1b45b0227fa124c\"" May 16 00:21:08.668720 kubelet[2309]: E0516 00:21:08.666232 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:08.669503 containerd[1507]: time="2025-05-16T00:21:08.669457644Z" level=info msg="CreateContainer within sandbox \"4ce1e4bc4a69688baccca6722e195ce257c5ec12e54df94dd1b45b0227fa124c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:21:08.671196 containerd[1507]: time="2025-05-16T00:21:08.671163686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af2b8dd06322531559039d0386741490,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d273f76aec5f7b11e2efba74071b48dff382e6bb7e4316ee5c5bf59b91c882e\"" May 16 00:21:08.672382 kubelet[2309]: E0516 00:21:08.672342 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:08.674391 containerd[1507]: time="2025-05-16T00:21:08.674345132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f84ce1d446b137ec6f15e376a00cfd669db95a7bfc3667253bade65b5937fc0\"" May 16 00:21:08.674890 containerd[1507]: time="2025-05-16T00:21:08.674858869Z" level=info msg="CreateContainer within sandbox \"1d273f76aec5f7b11e2efba74071b48dff382e6bb7e4316ee5c5bf59b91c882e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:21:08.675254 kubelet[2309]: E0516 00:21:08.675224 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:08.680704 containerd[1507]: time="2025-05-16T00:21:08.680641045Z" level=info msg="CreateContainer within sandbox \"0f84ce1d446b137ec6f15e376a00cfd669db95a7bfc3667253bade65b5937fc0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:21:08.968752 kubelet[2309]: I0516 00:21:08.968716 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:08.969233 kubelet[2309]: E0516 00:21:08.969185 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 16 00:21:09.010268 containerd[1507]: time="2025-05-16T00:21:09.010197481Z" level=info msg="CreateContainer within sandbox \"0f84ce1d446b137ec6f15e376a00cfd669db95a7bfc3667253bade65b5937fc0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e071913d3f7f26520aee6ed6c761b08540f521a7d616ef5983881ff1464f6bba\"" May 16 00:21:09.011115 containerd[1507]: time="2025-05-16T00:21:09.011079077Z" level=info msg="StartContainer for \"e071913d3f7f26520aee6ed6c761b08540f521a7d616ef5983881ff1464f6bba\"" May 16 00:21:09.012665 containerd[1507]: time="2025-05-16T00:21:09.012622134Z" level=info msg="CreateContainer within sandbox 
\"4ce1e4bc4a69688baccca6722e195ce257c5ec12e54df94dd1b45b0227fa124c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"05a5c40ed62f8b771152d0998956cdc155e281e53878fe89245bfe45eeed34f6\"" May 16 00:21:09.013078 containerd[1507]: time="2025-05-16T00:21:09.012978602Z" level=info msg="StartContainer for \"05a5c40ed62f8b771152d0998956cdc155e281e53878fe89245bfe45eeed34f6\"" May 16 00:21:09.015499 containerd[1507]: time="2025-05-16T00:21:09.015397323Z" level=info msg="CreateContainer within sandbox \"1d273f76aec5f7b11e2efba74071b48dff382e6bb7e4316ee5c5bf59b91c882e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"96067b5d5a5ec1194b076c9bcae849dcb4128036f158d8930f7ef740d3090700\"" May 16 00:21:09.017517 containerd[1507]: time="2025-05-16T00:21:09.016019973Z" level=info msg="StartContainer for \"96067b5d5a5ec1194b076c9bcae849dcb4128036f158d8930f7ef740d3090700\"" May 16 00:21:09.095050 systemd[1]: Started cri-containerd-05a5c40ed62f8b771152d0998956cdc155e281e53878fe89245bfe45eeed34f6.scope - libcontainer container 05a5c40ed62f8b771152d0998956cdc155e281e53878fe89245bfe45eeed34f6. May 16 00:21:09.097006 systemd[1]: Started cri-containerd-e071913d3f7f26520aee6ed6c761b08540f521a7d616ef5983881ff1464f6bba.scope - libcontainer container e071913d3f7f26520aee6ed6c761b08540f521a7d616ef5983881ff1464f6bba. May 16 00:21:09.116912 systemd[1]: Started cri-containerd-96067b5d5a5ec1194b076c9bcae849dcb4128036f158d8930f7ef740d3090700.scope - libcontainer container 96067b5d5a5ec1194b076c9bcae849dcb4128036f158d8930f7ef740d3090700. May 16 00:21:09.197354 containerd[1507]: time="2025-05-16T00:21:09.197283456Z" level=info msg="StartContainer for \"05a5c40ed62f8b771152d0998956cdc155e281e53878fe89245bfe45eeed34f6\" returns successfully" May 16 00:21:09.197525 containerd[1507]: time="2025-05-16T00:21:09.197497139Z" level=info msg="StartContainer for \"e071913d3f7f26520aee6ed6c761b08540f521a7d616ef5983881ff1464f6bba\" returns successfully" May 16 00:21:09.197679 containerd[1507]: time="2025-05-16T00:21:09.197538565Z" level=info msg="StartContainer for \"96067b5d5a5ec1194b076c9bcae849dcb4128036f158d8930f7ef740d3090700\" returns successfully" May 16 00:21:09.642679 kubelet[2309]: E0516 00:21:09.642652 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:09.644928 kubelet[2309]: E0516 00:21:09.642970 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:09.646908 kubelet[2309]: E0516 00:21:09.646731 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:09.646908 kubelet[2309]: E0516 00:21:09.646862 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:09.650206 kubelet[2309]: E0516 00:21:09.650189 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:09.650298 kubelet[2309]: E0516 00:21:09.650274 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
May 16 00:21:10.667405 kubelet[2309]: E0516 00:21:10.667348 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:10.668074 kubelet[2309]: E0516 00:21:10.667534 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:10.668074 kubelet[2309]: E0516 00:21:10.667923 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:10.668074 kubelet[2309]: E0516 00:21:10.668049 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:11.220139 kubelet[2309]: E0516 00:21:11.220063 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 00:21:11.668254 kubelet[2309]: E0516 00:21:11.668145 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:21:11.668625 kubelet[2309]: E0516 00:21:11.668268 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:12.072832 kubelet[2309]: E0516 00:21:12.072795 2309 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 16 00:21:12.171446 kubelet[2309]: I0516 00:21:12.171388 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:12.219307 kubelet[2309]: I0516 00:21:12.219250 2309 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 00:21:12.219307 kubelet[2309]: E0516 00:21:12.219297 2309 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 00:21:12.265584 kubelet[2309]: E0516 00:21:12.265535 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:12.366672 kubelet[2309]: E0516 00:21:12.366541 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:12.467223 kubelet[2309]: E0516 00:21:12.467150 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:12.567576 kubelet[2309]: E0516 00:21:12.567521 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:12.667730 kubelet[2309]: E0516 00:21:12.667579 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:12.768122 kubelet[2309]: E0516 00:21:12.768072 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:12.868567 kubelet[2309]: E0516 00:21:12.868516 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:12.968974 kubelet[2309]: E0516 00:21:12.968895 2309 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:13.070055 kubelet[2309]: E0516 00:21:13.070008 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:13.171254 kubelet[2309]: E0516 00:21:13.171197 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:13.271925 kubelet[2309]: E0516 00:21:13.271773 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:13.372853 kubelet[2309]: E0516 00:21:13.372785 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:13.425075 kubelet[2309]: I0516 00:21:13.425027 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:21:13.480611 kubelet[2309]: I0516 00:21:13.480558 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:21:13.511163 kubelet[2309]: I0516 00:21:13.511091 2309 apiserver.go:52] "Watching apiserver" May 16 00:21:13.513359 kubelet[2309]: E0516 00:21:13.513334 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:13.521808 kubelet[2309]: I0516 00:21:13.521767 2309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:21:13.672875 kubelet[2309]: I0516 00:21:13.672501 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:21:13.672875 kubelet[2309]: E0516 00:21:13.672675 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:13.709358 kubelet[2309]: E0516 00:21:13.709311 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:15.139542 kubelet[2309]: E0516 00:21:15.139484 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:17.733511 kubelet[2309]: E0516 00:21:17.733446 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:20.331508 kubelet[2309]: E0516 00:21:20.331456 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:20.414232 kubelet[2309]: I0516 00:21:20.413921 2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.41387334 podStartE2EDuration="7.41387334s" podCreationTimestamp="2025-05-16 00:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:13.738711229 +0000 UTC m=+10.688768014" watchObservedRunningTime="2025-05-16 00:21:20.41387334 +0000 UTC m=+17.363930125" May 16 
00:21:20.471329 kubelet[2309]: I0516 00:21:20.471240 2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.471213162 podStartE2EDuration="7.471213162s" podCreationTimestamp="2025-05-16 00:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:20.4263752 +0000 UTC m=+17.376431975" watchObservedRunningTime="2025-05-16 00:21:20.471213162 +0000 UTC m=+17.421269937" May 16 00:21:20.471558 kubelet[2309]: I0516 00:21:20.471374 2309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.47136931 podStartE2EDuration="7.47136931s" podCreationTimestamp="2025-05-16 00:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:20.414253915 +0000 UTC m=+17.364310690" watchObservedRunningTime="2025-05-16 00:21:20.47136931 +0000 UTC m=+17.421426075" May 16 00:21:21.222575 systemd[1]: Reloading requested from client PID 2595 ('systemctl') (unit session-9.scope)... May 16 00:21:21.222600 systemd[1]: Reloading... May 16 00:21:21.323806 zram_generator::config[2637]: No configuration found. May 16 00:21:21.452748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:21:21.555305 systemd[1]: Reloading finished in 332 ms. May 16 00:21:21.608326 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:21.622489 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:21:21.622852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:21.622918 systemd[1]: kubelet.service: Consumed 1.578s CPU time, 137.9M memory peak, 0B memory swap peak. May 16 00:21:21.631410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:21.869927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:21.876298 (kubelet)[2679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:21:21.922051 kubelet[2679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:21:21.922051 kubelet[2679]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:21:21.922051 kubelet[2679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
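The pod_startup_latency_tracker entries just above report podStartSLOduration values that, in these lines, equal watchObservedRunningTime minus podCreationTimestamp. A small worked check using the kube-apiserver-localhost timestamps copied from the log (datetime keeps only microsecond precision, so the result is truncated slightly):

    # Recompute the kube-apiserver-localhost startup duration from the entry above.
    from datetime import datetime, timezone

    created = datetime(2025, 5, 16, 0, 21, 13, tzinfo=timezone.utc)           # podCreationTimestamp
    watched = datetime(2025, 5, 16, 0, 21, 20, 413873, tzinfo=timezone.utc)   # watchObservedRunningTime (truncated to microseconds)

    print((watched - created).total_seconds())  # 7.413873, matching podStartSLOduration=7.41387334s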
May 16 00:21:21.922444 kubelet[2679]: I0516 00:21:21.922131 2679 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:21:21.928752 kubelet[2679]: I0516 00:21:21.928717 2679 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:21:21.928752 kubelet[2679]: I0516 00:21:21.928742 2679 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:21:21.929056 kubelet[2679]: I0516 00:21:21.929034 2679 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:21:21.930430 kubelet[2679]: I0516 00:21:21.930305 2679 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 00:21:21.933048 kubelet[2679]: I0516 00:21:21.932953 2679 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:21:21.939566 kubelet[2679]: E0516 00:21:21.939516 2679 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:21:21.939566 kubelet[2679]: I0516 00:21:21.939562 2679 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:21:21.944424 kubelet[2679]: I0516 00:21:21.944378 2679 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:21:21.944634 kubelet[2679]: I0516 00:21:21.944595 2679 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:21:21.944808 kubelet[2679]: I0516 00:21:21.944622 2679 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:21:21.944808 kubelet[2679]: I0516 00:21:21.944807 2679 topology_manager.go:138] "Creating 
topology manager with none policy" May 16 00:21:21.944989 kubelet[2679]: I0516 00:21:21.944818 2679 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:21:21.944989 kubelet[2679]: I0516 00:21:21.944872 2679 state_mem.go:36] "Initialized new in-memory state store" May 16 00:21:21.945076 kubelet[2679]: I0516 00:21:21.945021 2679 kubelet.go:446] "Attempting to sync node with API server" May 16 00:21:21.945076 kubelet[2679]: I0516 00:21:21.945044 2679 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:21:21.945076 kubelet[2679]: I0516 00:21:21.945061 2679 kubelet.go:352] "Adding apiserver pod source" May 16 00:21:21.945076 kubelet[2679]: I0516 00:21:21.945072 2679 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:21:21.946215 kubelet[2679]: I0516 00:21:21.946182 2679 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 00:21:21.946587 kubelet[2679]: I0516 00:21:21.946562 2679 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:21:21.948183 kubelet[2679]: I0516 00:21:21.946982 2679 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:21:21.948183 kubelet[2679]: I0516 00:21:21.947016 2679 server.go:1287] "Started kubelet" May 16 00:21:21.948183 kubelet[2679]: I0516 00:21:21.947119 2679 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:21:21.948183 kubelet[2679]: I0516 00:21:21.947309 2679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:21:21.948183 kubelet[2679]: I0516 00:21:21.947584 2679 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:21:21.948183 kubelet[2679]: I0516 00:21:21.947940 2679 server.go:479] "Adding debug handlers to kubelet server" May 16 00:21:21.953896 kubelet[2679]: I0516 00:21:21.953851 2679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:21:21.955500 kubelet[2679]: E0516 00:21:21.955420 2679 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:21:21.955669 kubelet[2679]: E0516 00:21:21.955640 2679 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:21.955923 kubelet[2679]: I0516 00:21:21.955869 2679 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:21:21.956067 kubelet[2679]: I0516 00:21:21.956044 2679 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:21:21.956408 kubelet[2679]: I0516 00:21:21.956381 2679 reconciler.go:26] "Reconciler: start to sync state" May 16 00:21:21.958430 kubelet[2679]: I0516 00:21:21.958144 2679 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:21:21.965717 kubelet[2679]: I0516 00:21:21.963206 2679 factory.go:221] Registration of the containerd container factory successfully May 16 00:21:21.965717 kubelet[2679]: I0516 00:21:21.963231 2679 factory.go:221] Registration of the systemd container factory successfully May 16 00:21:21.965717 kubelet[2679]: I0516 00:21:21.963311 2679 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:21:21.975660 kubelet[2679]: I0516 00:21:21.975601 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:21:21.976995 kubelet[2679]: I0516 00:21:21.976962 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:21:21.976995 kubelet[2679]: I0516 00:21:21.976991 2679 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:21:21.977104 kubelet[2679]: I0516 00:21:21.977024 2679 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
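The container manager NodeConfig dump above lists the hard eviction thresholds in effect (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A rough sketch of how such a threshold trips, under the assumption that a signal is compared either against a fixed quantity or against a percentage of capacity; the observed and capacity numbers are hypothetical, only the thresholds come from the log:

    # Hard eviction thresholds as listed in the NodeConfig dump above.
    MI = 1024 * 1024
    THRESHOLDS = {
        "memory.available":   {"quantity": 100 * MI},   # 100Mi
        "nodefs.available":   {"percentage": 0.10},
        "nodefs.inodesFree":  {"percentage": 0.05},
        "imagefs.available":  {"percentage": 0.15},
        "imagefs.inodesFree": {"percentage": 0.05},
    }

    def threshold_met(signal, observed, capacity):
        t = THRESHOLDS[signal]
        limit = t.get("quantity", t.get("percentage", 0.0) * capacity)
        return observed < limit

    # Hypothetical node: 4Gi RAM with 80Mi free trips the memory signal,
    # while a node filesystem with 30% free space does not trip nodefs.available.
    print(threshold_met("memory.available", observed=80 * MI, capacity=4096 * MI))               # True
    print(threshold_met("nodefs.available", observed=30 * 1024 * MI, capacity=100 * 1024 * MI))  # False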
May 16 00:21:21.977104 kubelet[2679]: I0516 00:21:21.977032 2679 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:21:21.977104 kubelet[2679]: E0516 00:21:21.977078 2679 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:21:22.009577 kubelet[2679]: I0516 00:21:22.009540 2679 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:21:22.009577 kubelet[2679]: I0516 00:21:22.009566 2679 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:21:22.009577 kubelet[2679]: I0516 00:21:22.009589 2679 state_mem.go:36] "Initialized new in-memory state store" May 16 00:21:22.009872 kubelet[2679]: I0516 00:21:22.009824 2679 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 00:21:22.009872 kubelet[2679]: I0516 00:21:22.009840 2679 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 00:21:22.009872 kubelet[2679]: I0516 00:21:22.009862 2679 policy_none.go:49] "None policy: Start" May 16 00:21:22.009872 kubelet[2679]: I0516 00:21:22.009871 2679 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:21:22.009961 kubelet[2679]: I0516 00:21:22.009884 2679 state_mem.go:35] "Initializing new in-memory state store" May 16 00:21:22.010001 kubelet[2679]: I0516 00:21:22.009989 2679 state_mem.go:75] "Updated machine memory state" May 16 00:21:22.014406 kubelet[2679]: I0516 00:21:22.014366 2679 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:21:22.014644 kubelet[2679]: I0516 00:21:22.014623 2679 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:21:22.014727 kubelet[2679]: I0516 00:21:22.014645 2679 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:21:22.014968 kubelet[2679]: I0516 00:21:22.014930 2679 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:21:22.018722 kubelet[2679]: E0516 00:21:22.017579 2679 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 00:21:22.078045 kubelet[2679]: I0516 00:21:22.077988 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:21:22.078045 kubelet[2679]: I0516 00:21:22.078050 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:21:22.078245 kubelet[2679]: I0516 00:21:22.077988 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:21:22.106079 kubelet[2679]: E0516 00:21:22.105987 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 00:21:22.106257 kubelet[2679]: E0516 00:21:22.106112 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:21:22.106257 kubelet[2679]: E0516 00:21:22.106169 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 16 00:21:22.122604 kubelet[2679]: I0516 00:21:22.120765 2679 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:21:22.133023 kubelet[2679]: I0516 00:21:22.132971 2679 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 16 00:21:22.133170 kubelet[2679]: I0516 00:21:22.133093 2679 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 00:21:22.158125 kubelet[2679]: I0516 00:21:22.158061 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af2b8dd06322531559039d0386741490-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af2b8dd06322531559039d0386741490\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:22.158125 kubelet[2679]: I0516 00:21:22.158121 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af2b8dd06322531559039d0386741490-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af2b8dd06322531559039d0386741490\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:22.158345 kubelet[2679]: I0516 00:21:22.158152 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:22.158345 kubelet[2679]: I0516 00:21:22.158177 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 00:21:22.158345 kubelet[2679]: I0516 00:21:22.158201 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " 
pod="kube-system/kube-controller-manager-localhost" May 16 00:21:22.158345 kubelet[2679]: I0516 00:21:22.158221 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:22.158345 kubelet[2679]: I0516 00:21:22.158300 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af2b8dd06322531559039d0386741490-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af2b8dd06322531559039d0386741490\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:22.158511 kubelet[2679]: I0516 00:21:22.158323 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:22.158511 kubelet[2679]: I0516 00:21:22.158341 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:22.407488 kubelet[2679]: E0516 00:21:22.407215 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:22.407488 kubelet[2679]: E0516 00:21:22.407225 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:22.409983 kubelet[2679]: E0516 00:21:22.409886 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:22.946271 kubelet[2679]: I0516 00:21:22.946224 2679 apiserver.go:52] "Watching apiserver" May 16 00:21:22.956250 kubelet[2679]: I0516 00:21:22.956212 2679 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:21:22.991796 kubelet[2679]: E0516 00:21:22.991729 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:22.991796 kubelet[2679]: I0516 00:21:22.991731 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:21:22.991952 kubelet[2679]: I0516 00:21:22.991828 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:21:23.019914 kubelet[2679]: E0516 00:21:23.019392 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:21:23.019914 kubelet[2679]: E0516 00:21:23.019782 2679 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:23.020377 kubelet[2679]: E0516 00:21:23.020352 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 00:21:23.020617 kubelet[2679]: E0516 00:21:23.020600 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:23.993614 kubelet[2679]: E0516 00:21:23.993566 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:23.994241 kubelet[2679]: E0516 00:21:23.993664 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:24.995100 kubelet[2679]: E0516 00:21:24.995002 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:24.995561 kubelet[2679]: E0516 00:21:24.995164 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:25.831599 kubelet[2679]: I0516 00:21:25.831537 2679 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:21:25.832208 containerd[1507]: time="2025-05-16T00:21:25.832150109Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 16 00:21:25.832573 kubelet[2679]: I0516 00:21:25.832507 2679 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:21:25.996675 kubelet[2679]: E0516 00:21:25.996638 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:28.527881 kubelet[2679]: E0516 00:21:28.527834 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:28.912356 kubelet[2679]: E0516 00:21:28.911776 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:29.002761 kubelet[2679]: E0516 00:21:29.001550 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:29.002761 kubelet[2679]: E0516 00:21:29.001910 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:30.002993 kubelet[2679]: E0516 00:21:30.002950 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:30.324471 systemd[1]: Created slice kubepods-besteffort-podfe231737_936f_44e8_a9a7_10fc3f0f68fb.slice - libcontainer container kubepods-besteffort-podfe231737_936f_44e8_a9a7_10fc3f0f68fb.slice. May 16 00:21:30.408369 kubelet[2679]: I0516 00:21:30.408317 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe231737-936f-44e8-a9a7-10fc3f0f68fb-kube-proxy\") pod \"kube-proxy-vdntv\" (UID: \"fe231737-936f-44e8-a9a7-10fc3f0f68fb\") " pod="kube-system/kube-proxy-vdntv" May 16 00:21:30.408369 kubelet[2679]: I0516 00:21:30.408370 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe231737-936f-44e8-a9a7-10fc3f0f68fb-lib-modules\") pod \"kube-proxy-vdntv\" (UID: \"fe231737-936f-44e8-a9a7-10fc3f0f68fb\") " pod="kube-system/kube-proxy-vdntv" May 16 00:21:30.408595 kubelet[2679]: I0516 00:21:30.408396 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b522s\" (UniqueName: \"kubernetes.io/projected/fe231737-936f-44e8-a9a7-10fc3f0f68fb-kube-api-access-b522s\") pod \"kube-proxy-vdntv\" (UID: \"fe231737-936f-44e8-a9a7-10fc3f0f68fb\") " pod="kube-system/kube-proxy-vdntv" May 16 00:21:30.408595 kubelet[2679]: I0516 00:21:30.408504 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe231737-936f-44e8-a9a7-10fc3f0f68fb-xtables-lock\") pod \"kube-proxy-vdntv\" (UID: \"fe231737-936f-44e8-a9a7-10fc3f0f68fb\") " pod="kube-system/kube-proxy-vdntv" May 16 00:21:30.453822 systemd[1]: Created slice kubepods-besteffort-pod2c54f229_fdc4_4e74_aa2f_9513223f7e39.slice - libcontainer container kubepods-besteffort-pod2c54f229_fdc4_4e74_aa2f_9513223f7e39.slice. 
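The kubepods-besteffort-pod...slice units created above (the node runs with the systemd cgroup driver, per the NodeConfig dump earlier) embed the pod UID with its dashes rewritten as underscores, since the dash is systemd's slice separator. A minimal sketch of that naming, matching the two slices in the log:

    # Slice naming visible in the "Created slice" entries above.
    def pod_slice_name(pod_uid, qos="besteffort"):
        return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice_name("fe231737-936f-44e8-a9a7-10fc3f0f68fb"))
    # kubepods-besteffort-podfe231737_936f_44e8_a9a7_10fc3f0f68fb.slice
    print(pod_slice_name("2c54f229-fdc4-4e74-aa2f-9513223f7e39"))
    # kubepods-besteffort-pod2c54f229_fdc4_4e74_aa2f_9513223f7e39.slice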
May 16 00:21:30.509778 kubelet[2679]: I0516 00:21:30.509728 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c54f229-fdc4-4e74-aa2f-9513223f7e39-var-lib-calico\") pod \"tigera-operator-844669ff44-lx9tx\" (UID: \"2c54f229-fdc4-4e74-aa2f-9513223f7e39\") " pod="tigera-operator/tigera-operator-844669ff44-lx9tx" May 16 00:21:30.509963 kubelet[2679]: I0516 00:21:30.509811 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6ms8\" (UniqueName: \"kubernetes.io/projected/2c54f229-fdc4-4e74-aa2f-9513223f7e39-kube-api-access-p6ms8\") pod \"tigera-operator-844669ff44-lx9tx\" (UID: \"2c54f229-fdc4-4e74-aa2f-9513223f7e39\") " pod="tigera-operator/tigera-operator-844669ff44-lx9tx" May 16 00:21:30.641876 kubelet[2679]: E0516 00:21:30.641728 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:30.642622 containerd[1507]: time="2025-05-16T00:21:30.642558780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vdntv,Uid:fe231737-936f-44e8-a9a7-10fc3f0f68fb,Namespace:kube-system,Attempt:0,}" May 16 00:21:30.708338 containerd[1507]: time="2025-05-16T00:21:30.708171497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:30.708338 containerd[1507]: time="2025-05-16T00:21:30.708254725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:30.708338 containerd[1507]: time="2025-05-16T00:21:30.708274225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:30.708559 containerd[1507]: time="2025-05-16T00:21:30.708408515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:30.744979 systemd[1]: Started cri-containerd-cc2010539e2aacf78c4b90511ac70f34d62c4b692af7745908be0524fc5d4db7.scope - libcontainer container cc2010539e2aacf78c4b90511ac70f34d62c4b692af7745908be0524fc5d4db7. 
May 16 00:21:30.757053 containerd[1507]: time="2025-05-16T00:21:30.756965675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-lx9tx,Uid:2c54f229-fdc4-4e74-aa2f-9513223f7e39,Namespace:tigera-operator,Attempt:0,}" May 16 00:21:30.777534 containerd[1507]: time="2025-05-16T00:21:30.777468715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vdntv,Uid:fe231737-936f-44e8-a9a7-10fc3f0f68fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc2010539e2aacf78c4b90511ac70f34d62c4b692af7745908be0524fc5d4db7\"" May 16 00:21:30.778592 kubelet[2679]: E0516 00:21:30.778560 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:30.781249 containerd[1507]: time="2025-05-16T00:21:30.781181633Z" level=info msg="CreateContainer within sandbox \"cc2010539e2aacf78c4b90511ac70f34d62c4b692af7745908be0524fc5d4db7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:21:30.803980 containerd[1507]: time="2025-05-16T00:21:30.803628253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:30.803980 containerd[1507]: time="2025-05-16T00:21:30.803744467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:30.803980 containerd[1507]: time="2025-05-16T00:21:30.803759087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:30.804420 containerd[1507]: time="2025-05-16T00:21:30.803903498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:30.806151 containerd[1507]: time="2025-05-16T00:21:30.806091441Z" level=info msg="CreateContainer within sandbox \"cc2010539e2aacf78c4b90511ac70f34d62c4b692af7745908be0524fc5d4db7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d03ff1652f49c7d562d4259418a1630c2ae2a6eded978839b9b5e547419910bb\"" May 16 00:21:30.807132 containerd[1507]: time="2025-05-16T00:21:30.807067930Z" level=info msg="StartContainer for \"d03ff1652f49c7d562d4259418a1630c2ae2a6eded978839b9b5e547419910bb\"" May 16 00:21:30.848961 systemd[1]: Started cri-containerd-29e6b348df5924f291264e0249fc26e9b8c98bbcc1126835673473946e9b5c8d.scope - libcontainer container 29e6b348df5924f291264e0249fc26e9b8c98bbcc1126835673473946e9b5c8d. May 16 00:21:30.853613 systemd[1]: Started cri-containerd-d03ff1652f49c7d562d4259418a1630c2ae2a6eded978839b9b5e547419910bb.scope - libcontainer container d03ff1652f49c7d562d4259418a1630c2ae2a6eded978839b9b5e547419910bb. 
May 16 00:21:30.904426 containerd[1507]: time="2025-05-16T00:21:30.904257411Z" level=info msg="StartContainer for \"d03ff1652f49c7d562d4259418a1630c2ae2a6eded978839b9b5e547419910bb\" returns successfully" May 16 00:21:30.905847 containerd[1507]: time="2025-05-16T00:21:30.904402704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-lx9tx,Uid:2c54f229-fdc4-4e74-aa2f-9513223f7e39,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"29e6b348df5924f291264e0249fc26e9b8c98bbcc1126835673473946e9b5c8d\"" May 16 00:21:30.907252 containerd[1507]: time="2025-05-16T00:21:30.907173442Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 16 00:21:31.004997 kubelet[2679]: E0516 00:21:31.004958 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:31.009289 kubelet[2679]: E0516 00:21:31.009256 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:31.170636 kubelet[2679]: I0516 00:21:31.170188 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vdntv" podStartSLOduration=5.170165659 podStartE2EDuration="5.170165659s" podCreationTimestamp="2025-05-16 00:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:31.169427972 +0000 UTC m=+9.289050658" watchObservedRunningTime="2025-05-16 00:21:31.170165659 +0000 UTC m=+9.289788325" May 16 00:21:34.895866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3589720631.mount: Deactivated successfully. 
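The tmpmount unit systemd just deactivated is a path encoded as a unit name: "/" becomes "-" and a literal "-" becomes \x2d. A small sketch that reverses that escaping for names like the one above (it assumes no escape sequences other than \xNN appear):

    # Decode the systemd mount unit name from the entry above back into a path.
    import re

    def unescape_unit_path(unit_name):
        name = unit_name.removesuffix(".mount")
        name = name.replace("-", "/")                       # undo "/" -> "-" (safe: the escapes contain no dashes)
        name = re.sub(r"\\x([0-9a-fA-F]{2})",               # undo "\xNN", e.g. \x2d -> "-"
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(unescape_unit_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount3589720631.mount"))
    # /var/lib/containerd/tmpmounts/containerd-mount3589720631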
May 16 00:21:36.671393 containerd[1507]: time="2025-05-16T00:21:36.671317168Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:36.699992 containerd[1507]: time="2025-05-16T00:21:36.699885766Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 16 00:21:36.755266 containerd[1507]: time="2025-05-16T00:21:36.755169764Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:36.814066 containerd[1507]: time="2025-05-16T00:21:36.813986601Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:36.814868 containerd[1507]: time="2025-05-16T00:21:36.814799726Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 5.907587768s" May 16 00:21:36.814868 containerd[1507]: time="2025-05-16T00:21:36.814868189Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 16 00:21:36.822978 containerd[1507]: time="2025-05-16T00:21:36.822772370Z" level=info msg="CreateContainer within sandbox \"29e6b348df5924f291264e0249fc26e9b8c98bbcc1126835673473946e9b5c8d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 00:21:37.293508 containerd[1507]: time="2025-05-16T00:21:37.293405513Z" level=info msg="CreateContainer within sandbox \"29e6b348df5924f291264e0249fc26e9b8c98bbcc1126835673473946e9b5c8d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"aab03d170991e01e074ad80e5b4532373375a31f11c996e47cfd86c7ba2c4f25\"" May 16 00:21:37.294177 containerd[1507]: time="2025-05-16T00:21:37.294141944Z" level=info msg="StartContainer for \"aab03d170991e01e074ad80e5b4532373375a31f11c996e47cfd86c7ba2c4f25\"" May 16 00:21:37.330880 systemd[1]: Started cri-containerd-aab03d170991e01e074ad80e5b4532373375a31f11c996e47cfd86c7ba2c4f25.scope - libcontainer container aab03d170991e01e074ad80e5b4532373375a31f11c996e47cfd86c7ba2c4f25. May 16 00:21:37.488773 containerd[1507]: time="2025-05-16T00:21:37.488667559Z" level=info msg="StartContainer for \"aab03d170991e01e074ad80e5b4532373375a31f11c996e47cfd86c7ba2c4f25\" returns successfully" May 16 00:21:54.011262 sudo[1698]: pam_unix(sudo:session): session closed for user root May 16 00:21:54.012912 sshd[1697]: Connection closed by 10.0.0.1 port 46984 May 16 00:21:54.014226 sshd-session[1693]: pam_unix(sshd:session): session closed for user core May 16 00:21:54.019171 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:46984.service: Deactivated successfully. May 16 00:21:54.021538 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:21:54.022024 systemd[1]: session-9.scope: Consumed 5.535s CPU time, 155.9M memory peak, 0B memory swap peak. May 16 00:21:54.022646 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. May 16 00:21:54.023930 systemd-logind[1486]: Removed session 9.
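As a closing back-of-the-envelope check from the containerd entries earlier in this stretch: the tigera operator image was reported as 25051446 bytes and the pull completed in 5.907587768s, which works out to roughly 4 MB/s:

    # Pull rate implied by the "Pulled image quay.io/tigera/operator:v1.38.0" entry above.
    image_size_bytes = 25_051_446   # size reported by containerd
    pull_seconds = 5.907587768      # duration reported by containerd

    rate = image_size_bytes / pull_seconds
    print(f"{rate / 1e6:.2f} MB/s")     # ~4.24 MB/s
    print(f"{rate / 2**20:.2f} MiB/s")  # ~4.04 MiB/s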