May 13 00:24:07.887466 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:46:21 -00 2025
May 13 00:24:07.887487 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:24:07.887499 kernel: BIOS-provided physical RAM map:
May 13 00:24:07.887505 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 00:24:07.887511 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 13 00:24:07.887517 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 13 00:24:07.887524 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 13 00:24:07.887530 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 13 00:24:07.887536 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 13 00:24:07.887541 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 13 00:24:07.887602 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 13 00:24:07.887608 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
May 13 00:24:07.887614 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
May 13 00:24:07.887620 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
May 13 00:24:07.887628 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 13 00:24:07.887635 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 13 00:24:07.887647 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 13 00:24:07.887653 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 13 00:24:07.887660 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 13 00:24:07.887666 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 13 00:24:07.887673 kernel: NX (Execute Disable) protection: active
May 13 00:24:07.887679 kernel: APIC: Static calls initialized
May 13 00:24:07.887685 kernel: efi: EFI v2.7 by EDK II
May 13 00:24:07.887692 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
May 13 00:24:07.887698 kernel: SMBIOS 2.8 present.
May 13 00:24:07.887704 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 13 00:24:07.887711 kernel: Hypervisor detected: KVM
May 13 00:24:07.887719 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 00:24:07.887726 kernel: kvm-clock: using sched offset of 3976299872 cycles
May 13 00:24:07.887732 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 00:24:07.887739 kernel: tsc: Detected 2794.748 MHz processor
May 13 00:24:07.887746 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 00:24:07.887753 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 00:24:07.887760 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 13 00:24:07.887767 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 13 00:24:07.887773 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 00:24:07.887782 kernel: Using GB pages for direct mapping
May 13 00:24:07.887788 kernel: Secure boot disabled
May 13 00:24:07.887795 kernel: ACPI: Early table checksum verification disabled
May 13 00:24:07.887802 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 13 00:24:07.887812 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:24:07.887819 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:24:07.887826 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:24:07.887835 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 13 00:24:07.887842 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:24:07.887849 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:24:07.887856 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:24:07.887863 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:24:07.887870 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 13 00:24:07.887877 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 13 00:24:07.887886 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 13 00:24:07.887893 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 13 00:24:07.887900 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 13 00:24:07.887906 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 13 00:24:07.887913 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 13 00:24:07.887920 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 13 00:24:07.887927 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 13 00:24:07.887934 kernel: No NUMA configuration found
May 13 00:24:07.887941 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 13 00:24:07.887947 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 13 00:24:07.887960 kernel: Zone ranges:
May 13 00:24:07.887967 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 00:24:07.887974 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 13 00:24:07.887981 kernel: Normal empty
May 13 00:24:07.887987 kernel: Movable zone start for each node
May 13 00:24:07.887994 kernel: Early memory node ranges
May 13 00:24:07.888001 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 13 00:24:07.888008 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 13 00:24:07.888014 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 13 00:24:07.888024 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 13 00:24:07.888030 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 13 00:24:07.888037 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 13 00:24:07.888044 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 13 00:24:07.888051 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:24:07.888058 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 13 00:24:07.888064 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 13 00:24:07.888071 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:24:07.888078 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 13 00:24:07.888087 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 13 00:24:07.888094 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 13 00:24:07.888101 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 00:24:07.888107 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 00:24:07.888114 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 00:24:07.888121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 00:24:07.888128 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 00:24:07.888135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 00:24:07.888142 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 00:24:07.888148 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 00:24:07.888158 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 00:24:07.888165 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 00:24:07.888171 kernel: TSC deadline timer available
May 13 00:24:07.888178 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 00:24:07.888185 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 00:24:07.888192 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 00:24:07.888198 kernel: kvm-guest: setup PV sched yield
May 13 00:24:07.888205 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 13 00:24:07.888212 kernel: Booting paravirtualized kernel on KVM
May 13 00:24:07.888221 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 00:24:07.888228 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 00:24:07.888235 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 13 00:24:07.888242 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 13 00:24:07.888258 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 00:24:07.888266 kernel: kvm-guest: PV spinlocks enabled
May 13 00:24:07.888273 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 00:24:07.888281 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:24:07.888291 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:24:07.888297 kernel: random: crng init done
May 13 00:24:07.888304 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:24:07.888311 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:24:07.888318 kernel: Fallback order for Node 0: 0
May 13 00:24:07.888325 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 13 00:24:07.888332 kernel: Policy zone: DMA32
May 13 00:24:07.888338 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:24:07.888346 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 166140K reserved, 0K cma-reserved)
May 13 00:24:07.888355 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:24:07.888362 kernel: ftrace: allocating 37944 entries in 149 pages
May 13 00:24:07.888369 kernel: ftrace: allocated 149 pages with 4 groups
May 13 00:24:07.888376 kernel: Dynamic Preempt: voluntary
May 13 00:24:07.888390 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:24:07.888400 kernel: rcu: RCU event tracing is enabled.
May 13 00:24:07.888407 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:24:07.888414 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:24:07.888421 kernel: Rude variant of Tasks RCU enabled.
May 13 00:24:07.888428 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:24:07.888435 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:24:07.888443 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:24:07.888452 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 00:24:07.888459 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:24:07.888466 kernel: Console: colour dummy device 80x25
May 13 00:24:07.888473 kernel: printk: console [ttyS0] enabled
May 13 00:24:07.888481 kernel: ACPI: Core revision 20230628
May 13 00:24:07.888490 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 00:24:07.888497 kernel: APIC: Switch to symmetric I/O mode setup
May 13 00:24:07.888504 kernel: x2apic enabled
May 13 00:24:07.888511 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 00:24:07.888518 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 00:24:07.888526 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 00:24:07.888533 kernel: kvm-guest: setup PV IPIs
May 13 00:24:07.888540 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 00:24:07.888557 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 00:24:07.888567 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 13 00:24:07.888574 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 00:24:07.888582 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 00:24:07.888589 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 00:24:07.888596 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 00:24:07.888603 kernel: Spectre V2 : Mitigation: Retpolines
May 13 00:24:07.888610 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 00:24:07.888617 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 00:24:07.888625 kernel: RETBleed: Mitigation: untrained return thunk
May 13 00:24:07.888634 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 00:24:07.888641 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 00:24:07.888649 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 00:24:07.888656 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 00:24:07.888664 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 00:24:07.888671 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 00:24:07.888678 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 00:24:07.888685 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 00:24:07.888694 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 00:24:07.888702 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 00:24:07.888709 kernel: Freeing SMP alternatives memory: 32K
May 13 00:24:07.888716 kernel: pid_max: default: 32768 minimum: 301
May 13 00:24:07.888723 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:24:07.888730 kernel: landlock: Up and running.
May 13 00:24:07.888737 kernel: SELinux: Initializing.
May 13 00:24:07.888744 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:24:07.888751 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:24:07.888761 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 00:24:07.888768 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:24:07.888775 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:24:07.888783 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:24:07.888790 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 00:24:07.888797 kernel: ... version: 0
May 13 00:24:07.888804 kernel: ... bit width: 48
May 13 00:24:07.888811 kernel: ... generic registers: 6
May 13 00:24:07.888818 kernel: ... value mask: 0000ffffffffffff
May 13 00:24:07.888828 kernel: ... max period: 00007fffffffffff
May 13 00:24:07.888835 kernel: ... fixed-purpose events: 0
May 13 00:24:07.888842 kernel: ... event mask: 000000000000003f
May 13 00:24:07.888849 kernel: signal: max sigframe size: 1776
May 13 00:24:07.888856 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:24:07.888863 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:24:07.888870 kernel: smp: Bringing up secondary CPUs ...
May 13 00:24:07.888877 kernel: smpboot: x86: Booting SMP configuration:
May 13 00:24:07.888884 kernel: .... node #0, CPUs: #1 #2 #3
May 13 00:24:07.888893 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:24:07.888900 kernel: smpboot: Max logical packages: 1
May 13 00:24:07.888908 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 13 00:24:07.888915 kernel: devtmpfs: initialized
May 13 00:24:07.888922 kernel: x86/mm: Memory block size: 128MB
May 13 00:24:07.888929 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 13 00:24:07.888936 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 13 00:24:07.888944 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 13 00:24:07.888951 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 13 00:24:07.888960 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 13 00:24:07.888968 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:24:07.888975 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:24:07.888982 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:24:07.888989 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:24:07.888996 kernel: audit: initializing netlink subsys (disabled)
May 13 00:24:07.889003 kernel: audit: type=2000 audit(1747095847.493:1): state=initialized audit_enabled=0 res=1
May 13 00:24:07.889010 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:24:07.889017 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 00:24:07.889027 kernel: cpuidle: using governor menu
May 13 00:24:07.889034 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:24:07.889041 kernel: dca service started, version 1.12.1
May 13 00:24:07.889048 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 13 00:24:07.889055 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 13 00:24:07.889062 kernel: PCI: Using configuration type 1 for base access
May 13 00:24:07.889070 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 00:24:07.889077 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:24:07.889084 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:24:07.889093 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:24:07.889101 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:24:07.889108 kernel: ACPI: Added _OSI(Module Device)
May 13 00:24:07.889123 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:24:07.889131 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:24:07.889138 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:24:07.889152 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:24:07.889166 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 00:24:07.889173 kernel: ACPI: Interpreter enabled
May 13 00:24:07.889196 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 00:24:07.889210 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 00:24:07.889225 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 00:24:07.889254 kernel: PCI: Using E820 reservations for host bridge windows
May 13 00:24:07.889273 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 00:24:07.889289 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:24:07.889482 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:24:07.889666 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 00:24:07.889794 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 00:24:07.889804 kernel: PCI host bridge to bus 0000:00
May 13 00:24:07.889928 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 00:24:07.890039 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 00:24:07.890147 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 00:24:07.890264 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 00:24:07.890375 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 00:24:07.890514 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 13 00:24:07.890684 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:24:07.890861 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 00:24:07.891002 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 00:24:07.891124 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 13 00:24:07.891252 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 13 00:24:07.891380 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 13 00:24:07.891500 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 13 00:24:07.891636 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 00:24:07.891773 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:24:07.891895 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 13 00:24:07.892051 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 13 00:24:07.892179 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 13 00:24:07.892327 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 00:24:07.892454 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 13 00:24:07.892591 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 13 00:24:07.892714 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 13 00:24:07.892842 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 00:24:07.892987 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 13 00:24:07.893113 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 13 00:24:07.893238 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 13 00:24:07.893370 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 13 00:24:07.893497 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 00:24:07.893636 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 00:24:07.893764 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 00:24:07.893885 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 13 00:24:07.894004 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 13 00:24:07.894136 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 00:24:07.894265 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 13 00:24:07.894276 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 00:24:07.894283 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 00:24:07.894291 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 00:24:07.894298 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 00:24:07.894305 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 00:24:07.894312 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 00:24:07.894323 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 00:24:07.894330 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 00:24:07.894340 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 00:24:07.894350 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 00:24:07.894360 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 00:24:07.894370 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 00:24:07.894379 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 00:24:07.894389 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 00:24:07.894399 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 00:24:07.894412 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 00:24:07.894419 kernel: iommu: Default domain type: Translated
May 13 00:24:07.894426 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 00:24:07.894433 kernel: efivars: Registered efivars operations
May 13 00:24:07.894440 kernel: PCI: Using ACPI for IRQ routing
May 13 00:24:07.894448 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 00:24:07.894455 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 13 00:24:07.894462 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 13 00:24:07.894471 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 13 00:24:07.894478 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 13 00:24:07.894621 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 00:24:07.894743 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 00:24:07.894864 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 00:24:07.894874 kernel: vgaarb: loaded
May 13 00:24:07.894882 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 00:24:07.894889 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 00:24:07.894896 kernel: clocksource: Switched to clocksource kvm-clock
May 13 00:24:07.894908 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:24:07.894915 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:24:07.894922 kernel: pnp: PnP ACPI init
May 13 00:24:07.895051 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 13 00:24:07.895062 kernel: pnp: PnP ACPI: found 6 devices
May 13 00:24:07.895070 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 00:24:07.895077 kernel: NET: Registered PF_INET protocol family
May 13 00:24:07.895084 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:24:07.895095 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:24:07.895102 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:24:07.895110 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:24:07.895117 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:24:07.895124 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:24:07.895131 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:24:07.895138 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:24:07.895146 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:24:07.895153 kernel: NET: Registered PF_XDP protocol family
May 13 00:24:07.895287 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 13 00:24:07.895409 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 13 00:24:07.895572 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 00:24:07.895699 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 00:24:07.895809 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 00:24:07.895919 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 00:24:07.896029 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 00:24:07.896139 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 13 00:24:07.896153 kernel: PCI: CLS 0 bytes, default 64
May 13 00:24:07.896161 kernel: Initialise system trusted keyrings
May 13 00:24:07.896168 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:24:07.896176 kernel: Key type asymmetric registered
May 13 00:24:07.896183 kernel: Asymmetric key parser 'x509' registered
May 13 00:24:07.896190 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 00:24:07.896198 kernel: io scheduler mq-deadline registered
May 13 00:24:07.896205 kernel: io scheduler kyber registered
May 13 00:24:07.896212 kernel: io scheduler bfq registered
May 13 00:24:07.896222 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 00:24:07.896230 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 00:24:07.896237 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 00:24:07.896253 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 00:24:07.896260 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:24:07.896269 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 00:24:07.896276 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 00:24:07.896283 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 00:24:07.896291 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 00:24:07.896424 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 00:24:07.896436 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 00:24:07.896561 kernel: rtc_cmos 00:04: registered as rtc0
May 13 00:24:07.896678 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:24:07 UTC (1747095847)
May 13 00:24:07.896790 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 13 00:24:07.896800 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 00:24:07.896807 kernel: efifb: probing for efifb
May 13 00:24:07.896815 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
May 13 00:24:07.896826 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
May 13 00:24:07.896833 kernel: efifb: scrolling: redraw
May 13 00:24:07.896841 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
May 13 00:24:07.896848 kernel: Console: switching to colour frame buffer device 100x37
May 13 00:24:07.896855 kernel: fb0: EFI VGA frame buffer device
May 13 00:24:07.896880 kernel: pstore: Using crash dump compression: deflate
May 13 00:24:07.896891 kernel: pstore: Registered efi_pstore as persistent store backend
May 13 00:24:07.896898 kernel: NET: Registered PF_INET6 protocol family
May 13 00:24:07.896906 kernel: Segment Routing with IPv6
May 13 00:24:07.896916 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:24:07.896923 kernel: NET: Registered PF_PACKET protocol family
May 13 00:24:07.896931 kernel: Key type dns_resolver registered
May 13 00:24:07.896938 kernel: IPI shorthand broadcast: enabled
May 13 00:24:07.896946 kernel: sched_clock: Marking stable (585003354, 114631560)->(714654157, -15019243)
May 13 00:24:07.896953 kernel: registered taskstats version 1
May 13 00:24:07.896960 kernel: Loading compiled-in X.509 certificates
May 13 00:24:07.896968 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: b404fdaaed18d29adfca671c3bbb23eee96fb08f'
May 13 00:24:07.896976 kernel: Key type .fscrypt registered
May 13 00:24:07.896985 kernel: Key type fscrypt-provisioning registered
May 13 00:24:07.896993 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:24:07.897000 kernel: ima: Allocated hash algorithm: sha1
May 13 00:24:07.897008 kernel: ima: No architecture policies found
May 13 00:24:07.897015 kernel: clk: Disabling unused clocks
May 13 00:24:07.897023 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 13 00:24:07.897030 kernel: Write protecting the kernel read-only data: 36864k
May 13 00:24:07.897038 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 13 00:24:07.897048 kernel: Run /init as init process
May 13 00:24:07.897055 kernel: with arguments:
May 13 00:24:07.897062 kernel: /init
May 13 00:24:07.897070 kernel: with environment:
May 13 00:24:07.897077 kernel: HOME=/
May 13 00:24:07.897085 kernel: TERM=linux
May 13 00:24:07.897092 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:24:07.897102 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:24:07.897114 systemd[1]: Detected virtualization kvm.
May 13 00:24:07.897122 systemd[1]: Detected architecture x86-64.
May 13 00:24:07.897130 systemd[1]: Running in initrd.
May 13 00:24:07.897138 systemd[1]: No hostname configured, using default hostname.
May 13 00:24:07.897148 systemd[1]: Hostname set to .
May 13 00:24:07.897158 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:24:07.897166 systemd[1]: Queued start job for default target initrd.target.
May 13 00:24:07.897175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:24:07.897183 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:24:07.897191 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 00:24:07.897200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:24:07.897208 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 00:24:07.897217 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 00:24:07.897229 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 00:24:07.897237 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 00:24:07.897254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:24:07.897263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:24:07.897271 systemd[1]: Reached target paths.target - Path Units. May 13 00:24:07.897279 systemd[1]: Reached target slices.target - Slice Units. May 13 00:24:07.897287 systemd[1]: Reached target swap.target - Swaps. May 13 00:24:07.897297 systemd[1]: Reached target timers.target - Timer Units. May 13 00:24:07.897306 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:24:07.897316 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:24:07.897324 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 00:24:07.897332 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 13 00:24:07.897340 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:24:07.897348 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:24:07.897356 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 13 00:24:07.897364 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:24:07.897375 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 00:24:07.897383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:24:07.897391 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 00:24:07.897399 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:24:07.897407 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:24:07.897415 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:24:07.897423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:24:07.897431 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 00:24:07.897442 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:24:07.897450 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:24:07.897477 systemd-journald[192]: Collecting audit messages is disabled. May 13 00:24:07.897498 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:24:07.897507 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:24:07.897515 systemd-journald[192]: Journal started May 13 00:24:07.897535 systemd-journald[192]: Runtime Journal (/run/log/journal/00d9efad80874c9082e5934a936f6605) is 6.0M, max 48.3M, 42.2M free. May 13 00:24:07.890923 systemd-modules-load[193]: Inserted module 'overlay' May 13 00:24:07.900563 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:24:07.909717 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:24:07.913116 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 13 00:24:07.916592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:24:07.921082 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:24:07.919199 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:24:07.924561 kernel: Bridge firewalling registered
May 13 00:24:07.924603 systemd-modules-load[193]: Inserted module 'br_netfilter'
May 13 00:24:07.925520 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:24:07.927875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:24:07.931503 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:24:07.937939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:24:07.940949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:24:07.942878 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:24:07.947158 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:24:07.949393 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:24:07.962794 dracut-cmdline[229]: dracut-dracut-053
May 13 00:24:07.965112 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:24:07.980875 systemd-resolved[227]: Positive Trust Anchors:
May 13 00:24:07.980895 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:24:07.980935 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:24:07.984041 systemd-resolved[227]: Defaulting to hostname 'linux'.
May 13 00:24:07.985281 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:24:07.991906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:24:08.036578 kernel: SCSI subsystem initialized
May 13 00:24:08.046572 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:24:08.056571 kernel: iscsi: registered transport (tcp)
May 13 00:24:08.076757 kernel: iscsi: registered transport (qla4xxx)
May 13 00:24:08.076777 kernel: QLogic iSCSI HBA Driver
May 13 00:24:08.118198 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:24:08.127682 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:24:08.154606 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:24:08.154639 kernel: device-mapper: uevent: version 1.0.3
May 13 00:24:08.155803 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:24:08.195577 kernel: raid6: avx2x4 gen() 30133 MB/s
May 13 00:24:08.212571 kernel: raid6: avx2x2 gen() 31233 MB/s
May 13 00:24:08.229659 kernel: raid6: avx2x1 gen() 25921 MB/s
May 13 00:24:08.229678 kernel: raid6: using algorithm avx2x2 gen() 31233 MB/s
May 13 00:24:08.247669 kernel: raid6: .... xor() 19902 MB/s, rmw enabled
May 13 00:24:08.247694 kernel: raid6: using avx2x2 recovery algorithm
May 13 00:24:08.267573 kernel: xor: automatically using best checksumming function   avx
May 13 00:24:08.418580 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:24:08.429456 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:24:08.441835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:24:08.453857 systemd-udevd[412]: Using default interface naming scheme 'v255'.
May 13 00:24:08.458649 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:24:08.471715 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:24:08.483793 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
May 13 00:24:08.512029 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:24:08.519725 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:24:08.580326 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:24:08.591721 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:24:08.601029 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:24:08.602869 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
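The raid6 lines above show the kernel benchmarking each available syndrome-generation routine at boot and keeping the fastest one. The selection step amounts to a max over the measured throughputs; a toy re-creation in Python, using only the MB/s figures reported in the log:

```python
# Throughput figures (MB/s) from the "raid6: ... gen()" benchmark lines above.
results = {"avx2x4": 30133, "avx2x2": 31233, "avx2x1": 25921}

# The kernel keeps whichever implementation generated syndromes fastest.
best = max(results, key=results.get)
print(best, results[best])  # avx2x2 31233, matching "raid6: using algorithm avx2x2"
```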
May 13 00:24:08.604755 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:24:08.609398 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:24:08.614611 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 13 00:24:08.617666 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:24:08.619793 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:24:08.631428 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:24:08.631472 kernel: GPT:9289727 != 19775487
May 13 00:24:08.631486 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:24:08.631499 kernel: GPT:9289727 != 19775487
May 13 00:24:08.631511 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:24:08.631524 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:24:08.634330 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:24:08.638634 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:24:08.650115 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 00:24:08.650196 kernel: AES CTR mode by8 optimization enabled
May 13 00:24:08.660080 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:24:08.662378 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:24:08.666669 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:24:08.675882 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (455)
May 13 00:24:08.675908 kernel: BTRFS: device fsid b9c18834-b687-45d3-9868-9ac29dc7ddd7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (462)
May 13 00:24:08.675923 kernel: libata version 3.00 loaded.
May 13 00:24:08.671644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
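The GPT complaints above ("9289727 != 19775487") mean the backup GPT header is not at the disk's last LBA, which is what happens when a disk image is enlarged after partitioning; disk-uuid.service later rewrites the headers ("Secondary Header is updated."). The arithmetic, using only figures from the log, is a quick sanity check:

```python
SECTOR = 512
total_sectors = 19775488       # "[vda] 19775488 512-byte logical blocks" in the log
backup_header_lba = 9289727    # where the primary header says the backup header sits

# GPT keeps its backup header at the very last LBA of the disk.
expected_lba = total_sectors - 1
print(expected_lba)                                 # 19775487, the value the kernel expected
print((expected_lba - backup_header_lba) * SECTOR)  # 5368709120 bytes: the image grew by 5 GiB
```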
May 13 00:24:08.671884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:24:08.674680 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:24:08.685988 kernel: ahci 0000:00:1f.2: version 3.0
May 13 00:24:08.686202 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 00:24:08.687860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:24:08.692338 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 00:24:08.692528 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 00:24:08.692736 kernel: scsi host0: ahci
May 13 00:24:08.692931 kernel: scsi host1: ahci
May 13 00:24:08.693120 kernel: scsi host2: ahci
May 13 00:24:08.694577 kernel: scsi host3: ahci
May 13 00:24:08.697851 kernel: scsi host4: ahci
May 13 00:24:08.698074 kernel: scsi host5: ahci
May 13 00:24:08.700765 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 13 00:24:08.700786 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 13 00:24:08.700796 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 13 00:24:08.701624 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 13 00:24:08.703360 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 13 00:24:08.703386 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 13 00:24:08.707389 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:24:08.710561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:24:08.718735 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:24:08.732447 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:24:08.733698 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:24:08.741779 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:24:08.751666 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:24:08.753434 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:24:08.761385 disk-uuid[565]: Primary Header is updated.
May 13 00:24:08.761385 disk-uuid[565]: Secondary Entries is updated.
May 13 00:24:08.761385 disk-uuid[565]: Secondary Header is updated.
May 13 00:24:08.765581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:24:08.770576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:24:08.774101 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:24:09.016406 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 00:24:09.016478 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 00:24:09.016502 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 00:24:09.016512 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 00:24:09.017569 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 00:24:09.018577 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 00:24:09.019587 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 00:24:09.019610 kernel: ata3.00: applying bridge limits
May 13 00:24:09.020573 kernel: ata3.00: configured for UDMA/100
May 13 00:24:09.022577 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 00:24:09.066120 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 00:24:09.066367 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 00:24:09.078574 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 00:24:09.772434 disk-uuid[567]: The operation has completed successfully.
May 13 00:24:09.774025 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:24:09.806334 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:24:09.806465 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:24:09.833873 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:24:09.838165 sh[591]: Success
May 13 00:24:09.851574 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 00:24:09.887788 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:24:09.900101 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:24:09.903381 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:24:09.917284 kernel: BTRFS info (device dm-0): first mount of filesystem b9c18834-b687-45d3-9868-9ac29dc7ddd7
May 13 00:24:09.917342 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 00:24:09.917353 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:24:09.918325 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:24:09.919698 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:24:09.923781 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:24:09.924891 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:24:09.934725 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:24:09.937989 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:24:09.946611 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:24:09.946649 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:24:09.946663 kernel: BTRFS info (device vda6): using free space tree
May 13 00:24:09.949580 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:24:09.958281 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:24:09.960066 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:24:09.970711 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:24:09.979723 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:24:10.027896 ignition[682]: Ignition 2.19.0
May 13 00:24:10.027911 ignition[682]: Stage: fetch-offline
May 13 00:24:10.027945 ignition[682]: no configs at "/usr/lib/ignition/base.d"
May 13 00:24:10.027955 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:24:10.028035 ignition[682]: parsed url from cmdline: ""
May 13 00:24:10.028039 ignition[682]: no config URL provided
May 13 00:24:10.028044 ignition[682]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:24:10.028053 ignition[682]: no config at "/usr/lib/ignition/user.ign"
May 13 00:24:10.028080 ignition[682]: op(1): [started] loading QEMU firmware config module
May 13 00:24:10.028085 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:24:10.039724 ignition[682]: op(1): [finished] loading QEMU firmware config module
May 13 00:24:10.039747 ignition[682]: QEMU firmware config was not found. Ignoring...
May 13 00:24:10.049481 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:24:10.056344 ignition[682]: parsing config with SHA512: c534d51b27d0658da8f69ae810298f50f110d23fa624b58ff02b5097a389df59e08e2028284cf24c6a8bb0eab4b991700806931f216c785861f8f7ce29fd7d9c
May 13 00:24:10.059735 unknown[682]: fetched base config from "system"
May 13 00:24:10.059748 unknown[682]: fetched user config from "qemu"
May 13 00:24:10.060106 ignition[682]: fetch-offline: fetch-offline passed
May 13 00:24:10.060163 ignition[682]: Ignition finished successfully
May 13 00:24:10.061767 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:24:10.066734 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:24:10.084356 systemd-networkd[780]: lo: Link UP
May 13 00:24:10.084367 systemd-networkd[780]: lo: Gained carrier
May 13 00:24:10.086362 systemd-networkd[780]: Enumeration completed
May 13 00:24:10.087005 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:24:10.087010 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:24:10.088244 systemd-networkd[780]: eth0: Link UP
May 13 00:24:10.088248 systemd-networkd[780]: eth0: Gained carrier
May 13 00:24:10.088257 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:24:10.089768 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:24:10.093706 systemd[1]: Reached target network.target - Network.
May 13 00:24:10.097914 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:24:10.106623 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:24:10.109688 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:24:10.125625 ignition[784]: Ignition 2.19.0
May 13 00:24:10.125637 ignition[784]: Stage: kargs
May 13 00:24:10.125847 ignition[784]: no configs at "/usr/lib/ignition/base.d"
May 13 00:24:10.125861 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:24:10.126947 ignition[784]: kargs: kargs passed
May 13 00:24:10.126999 ignition[784]: Ignition finished successfully
May 13 00:24:10.133289 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:24:10.144805 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
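The DHCPv4 lease above (10.0.0.62/16, gateway 10.0.0.1) can be sanity-checked with the Python standard library's ipaddress module, using only the numbers from the log:

```python
import ipaddress

# DHCPv4 lease as reported by systemd-networkd in the log above.
iface = ipaddress.ip_interface("10.0.0.62/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: the gateway is on-link for this lease
```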
May 13 00:24:10.156903 ignition[793]: Ignition 2.19.0
May 13 00:24:10.156914 ignition[793]: Stage: disks
May 13 00:24:10.157077 ignition[793]: no configs at "/usr/lib/ignition/base.d"
May 13 00:24:10.157093 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:24:10.157908 ignition[793]: disks: disks passed
May 13 00:24:10.157952 ignition[793]: Ignition finished successfully
May 13 00:24:10.163603 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:24:10.164138 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:24:10.166078 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:24:10.166422 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:24:10.166927 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:24:10.167268 systemd[1]: Reached target basic.target - Basic System.
May 13 00:24:10.184749 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:24:10.199130 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:24:10.205603 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:24:10.215650 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:24:10.304575 kernel: EXT4-fs (vda9): mounted filesystem 422ad498-4f61-405b-9d71-25f19459d196 r/w with ordered data mode. Quota mode: none.
May 13 00:24:10.305484 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:24:10.306504 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:24:10.317629 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:24:10.319250 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:24:10.320808 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:24:10.320846 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:24:10.333214 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811)
May 13 00:24:10.333239 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:24:10.333250 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:24:10.333260 kernel: BTRFS info (device vda6): using free space tree
May 13 00:24:10.320866 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:24:10.336566 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:24:10.327414 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:24:10.334021 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:24:10.338337 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:24:10.376222 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:24:10.379878 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
May 13 00:24:10.385147 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:24:10.393792 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:24:10.673056 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:24:10.697695 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:24:10.704209 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:24:10.731335 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:24:10.832388 ignition[926]: INFO : Ignition 2.19.0
May 13 00:24:10.837635 ignition[926]: INFO : Stage: mount
May 13 00:24:10.837635 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:24:10.837635 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:24:10.837635 ignition[926]: INFO : mount: mount passed
May 13 00:24:10.837635 ignition[926]: INFO : Ignition finished successfully
May 13 00:24:10.844564 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:24:10.862111 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:24:10.870557 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:24:10.917165 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:24:10.942067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:24:10.963497 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939)
May 13 00:24:10.966384 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:24:10.966425 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:24:10.966440 kernel: BTRFS info (device vda6): using free space tree
May 13 00:24:10.970582 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:24:10.972817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:24:11.023586 ignition[956]: INFO : Ignition 2.19.0
May 13 00:24:11.023586 ignition[956]: INFO : Stage: files
May 13 00:24:11.027842 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:24:11.027842 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:24:11.027842 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:24:11.032490 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:24:11.032490 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:24:11.045786 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:24:11.047574 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:24:11.058251 unknown[956]: wrote ssh authorized keys file for user: core
May 13 00:24:11.064361 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:24:11.075212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:24:11.075212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:24:11.075212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 00:24:11.075212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 00:24:11.131150 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 00:24:11.289309 systemd-networkd[780]: eth0: Gained IPv6LL
May 13 00:24:11.510408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 00:24:11.510408 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:24:11.520010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 00:24:12.017607 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 00:24:12.580523 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:24:12.580523 ignition[956]: INFO : files: op(c): [started] processing unit "containerd.service"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 13 00:24:12.584865 ignition[956]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:24:12.626951 ignition[956]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:24:12.631624 ignition[956]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:24:12.633389 ignition[956]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:24:12.633389 ignition[956]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:24:12.633389 ignition[956]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:24:12.633389 ignition[956]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:24:12.633389 ignition[956]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:24:12.633389 ignition[956]: INFO : files: files passed
May 13 00:24:12.633389 ignition[956]: INFO : Ignition finished successfully
May 13 00:24:12.645177 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 00:24:12.664856 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 00:24:12.666926 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 00:24:12.668862 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:24:12.668975 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:24:12.676914 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:24:12.680470 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:24:12.682200 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:24:12.685227 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:24:12.682398 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:24:12.686010 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:24:12.698708 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:24:12.723862 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:24:12.724033 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:24:12.726421 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:24:12.728492 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:24:12.730510 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:24:12.731300 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:24:12.749688 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:24:12.758699 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:24:12.768699 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:24:12.769977 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:24:12.772211 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:24:12.774249 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:24:12.774379 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:24:12.776570 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:24:12.778286 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:24:12.780311 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:24:12.782398 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:24:12.784471 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:24:12.786611 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:24:12.788728 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:24:12.791529 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:24:12.793475 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:24:12.796075 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:24:12.797893 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:24:12.798040 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:24:12.800253 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:24:12.801959 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:24:12.804354 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 00:24:12.804506 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:24:12.806792 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:24:12.806899 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 00:24:12.809325 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:24:12.809436 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:24:12.811603 systemd[1]: Stopped target paths.target - Path Units.
May 13 00:24:12.813378 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:24:12.817670 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:24:12.819070 systemd[1]: Stopped target slices.target - Slice Units.
May 13 00:24:12.821134 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 00:24:12.823123 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:24:12.823225 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:24:12.825175 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:24:12.825304 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:24:12.827648 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:24:12.827802 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:24:12.829883 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:24:12.829984 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 00:24:12.838712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 00:24:12.840402 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 00:24:12.841733 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:24:12.841848 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:24:12.844458 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:24:12.844669 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:24:12.850040 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:24:12.851621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 00:24:12.856012 ignition[1010]: INFO : Ignition 2.19.0
May 13 00:24:12.856012 ignition[1010]: INFO : Stage: umount
May 13 00:24:12.856012 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:24:12.856012 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:24:12.856012 ignition[1010]: INFO : umount: umount passed
May 13 00:24:12.856012 ignition[1010]: INFO : Ignition finished successfully
May 13 00:24:12.859700 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:24:12.859834 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 00:24:12.861798 systemd[1]: Stopped target network.target - Network.
May 13 00:24:12.863485 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:24:12.863539 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 00:24:12.865734 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:24:12.865793 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 00:24:12.868077 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:24:12.868144 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 00:24:12.870378 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 00:24:12.870437 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 00:24:12.872724 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 00:24:12.875078 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 00:24:12.876619 systemd-networkd[780]: eth0: DHCPv6 lease lost
May 13 00:24:12.878374 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:24:12.878932 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:24:12.879061 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 00:24:12.881194 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:24:12.881257 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:24:12.890680 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 00:24:12.892725 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:24:12.892803 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:24:12.895369 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:24:12.898213 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:24:12.898363 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 00:24:12.904422 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:24:12.904539 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 00:24:12.909680 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:24:12.909740 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 00:24:12.916053 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 00:24:12.916132 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:24:12.921375 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:24:12.921503 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 00:24:12.964741 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:24:12.964929 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:24:12.965488 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:24:12.965536 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 00:24:12.968800 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:24:12.968841 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:24:12.983263 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:24:12.983314 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:24:12.985828 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:24:12.985876 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 00:24:12.986532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:24:12.986590 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:24:13.002709 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 00:24:13.003223 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 00:24:13.003297 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:24:13.005943 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 00:24:13.006004 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:24:13.008424 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:24:13.008488 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:24:13.011399 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:24:13.011458 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:24:13.023226 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:24:13.023354 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 00:24:13.207111 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:24:13.207247 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 00:24:13.210234 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 00:24:13.212318 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:24:13.212371 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 00:24:13.222676 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 00:24:13.231244 systemd[1]: Switching root.
May 13 00:24:13.267324 systemd-journald[192]: Journal stopped
May 13 00:24:15.135412 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
May 13 00:24:15.135481 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:24:15.135495 kernel: SELinux: policy capability open_perms=1
May 13 00:24:15.135507 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:24:15.135522 kernel: SELinux: policy capability always_check_network=0
May 13 00:24:15.135539 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:24:15.135597 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:24:15.135609 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:24:15.135620 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:24:15.135631 kernel: audit: type=1403 audit(1747095854.064:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 00:24:15.135644 systemd[1]: Successfully loaded SELinux policy in 39.897ms.
May 13 00:24:15.135696 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.380ms.
May 13 00:24:15.135710 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:24:15.135723 systemd[1]: Detected virtualization kvm.
May 13 00:24:15.135737 systemd[1]: Detected architecture x86-64.
May 13 00:24:15.135749 systemd[1]: Detected first boot.
May 13 00:24:15.135762 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:24:15.135775 zram_generator::config[1074]: No configuration found.
May 13 00:24:15.135788 systemd[1]: Populated /etc with preset unit settings.
May 13 00:24:15.135799 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:24:15.135811 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 00:24:15.135824 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 00:24:15.135838 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 00:24:15.135850 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 00:24:15.135862 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 00:24:15.135874 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 00:24:15.135887 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 00:24:15.135903 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 00:24:15.135915 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 00:24:15.135927 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:24:15.135939 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:24:15.135953 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 00:24:15.135966 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 00:24:15.135978 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 00:24:15.135990 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:24:15.136003 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 00:24:15.136014 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:24:15.136028 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 00:24:15.136040 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:24:15.136061 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:24:15.136077 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:24:15.136089 systemd[1]: Reached target swap.target - Swaps.
May 13 00:24:15.136101 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 00:24:15.136113 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 00:24:15.136125 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:24:15.136137 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:24:15.136149 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:24:15.136160 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:24:15.136174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:24:15.136186 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 00:24:15.136198 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 00:24:15.136210 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 00:24:15.136222 systemd[1]: Mounting media.mount - External Media Directory...
May 13 00:24:15.136234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:24:15.136246 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 00:24:15.136258 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 00:24:15.136269 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 00:24:15.136284 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 00:24:15.136297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:24:15.136309 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:24:15.136321 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 00:24:15.136333 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:24:15.136345 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:24:15.136357 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:24:15.136369 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 00:24:15.136380 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:24:15.136395 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 00:24:15.136408 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 13 00:24:15.136420 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 13 00:24:15.136432 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:24:15.136656 systemd-journald[1148]: Collecting audit messages is disabled.
May 13 00:24:15.136703 kernel: loop: module loaded
May 13 00:24:15.136719 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:24:15.136731 kernel: fuse: init (API version 7.39)
May 13 00:24:15.136743 systemd-journald[1148]: Journal started
May 13 00:24:15.136765 systemd-journald[1148]: Runtime Journal (/run/log/journal/00d9efad80874c9082e5934a936f6605) is 6.0M, max 48.3M, 42.2M free.
May 13 00:24:15.141611 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 00:24:15.156538 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 00:24:15.166459 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:24:15.166533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:24:15.172762 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:24:15.172835 kernel: ACPI: bus type drm_connector registered
May 13 00:24:15.176240 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 00:24:15.177526 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 00:24:15.179109 systemd[1]: Mounted media.mount - External Media Directory.
May 13 00:24:15.180392 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 00:24:15.181703 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 00:24:15.222393 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 00:24:15.223897 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:24:15.225479 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 00:24:15.225715 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 00:24:15.227224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:24:15.227432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:24:15.228951 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:24:15.229177 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:24:15.230582 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:24:15.230799 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:24:15.232502 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 00:24:15.232715 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 00:24:15.290590 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:24:15.290900 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:24:15.292587 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:24:15.294226 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 00:24:15.296025 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 00:24:15.298232 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:24:15.312238 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 00:24:15.364614 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 00:24:15.366849 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 00:24:15.367971 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 00:24:15.369539 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 00:24:15.373577 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 00:24:15.374899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:24:15.377766 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 00:24:15.379790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:24:15.380872 systemd-journald[1148]: Time spent on flushing to /var/log/journal/00d9efad80874c9082e5934a936f6605 is 14.681ms for 979 entries.
May 13 00:24:15.380872 systemd-journald[1148]: System Journal (/var/log/journal/00d9efad80874c9082e5934a936f6605) is 8.0M, max 195.6M, 187.6M free.
May 13 00:24:15.843242 systemd-journald[1148]: Received client request to flush runtime journal.
May 13 00:24:15.384398 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:24:15.428242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:24:15.430906 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 00:24:15.433665 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 00:24:15.435088 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 00:24:15.445714 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 00:24:15.573709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:24:15.575202 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
May 13 00:24:15.575215 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
May 13 00:24:15.581852 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:24:15.840579 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 00:24:15.842586 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 00:24:15.846154 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 00:24:15.850085 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 00:24:15.867766 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 00:24:15.896266 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 00:24:15.909680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:24:15.926893 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
May 13 00:24:15.926915 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
May 13 00:24:15.932967 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:24:16.305488 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 00:24:16.318857 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:24:16.352256 systemd-udevd[1235]: Using default interface naming scheme 'v255'.
May 13 00:24:16.371838 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:24:16.381801 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:24:16.396776 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 00:24:16.414025 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
May 13 00:24:16.418924 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1249)
May 13 00:24:16.443911 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 00:24:16.486408 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 13 00:24:16.486089 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:24:16.495570 kernel: ACPI: button: Power Button [PWRF]
May 13 00:24:16.523755 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 13 00:24:16.548984 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 13 00:24:16.549164 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 13 00:24:16.549346 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 13 00:24:16.558608 systemd-networkd[1240]: lo: Link UP
May 13 00:24:16.559132 systemd-networkd[1240]: lo: Gained carrier
May 13 00:24:16.568183 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 13 00:24:16.566138 systemd-networkd[1240]: Enumeration completed
May 13 00:24:16.566532 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:24:16.566537 systemd-networkd[1240]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:24:16.567278 systemd-networkd[1240]: eth0: Link UP
May 13 00:24:16.567282 systemd-networkd[1240]: eth0: Gained carrier
May 13 00:24:16.567293 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:24:16.569807 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:24:16.574165 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:24:16.582631 systemd-networkd[1240]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:24:16.583802 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 00:24:16.627866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:24:16.628774 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:24:16.633295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:24:16.637576 kernel: mousedev: PS/2 mouse device common for all mice
May 13 00:24:16.650935 kernel: kvm_amd: TSC scaling supported
May 13 00:24:16.651039 kernel: kvm_amd: Nested Virtualization enabled
May 13 00:24:16.651059 kernel: kvm_amd: Nested Paging enabled
May 13 00:24:16.651580 kernel: kvm_amd: LBR virtualization supported
May 13 00:24:16.653050 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 13 00:24:16.653155 kernel: kvm_amd: Virtual GIF supported
May 13 00:24:16.672601 kernel: EDAC MC: Ver: 3.0.0
May 13 00:24:16.697913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:24:16.716807 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 00:24:16.730705 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 00:24:16.740392 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:24:16.772343 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 00:24:16.774274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:24:16.782731 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 00:24:16.789200 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:24:16.821076 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 00:24:16.823049 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:24:16.824598 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 00:24:16.824629 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:24:16.825911 systemd[1]: Reached target machines.target - Containers.
May 13 00:24:16.828188 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 13 00:24:16.840724 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 00:24:16.843675 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 00:24:16.845083 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:24:16.846070 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 00:24:16.850437 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 13 00:24:16.854306 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 00:24:16.857775 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:24:16.869569 kernel: loop0: detected capacity change from 0 to 142488 May 13 00:24:16.873089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:24:16.885297 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:24:16.886138 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 00:24:16.898592 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:24:16.923579 kernel: loop1: detected capacity change from 0 to 140768 May 13 00:24:16.960603 kernel: loop2: detected capacity change from 0 to 210664 May 13 00:24:16.996596 kernel: loop3: detected capacity change from 0 to 142488 May 13 00:24:17.008612 kernel: loop4: detected capacity change from 0 to 140768 May 13 00:24:17.018583 kernel: loop5: detected capacity change from 0 to 210664 May 13 00:24:17.023960 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 00:24:17.024579 (sd-merge)[1310]: Merged extensions into '/usr'. May 13 00:24:17.028591 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:24:17.028710 systemd[1]: Reloading... May 13 00:24:17.083579 zram_generator::config[1337]: No configuration found. May 13 00:24:17.114730 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:24:17.209440 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:24:17.273390 systemd[1]: Reloading finished in 244 ms. May 13 00:24:17.294610 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 13 00:24:17.296190 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:24:17.314681 systemd[1]: Starting ensure-sysext.service... May 13 00:24:17.316708 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:24:17.320254 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... May 13 00:24:17.320271 systemd[1]: Reloading... May 13 00:24:17.339918 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:24:17.340300 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:24:17.341291 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:24:17.341602 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. May 13 00:24:17.341683 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. May 13 00:24:17.345279 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:24:17.345292 systemd-tmpfiles[1383]: Skipping /boot May 13 00:24:17.359304 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:24:17.359404 systemd-tmpfiles[1383]: Skipping /boot May 13 00:24:17.376596 zram_generator::config[1417]: No configuration found. May 13 00:24:17.488617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:24:17.556862 systemd[1]: Reloading finished in 236 ms. May 13 00:24:17.576809 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:24:17.591937 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
May 13 00:24:17.594969 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:24:17.597751 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:24:17.601683 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:24:17.605642 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:24:17.614003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:24:17.614361 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:24:17.620242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:24:17.624223 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:24:17.628754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:24:17.632350 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:24:17.634153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:24:17.635019 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:24:17.636117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:24:17.636331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:24:17.638716 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:24:17.638987 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:24:17.640639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 13 00:24:17.640890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:24:17.643026 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:24:17.643256 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:24:17.646063 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:24:17.647385 augenrules[1485]: No rules May 13 00:24:17.648003 systemd[1]: Finished ensure-sysext.service. May 13 00:24:17.649362 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:24:17.660819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:24:17.661017 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:24:17.669638 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:24:17.672883 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:24:17.674887 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:24:17.678693 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:24:17.683008 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:24:17.689613 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:24:17.741556 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:24:18.318049 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
May 13 00:24:18.318096 systemd-timesyncd[1501]: Initial clock synchronization to Tue 2025-05-13 00:24:18.317964 UTC. May 13 00:24:18.318468 systemd-resolved[1460]: Positive Trust Anchors: May 13 00:24:18.318484 systemd-resolved[1460]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:24:18.318516 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:24:18.318942 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:24:18.322840 systemd-resolved[1460]: Defaulting to hostname 'linux'. May 13 00:24:18.324769 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:24:18.326130 systemd[1]: Reached target network.target - Network. May 13 00:24:18.327066 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:24:18.328254 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:24:18.329556 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:24:18.330850 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:24:18.332382 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:24:18.333624 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
May 13 00:24:18.334914 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:24:18.336218 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:24:18.336250 systemd[1]: Reached target paths.target - Path Units. May 13 00:24:18.337199 systemd[1]: Reached target timers.target - Timer Units. May 13 00:24:18.338736 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:24:18.341938 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:24:18.344609 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:24:18.352232 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:24:18.353369 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:24:18.354341 systemd[1]: Reached target basic.target - Basic System. May 13 00:24:18.355469 systemd[1]: System is tainted: cgroupsv1 May 13 00:24:18.355509 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:24:18.355532 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:24:18.356896 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:24:18.359146 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:24:18.361167 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:24:18.366048 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:24:18.367280 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:24:18.368570 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
May 13 00:24:18.370903 jq[1514]: false May 13 00:24:18.375018 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 00:24:18.379834 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:24:18.385791 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:24:18.395096 extend-filesystems[1515]: Found loop3 May 13 00:24:18.395096 extend-filesystems[1515]: Found loop4 May 13 00:24:18.395096 extend-filesystems[1515]: Found loop5 May 13 00:24:18.395096 extend-filesystems[1515]: Found sr0 May 13 00:24:18.395096 extend-filesystems[1515]: Found vda May 13 00:24:18.395096 extend-filesystems[1515]: Found vda1 May 13 00:24:18.395096 extend-filesystems[1515]: Found vda2 May 13 00:24:18.395096 extend-filesystems[1515]: Found vda3 May 13 00:24:18.395096 extend-filesystems[1515]: Found usr May 13 00:24:18.395096 extend-filesystems[1515]: Found vda4 May 13 00:24:18.395096 extend-filesystems[1515]: Found vda6 May 13 00:24:18.395096 extend-filesystems[1515]: Found vda7 May 13 00:24:18.395096 extend-filesystems[1515]: Found vda9 May 13 00:24:18.395096 extend-filesystems[1515]: Checking size of /dev/vda9 May 13 00:24:18.429727 extend-filesystems[1515]: Resized partition /dev/vda9 May 13 00:24:18.435703 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:24:18.395430 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:24:18.398566 dbus-daemon[1513]: [system] SELinux support is enabled May 13 00:24:18.438125 extend-filesystems[1544]: resize2fs 1.47.1 (20-May-2024) May 13 00:24:18.454484 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1237) May 13 00:24:18.397347 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:24:18.401067 systemd[1]: Starting update-engine.service - Update Engine... 
May 13 00:24:18.406513 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:24:18.454859 jq[1536]: true May 13 00:24:18.410609 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:24:18.414622 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:24:18.414956 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:24:18.415284 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:24:18.455423 jq[1545]: true May 13 00:24:18.415581 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:24:18.419405 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:24:18.419702 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:24:18.445999 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:24:18.446023 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:24:18.460557 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:24:18.460584 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 13 00:24:18.461577 update_engine[1533]: I20250513 00:24:18.461475 1533 main.cc:92] Flatcar Update Engine starting May 13 00:24:18.464972 update_engine[1533]: I20250513 00:24:18.464934 1533 update_check_scheduler.cc:74] Next update check in 10m36s May 13 00:24:18.474900 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:24:18.478117 tar[1543]: linux-amd64/helm May 13 00:24:18.478634 (ntainerd)[1552]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:24:18.489345 systemd[1]: Started update-engine.service - Update Engine. May 13 00:24:18.491849 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:24:18.500149 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:24:18.502516 systemd-logind[1531]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:24:18.503470 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:24:18.505474 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:24:18.505474 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:24:18.505474 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:24:18.505360 systemd-logind[1531]: New seat seat0. May 13 00:24:18.517057 extend-filesystems[1515]: Resized filesystem in /dev/vda9 May 13 00:24:18.506007 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:24:18.506389 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:24:18.510618 systemd[1]: Started systemd-logind.service - User Login Management. 
May 13 00:24:18.524842 bash[1574]: Updated "/home/core/.ssh/authorized_keys" May 13 00:24:18.526995 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:24:18.534148 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 00:24:18.539817 locksmithd[1573]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:24:18.584044 systemd-networkd[1240]: eth0: Gained IPv6LL May 13 00:24:18.586659 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:24:18.590099 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:24:18.602520 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:24:18.609120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:24:18.618592 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:24:18.653554 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:24:18.653871 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:24:18.656612 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:24:18.668189 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:24:18.703173 sshd_keygen[1542]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:24:18.712377 containerd[1552]: time="2025-05-13T00:24:18.712286812Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 00:24:18.737179 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:24:18.741153 containerd[1552]: time="2025-05-13T00:24:18.740966828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 13 00:24:18.742611 containerd[1552]: time="2025-05-13T00:24:18.742585424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:24:18.742668 containerd[1552]: time="2025-05-13T00:24:18.742656788Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:24:18.742737 containerd[1552]: time="2025-05-13T00:24:18.742725467Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:24:18.742987 containerd[1552]: time="2025-05-13T00:24:18.742970947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:24:18.743899 containerd[1552]: time="2025-05-13T00:24:18.743032152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 00:24:18.743899 containerd[1552]: time="2025-05-13T00:24:18.743102764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:24:18.743899 containerd[1552]: time="2025-05-13T00:24:18.743116250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:24:18.743899 containerd[1552]: time="2025-05-13T00:24:18.743370166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:24:18.743899 containerd[1552]: time="2025-05-13T00:24:18.743383471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:24:18.743899 containerd[1552]: time="2025-05-13T00:24:18.743397868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:24:18.743899 containerd[1552]: time="2025-05-13T00:24:18.743407316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:24:18.743899 containerd[1552]: time="2025-05-13T00:24:18.743499489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:24:18.743899 containerd[1552]: time="2025-05-13T00:24:18.743746041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:24:18.744199 containerd[1552]: time="2025-05-13T00:24:18.744180676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:24:18.744244 containerd[1552]: time="2025-05-13T00:24:18.744233716Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:24:18.744396 containerd[1552]: time="2025-05-13T00:24:18.744382705Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 13 00:24:18.744493 containerd[1552]: time="2025-05-13T00:24:18.744481390Z" level=info msg="metadata content store policy set" policy=shared May 13 00:24:18.744728 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:24:18.752703 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:24:18.753022 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:24:18.755811 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:24:18.847941 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:24:18.871182 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:24:18.873547 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 00:24:18.874816 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:24:18.911189 tar[1543]: linux-amd64/LICENSE May 13 00:24:18.911300 tar[1543]: linux-amd64/README.md May 13 00:24:18.924300 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 00:24:19.074245 containerd[1552]: time="2025-05-13T00:24:19.074138002Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:24:19.074245 containerd[1552]: time="2025-05-13T00:24:19.074245313Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:24:19.074434 containerd[1552]: time="2025-05-13T00:24:19.074266242Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 00:24:19.074434 containerd[1552]: time="2025-05-13T00:24:19.074298673Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:24:19.074434 containerd[1552]: time="2025-05-13T00:24:19.074319472Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 13 00:24:19.074613 containerd[1552]: time="2025-05-13T00:24:19.074572026Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:24:19.075221 containerd[1552]: time="2025-05-13T00:24:19.075156292Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:24:19.075421 containerd[1552]: time="2025-05-13T00:24:19.075396472Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:24:19.075460 containerd[1552]: time="2025-05-13T00:24:19.075422802Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:24:19.075460 containerd[1552]: time="2025-05-13T00:24:19.075441166Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:24:19.075511 containerd[1552]: time="2025-05-13T00:24:19.075459972Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:24:19.075511 containerd[1552]: time="2025-05-13T00:24:19.075489707Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:24:19.075511 containerd[1552]: time="2025-05-13T00:24:19.075507200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:24:19.075585 containerd[1552]: time="2025-05-13T00:24:19.075527378Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:24:19.075585 containerd[1552]: time="2025-05-13T00:24:19.075546965Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 13 00:24:19.075585 containerd[1552]: time="2025-05-13T00:24:19.075564578Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:24:19.075585 containerd[1552]: time="2025-05-13T00:24:19.075580948Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:24:19.075687 containerd[1552]: time="2025-05-13T00:24:19.075598071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:24:19.075687 containerd[1552]: time="2025-05-13T00:24:19.075624550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075687 containerd[1552]: time="2025-05-13T00:24:19.075642885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075687 containerd[1552]: time="2025-05-13T00:24:19.075659416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075687 containerd[1552]: time="2025-05-13T00:24:19.075675746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075820 containerd[1552]: time="2025-05-13T00:24:19.075691506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075820 containerd[1552]: time="2025-05-13T00:24:19.075709820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075820 containerd[1552]: time="2025-05-13T00:24:19.075725910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075820 containerd[1552]: time="2025-05-13T00:24:19.075741820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 13 00:24:19.075820 containerd[1552]: time="2025-05-13T00:24:19.075758391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075820 containerd[1552]: time="2025-05-13T00:24:19.075776245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075820 containerd[1552]: time="2025-05-13T00:24:19.075792846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075820 containerd[1552]: time="2025-05-13T00:24:19.075807764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:24:19.075820 containerd[1552]: time="2025-05-13T00:24:19.075824095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:24:19.076070 containerd[1552]: time="2025-05-13T00:24:19.075844192Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:24:19.076070 containerd[1552]: time="2025-05-13T00:24:19.075869339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:24:19.076070 containerd[1552]: time="2025-05-13T00:24:19.075901059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:24:19.076070 containerd[1552]: time="2025-05-13T00:24:19.075917099Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:24:19.076070 containerd[1552]: time="2025-05-13T00:24:19.075977412Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:24:19.076070 containerd[1552]: time="2025-05-13T00:24:19.075999524Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:24:19.076070 containerd[1552]: time="2025-05-13T00:24:19.076013550Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:24:19.076070 containerd[1552]: time="2025-05-13T00:24:19.076031283Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:24:19.076070 containerd[1552]: time="2025-05-13T00:24:19.076044548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:24:19.076070 containerd[1552]: time="2025-05-13T00:24:19.076061319Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 00:24:19.076327 containerd[1552]: time="2025-05-13T00:24:19.076088541Z" level=info msg="NRI interface is disabled by configuration." May 13 00:24:19.076327 containerd[1552]: time="2025-05-13T00:24:19.076113878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 00:24:19.076558 containerd[1552]: time="2025-05-13T00:24:19.076462221Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:24:19.076558 containerd[1552]: time="2025-05-13T00:24:19.076548173Z" level=info msg="Connect containerd service" May 13 00:24:19.076759 containerd[1552]: time="2025-05-13T00:24:19.076588799Z" level=info msg="using legacy CRI server" May 13 00:24:19.076759 containerd[1552]: time="2025-05-13T00:24:19.076598447Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:24:19.076759 containerd[1552]: time="2025-05-13T00:24:19.076699727Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:24:19.077372 containerd[1552]: time="2025-05-13T00:24:19.077342302Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:24:19.077682 containerd[1552]: time="2025-05-13T00:24:19.077512051Z" level=info msg="Start subscribing containerd event" May 13 00:24:19.077682 containerd[1552]: time="2025-05-13T00:24:19.077590197Z" level=info msg="Start recovering state" May 13 00:24:19.077968 containerd[1552]: time="2025-05-13T00:24:19.077936457Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc
May 13 00:24:19.078595 containerd[1552]: time="2025-05-13T00:24:19.078108359Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 00:24:19.078595 containerd[1552]: time="2025-05-13T00:24:19.078162310Z" level=info msg="Start event monitor"
May 13 00:24:19.078595 containerd[1552]: time="2025-05-13T00:24:19.078192026Z" level=info msg="Start snapshots syncer"
May 13 00:24:19.078595 containerd[1552]: time="2025-05-13T00:24:19.078208056Z" level=info msg="Start cni network conf syncer for default"
May 13 00:24:19.078595 containerd[1552]: time="2025-05-13T00:24:19.078229256Z" level=info msg="Start streaming server"
May 13 00:24:19.078595 containerd[1552]: time="2025-05-13T00:24:19.078312492Z" level=info msg="containerd successfully booted in 0.367006s"
May 13 00:24:19.078834 systemd[1]: Started containerd.service - containerd container runtime.
May 13 00:24:19.485085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:24:19.487042 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 00:24:19.489017 systemd[1]: Startup finished in 7.081s (kernel) + 4.886s (userspace) = 11.968s.
May 13 00:24:19.489791 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:24:19.926186 kubelet[1649]: E0513 00:24:19.926063 1649 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:24:19.930423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:24:19.930775 systemd[1]: kubelet.service: Failed with result 'exit-code'.
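The CNI error logged above ("no network config found in /etc/cni/net.d: cni plugin not initialized") clears once a network configuration file exists in that directory. As an illustration only (the plugin names refer to the standard CNI reference plugins assumed to be installed under /opt/cni/bin, the NetworkPluginBinDir from the CRI config above, and the subnet is made up, not recovered from this host), a minimal conflist the CRI plugin could load might be built like this:

```python
import json

# Illustrative sketch of /etc/cni/net.d/10-mynet.conflist. All values here are
# assumptions: "bridge", "host-local" and "loopback" are the standard CNI
# reference plugins, and the subnet/bridge name are invented for the example.
conflist = {
    "cniVersion": "0.4.0",
    "name": "mynet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.85.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "loopback"},
    ],
}

# Render as the JSON document that would be written into /etc/cni/net.d.
rendered = json.dumps(conflist, indent=2)
```

Note that NetworkPluginMaxConfNum:1 in the CRI config above means only the lexicographically first conflist in the directory would be used.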
May 13 00:24:21.583011 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 00:24:21.605134 systemd[1]: Started sshd@0-10.0.0.62:22-10.0.0.1:35338.service - OpenSSH per-connection server daemon (10.0.0.1:35338).
May 13 00:24:21.646786 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 35338 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:24:21.648825 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:24:21.657301 systemd-logind[1531]: New session 1 of user core.
May 13 00:24:21.658445 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 00:24:21.670129 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 00:24:21.681676 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 00:24:21.684219 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 00:24:21.692439 (systemd)[1670]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 00:24:21.797183 systemd[1670]: Queued start job for default target default.target.
May 13 00:24:21.797563 systemd[1670]: Created slice app.slice - User Application Slice.
May 13 00:24:21.797580 systemd[1670]: Reached target paths.target - Paths.
May 13 00:24:21.797593 systemd[1670]: Reached target timers.target - Timers.
May 13 00:24:21.809968 systemd[1670]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 00:24:21.817248 systemd[1670]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 00:24:21.817335 systemd[1670]: Reached target sockets.target - Sockets.
May 13 00:24:21.817354 systemd[1670]: Reached target basic.target - Basic System.
May 13 00:24:21.817403 systemd[1670]: Reached target default.target - Main User Target.
May 13 00:24:21.817447 systemd[1670]: Startup finished in 118ms.
May 13 00:24:21.817948 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 00:24:21.819355 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 00:24:21.881168 systemd[1]: Started sshd@1-10.0.0.62:22-10.0.0.1:35342.service - OpenSSH per-connection server daemon (10.0.0.1:35342).
May 13 00:24:21.912765 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 35342 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:24:21.914291 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:24:21.918430 systemd-logind[1531]: New session 2 of user core.
May 13 00:24:21.927132 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 00:24:21.980615 sshd[1682]: pam_unix(sshd:session): session closed for user core
May 13 00:24:21.993100 systemd[1]: Started sshd@2-10.0.0.62:22-10.0.0.1:35348.service - OpenSSH per-connection server daemon (10.0.0.1:35348).
May 13 00:24:21.993550 systemd[1]: sshd@1-10.0.0.62:22-10.0.0.1:35342.service: Deactivated successfully.
May 13 00:24:21.995782 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit.
May 13 00:24:21.996339 systemd[1]: session-2.scope: Deactivated successfully.
May 13 00:24:21.997864 systemd-logind[1531]: Removed session 2.
May 13 00:24:22.024960 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 35348 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:24:22.026319 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:24:22.029804 systemd-logind[1531]: New session 3 of user core.
May 13 00:24:22.039121 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 00:24:22.087530 sshd[1687]: pam_unix(sshd:session): session closed for user core
May 13 00:24:22.099098 systemd[1]: Started sshd@3-10.0.0.62:22-10.0.0.1:35362.service - OpenSSH per-connection server daemon (10.0.0.1:35362).
May 13 00:24:22.099548 systemd[1]: sshd@2-10.0.0.62:22-10.0.0.1:35348.service: Deactivated successfully.
May 13 00:24:22.101691 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit.
May 13 00:24:22.102306 systemd[1]: session-3.scope: Deactivated successfully.
May 13 00:24:22.103626 systemd-logind[1531]: Removed session 3.
May 13 00:24:22.130533 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 35362 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:24:22.132141 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:24:22.136324 systemd-logind[1531]: New session 4 of user core.
May 13 00:24:22.152191 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 00:24:22.206443 sshd[1695]: pam_unix(sshd:session): session closed for user core
May 13 00:24:22.224204 systemd[1]: Started sshd@4-10.0.0.62:22-10.0.0.1:35378.service - OpenSSH per-connection server daemon (10.0.0.1:35378).
May 13 00:24:22.224962 systemd[1]: sshd@3-10.0.0.62:22-10.0.0.1:35362.service: Deactivated successfully.
May 13 00:24:22.226602 systemd[1]: session-4.scope: Deactivated successfully.
May 13 00:24:22.227419 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit.
May 13 00:24:22.228565 systemd-logind[1531]: Removed session 4.
May 13 00:24:22.255297 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 35378 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:24:22.256751 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:24:22.260837 systemd-logind[1531]: New session 5 of user core.
May 13 00:24:22.270163 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 00:24:22.327933 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 00:24:22.328299 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:24:22.611095 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 00:24:22.611316 (dockerd)[1728]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 00:24:23.100032 dockerd[1728]: time="2025-05-13T00:24:23.099503697Z" level=info msg="Starting up"
May 13 00:24:23.808867 systemd[1]: var-lib-docker-metacopy\x2dcheck859258213-merged.mount: Deactivated successfully.
May 13 00:24:23.895202 dockerd[1728]: time="2025-05-13T00:24:23.895091883Z" level=info msg="Loading containers: start."
May 13 00:24:24.103973 kernel: Initializing XFRM netlink socket
May 13 00:24:24.304165 systemd-networkd[1240]: docker0: Link UP
May 13 00:24:24.348098 dockerd[1728]: time="2025-05-13T00:24:24.348030136Z" level=info msg="Loading containers: done."
May 13 00:24:24.552065 dockerd[1728]: time="2025-05-13T00:24:24.551995717Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 00:24:24.552304 dockerd[1728]: time="2025-05-13T00:24:24.552198648Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 13 00:24:24.552391 dockerd[1728]: time="2025-05-13T00:24:24.552358007Z" level=info msg="Daemon has completed initialization"
May 13 00:24:24.860833 dockerd[1728]: time="2025-05-13T00:24:24.860766793Z" level=info msg="API listen on /run/docker.sock"
May 13 00:24:24.861094 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 00:24:25.653257 containerd[1552]: time="2025-05-13T00:24:25.653196766Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:24:26.339374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882165674.mount: Deactivated successfully. May 13 00:24:27.296854 containerd[1552]: time="2025-05-13T00:24:27.296796549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:27.297459 containerd[1552]: time="2025-05-13T00:24:27.297405982Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 13 00:24:27.298556 containerd[1552]: time="2025-05-13T00:24:27.298526294Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:27.301297 containerd[1552]: time="2025-05-13T00:24:27.301260121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:27.302347 containerd[1552]: time="2025-05-13T00:24:27.302306915Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.64907333s" May 13 00:24:27.302394 containerd[1552]: time="2025-05-13T00:24:27.302349525Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 00:24:27.322772 containerd[1552]: 
time="2025-05-13T00:24:27.322729873Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 00:24:29.952274 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 00:24:29.961090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:24:30.100524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:24:30.105946 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:24:30.282456 kubelet[1960]: E0513 00:24:30.282258 1960 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:24:30.290594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:24:30.290903 systemd[1]: kubelet.service: Failed with result 'exit-code'.
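Each kubelet start in this log fails for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written during `kubeadm init` or `kubeadm join`, so repeated failures before the node joins a cluster are expected rather than a defect. For orientation only, such a file follows the KubeletConfiguration schema; every value in this stub is an assumption, not recovered from this host:

```yaml
# /var/lib/kubelet/config.yaml -- illustrative stub. kubeadm normally
# generates this file; the field values below are assumptions.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
```

Once the file exists, the unit's next scheduled restart would pick it up without further intervention.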
May 13 00:24:30.758969 containerd[1552]: time="2025-05-13T00:24:30.758872424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:30.762716 containerd[1552]: time="2025-05-13T00:24:30.762623530Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 13 00:24:30.766839 containerd[1552]: time="2025-05-13T00:24:30.766789444Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:30.773003 containerd[1552]: time="2025-05-13T00:24:30.772969336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:30.774347 containerd[1552]: time="2025-05-13T00:24:30.774309439Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 3.451534302s" May 13 00:24:30.774347 containerd[1552]: time="2025-05-13T00:24:30.774340317Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 00:24:30.800465 containerd[1552]: time="2025-05-13T00:24:30.800425325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:24:33.041219 containerd[1552]: time="2025-05-13T00:24:33.041160744Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:33.073088 containerd[1552]: time="2025-05-13T00:24:33.073006087Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 13 00:24:33.092295 containerd[1552]: time="2025-05-13T00:24:33.092210188Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:33.127965 containerd[1552]: time="2025-05-13T00:24:33.127868934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:33.129156 containerd[1552]: time="2025-05-13T00:24:33.129104462Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 2.32863808s" May 13 00:24:33.129156 containerd[1552]: time="2025-05-13T00:24:33.129154746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 00:24:33.152484 containerd[1552]: time="2025-05-13T00:24:33.152449300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:24:35.709599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685574110.mount: Deactivated successfully. 
May 13 00:24:36.624728 containerd[1552]: time="2025-05-13T00:24:36.624675033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:36.625757 containerd[1552]: time="2025-05-13T00:24:36.625529445Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 13 00:24:36.627062 containerd[1552]: time="2025-05-13T00:24:36.626850664Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:36.631192 containerd[1552]: time="2025-05-13T00:24:36.631141011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:36.632069 containerd[1552]: time="2025-05-13T00:24:36.632029548Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 3.479543329s" May 13 00:24:36.632069 containerd[1552]: time="2025-05-13T00:24:36.632064303Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 00:24:36.656855 containerd[1552]: time="2025-05-13T00:24:36.656581841Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:24:37.233093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432022776.mount: Deactivated successfully. 
May 13 00:24:37.888431 containerd[1552]: time="2025-05-13T00:24:37.888377327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:37.889349 containerd[1552]: time="2025-05-13T00:24:37.889322310Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 00:24:37.890755 containerd[1552]: time="2025-05-13T00:24:37.890704472Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:37.894164 containerd[1552]: time="2025-05-13T00:24:37.894128485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:37.895506 containerd[1552]: time="2025-05-13T00:24:37.895467206Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.23884031s" May 13 00:24:37.895543 containerd[1552]: time="2025-05-13T00:24:37.895504846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 00:24:37.917710 containerd[1552]: time="2025-05-13T00:24:37.917673038Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:24:38.580458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162614372.mount: Deactivated successfully. 
May 13 00:24:38.584961 containerd[1552]: time="2025-05-13T00:24:38.584906367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:38.585588 containerd[1552]: time="2025-05-13T00:24:38.585535066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 13 00:24:38.586989 containerd[1552]: time="2025-05-13T00:24:38.586951633Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:38.589045 containerd[1552]: time="2025-05-13T00:24:38.589000636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:38.589710 containerd[1552]: time="2025-05-13T00:24:38.589675562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 671.965315ms" May 13 00:24:38.589710 containerd[1552]: time="2025-05-13T00:24:38.589704446Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 00:24:38.610662 containerd[1552]: time="2025-05-13T00:24:38.610623354Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:24:39.172189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount23826110.mount: Deactivated successfully. May 13 00:24:40.452207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
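The restart counter above increments roughly 10 seconds after each kubelet failure (00:24:19.93 → 00:24:29.95, then 00:24:30.29 → 00:24:40.45), consistent with a Restart=/RestartSec= pair in the unit. The unit file itself is not shown in this log, so the following drop-in is an inferred sketch, not the host's actual configuration:

```ini
# /etc/systemd/system/kubelet.service.d/10-restart.conf -- illustrative
# drop-in; Restart= and RestartSec= values are inferred from the ~10 s
# failure-to-restart gap observed in the journal, not read from this host.
[Service]
Restart=always
RestartSec=10
```

With Restart=always, systemd keeps rescheduling the unit until the missing config.yaml appears, which matches the repeated "Scheduled restart job" entries here.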
May 13 00:24:40.466051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:24:40.607564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:24:40.612017 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:24:41.024606 kubelet[2117]: E0513 00:24:41.024542 2117 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:24:41.028912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:24:41.029191 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:24:41.294603 containerd[1552]: time="2025-05-13T00:24:41.294481775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:24:41.295317 containerd[1552]: time="2025-05-13T00:24:41.295264053Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
May 13 00:24:41.296616 containerd[1552]: time="2025-05-13T00:24:41.296574030Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:24:41.299315 containerd[1552]: time="2025-05-13T00:24:41.299286307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:24:41.300388 containerd[1552]: time="2025-05-13T00:24:41.300356134Z" level=info msg="Pulled image
\"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.689693276s"
May 13 00:24:41.300388 containerd[1552]: time="2025-05-13T00:24:41.300385599Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 13 00:24:43.081371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:24:43.091072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:24:43.109742 systemd[1]: Reloading requested from client PID 2209 ('systemctl') (unit session-5.scope)...
May 13 00:24:43.109758 systemd[1]: Reloading...
May 13 00:24:43.182953 zram_generator::config[2248]: No configuration found.
May 13 00:24:43.562457 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:24:43.643438 systemd[1]: Reloading finished in 533 ms.
May 13 00:24:43.691919 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 00:24:43.692019 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 00:24:43.692395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:24:43.694259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:24:43.848024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:24:43.864380 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:24:43.905510 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:24:43.905510 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:24:43.905510 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:24:43.905988 kubelet[2308]: I0513 00:24:43.905537 2308 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:24:44.232650 kubelet[2308]: I0513 00:24:44.232601 2308 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:24:44.232650 kubelet[2308]: I0513 00:24:44.232634 2308 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:24:44.232864 kubelet[2308]: I0513 00:24:44.232848 2308 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:24:44.245247 kubelet[2308]: I0513 00:24:44.245196 2308 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:24:44.245802 kubelet[2308]: E0513 00:24:44.245768 2308 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:44.257414 kubelet[2308]: I0513 00:24:44.257380 2308 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:24:44.258697 kubelet[2308]: I0513 00:24:44.258647 2308 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:24:44.258863 kubelet[2308]: I0513 00:24:44.258679 2308 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit"
:-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:24:44.259271 kubelet[2308]: I0513 00:24:44.259246 2308 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:24:44.259271 kubelet[2308]: I0513 00:24:44.259261 2308 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:24:44.259423 kubelet[2308]: I0513 00:24:44.259400 2308 state_mem.go:36] "Initialized new in-memory state store" May 13 00:24:44.260014 kubelet[2308]: I0513 00:24:44.259985 2308 kubelet.go:400] "Attempting to sync node with API server" May 13 00:24:44.260014 kubelet[2308]: I0513 00:24:44.260003 2308 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:24:44.260088 kubelet[2308]: I0513 00:24:44.260027 2308 kubelet.go:312] "Adding apiserver pod source" May 13 00:24:44.260088 kubelet[2308]: I0513 00:24:44.260042 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:24:44.263512 kubelet[2308]: W0513 00:24:44.263366 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:44.263512 kubelet[2308]: E0513 00:24:44.263445 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:44.263512 kubelet[2308]: W0513 00:24:44.263435 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:44.263512 kubelet[2308]: E0513 00:24:44.263482 2308 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:44.263928 kubelet[2308]: I0513 00:24:44.263900 2308 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:24:44.265136 kubelet[2308]: I0513 00:24:44.265117 2308 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:24:44.265207 kubelet[2308]: W0513 00:24:44.265184 2308 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:24:44.267246 kubelet[2308]: I0513 00:24:44.265849 2308 server.go:1264] "Started kubelet" May 13 00:24:44.267246 kubelet[2308]: I0513 00:24:44.265940 2308 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:24:44.267246 kubelet[2308]: I0513 00:24:44.266136 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:24:44.267246 kubelet[2308]: I0513 00:24:44.266484 2308 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:24:44.267246 kubelet[2308]: I0513 00:24:44.267110 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:24:44.267610 kubelet[2308]: I0513 00:24:44.267574 2308 server.go:455] "Adding debug handlers to kubelet server" May 13 00:24:44.270894 kubelet[2308]: I0513 00:24:44.270851 2308 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:24:44.271175 kubelet[2308]: I0513 00:24:44.271006 2308 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:24:44.271175 kubelet[2308]: I0513 00:24:44.271071 2308 reconciler.go:26] 
"Reconciler: start to sync state" May 13 00:24:44.271467 kubelet[2308]: W0513 00:24:44.271426 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:44.271518 kubelet[2308]: E0513 00:24:44.271477 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:44.271739 kubelet[2308]: E0513 00:24:44.271702 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="200ms" May 13 00:24:44.271928 kubelet[2308]: E0513 00:24:44.270829 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.62:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.62:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee7342361943 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:24:44.265822531 +0000 UTC m=+0.397346199,LastTimestamp:2025-05-13 00:24:44.265822531 +0000 UTC m=+0.397346199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:24:44.272708 kubelet[2308]: I0513 00:24:44.272692 2308 factory.go:221] Registration of the systemd container factory 
successfully May 13 00:24:44.272878 kubelet[2308]: I0513 00:24:44.272863 2308 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:24:44.277467 kubelet[2308]: I0513 00:24:44.277434 2308 factory.go:221] Registration of the containerd container factory successfully May 13 00:24:44.278452 kubelet[2308]: E0513 00:24:44.278410 2308 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:24:44.291137 kubelet[2308]: I0513 00:24:44.291061 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:24:44.292998 kubelet[2308]: I0513 00:24:44.292800 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:24:44.292998 kubelet[2308]: I0513 00:24:44.292830 2308 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:24:44.292998 kubelet[2308]: I0513 00:24:44.292849 2308 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:24:44.292998 kubelet[2308]: E0513 00:24:44.292934 2308 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:24:44.294429 kubelet[2308]: W0513 00:24:44.294249 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:44.294429 kubelet[2308]: E0513 00:24:44.294303 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: 
connect: connection refused May 13 00:24:44.305720 kubelet[2308]: I0513 00:24:44.305669 2308 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:24:44.305720 kubelet[2308]: I0513 00:24:44.305701 2308 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:24:44.305720 kubelet[2308]: I0513 00:24:44.305728 2308 state_mem.go:36] "Initialized new in-memory state store" May 13 00:24:44.372428 kubelet[2308]: I0513 00:24:44.372377 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:24:44.372723 kubelet[2308]: E0513 00:24:44.372693 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" May 13 00:24:44.393992 kubelet[2308]: E0513 00:24:44.393923 2308 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:24:44.472706 kubelet[2308]: E0513 00:24:44.472635 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="400ms" May 13 00:24:44.574314 kubelet[2308]: I0513 00:24:44.574192 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:24:44.575048 kubelet[2308]: E0513 00:24:44.574993 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" May 13 00:24:44.594130 kubelet[2308]: E0513 00:24:44.594082 2308 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:24:44.873437 kubelet[2308]: E0513 00:24:44.873288 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="800ms" May 13 00:24:44.977255 kubelet[2308]: I0513 00:24:44.977200 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:24:44.977795 kubelet[2308]: E0513 00:24:44.977594 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" May 13 00:24:44.994709 kubelet[2308]: E0513 00:24:44.994654 2308 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:24:45.052005 kubelet[2308]: I0513 00:24:45.051968 2308 policy_none.go:49] "None policy: Start" May 13 00:24:45.052762 kubelet[2308]: I0513 00:24:45.052724 2308 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:24:45.052762 kubelet[2308]: I0513 00:24:45.052751 2308 state_mem.go:35] "Initializing new in-memory state store" May 13 00:24:45.060843 kubelet[2308]: I0513 00:24:45.060808 2308 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:24:45.061104 kubelet[2308]: I0513 00:24:45.061057 2308 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:24:45.061205 kubelet[2308]: I0513 00:24:45.061184 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:24:45.062788 kubelet[2308]: E0513 00:24:45.062755 2308 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:24:45.160072 kubelet[2308]: W0513 00:24:45.159871 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:45.160072 kubelet[2308]: E0513 00:24:45.159978 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:45.208202 kubelet[2308]: W0513 00:24:45.208126 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:45.208202 kubelet[2308]: E0513 00:24:45.208201 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:45.576634 kubelet[2308]: W0513 00:24:45.576543 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:45.576634 kubelet[2308]: E0513 00:24:45.576628 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:45.674333 kubelet[2308]: E0513 00:24:45.674283 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: 
connect: connection refused" interval="1.6s" May 13 00:24:45.779287 kubelet[2308]: I0513 00:24:45.779242 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:24:45.779580 kubelet[2308]: E0513 00:24:45.779531 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" May 13 00:24:45.795730 kubelet[2308]: I0513 00:24:45.795678 2308 topology_manager.go:215] "Topology Admit Handler" podUID="1b4cbe1c48054e2c3ff0187b1155cfd0" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:24:45.796915 kubelet[2308]: I0513 00:24:45.796818 2308 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:24:45.797737 kubelet[2308]: I0513 00:24:45.797696 2308 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:24:45.798688 kubelet[2308]: W0513 00:24:45.798410 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:45.798688 kubelet[2308]: E0513 00:24:45.798439 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:45.879558 kubelet[2308]: I0513 00:24:45.879432 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:24:45.879558 kubelet[2308]: I0513 00:24:45.879467 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:24:45.879558 kubelet[2308]: I0513 00:24:45.879503 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:24:45.879558 kubelet[2308]: I0513 00:24:45.879525 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:24:45.879765 kubelet[2308]: I0513 00:24:45.879580 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:24:45.879765 kubelet[2308]: I0513 00:24:45.879593 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/1b4cbe1c48054e2c3ff0187b1155cfd0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b4cbe1c48054e2c3ff0187b1155cfd0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:24:45.879765 kubelet[2308]: I0513 00:24:45.879606 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b4cbe1c48054e2c3ff0187b1155cfd0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b4cbe1c48054e2c3ff0187b1155cfd0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:24:45.879765 kubelet[2308]: I0513 00:24:45.879620 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b4cbe1c48054e2c3ff0187b1155cfd0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1b4cbe1c48054e2c3ff0187b1155cfd0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:24:45.879765 kubelet[2308]: I0513 00:24:45.879633 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:24:46.101770 kubelet[2308]: E0513 00:24:46.101722 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:46.102400 containerd[1552]: time="2025-05-13T00:24:46.102372264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1b4cbe1c48054e2c3ff0187b1155cfd0,Namespace:kube-system,Attempt:0,}" May 13 00:24:46.103559 kubelet[2308]: E0513 00:24:46.103518 2308 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:46.103624 kubelet[2308]: E0513 00:24:46.103597 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:46.103822 containerd[1552]: time="2025-05-13T00:24:46.103797116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:24:46.103914 containerd[1552]: time="2025-05-13T00:24:46.103874161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:24:46.273573 kubelet[2308]: E0513 00:24:46.273524 2308 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.62:6443: connect: connection refused May 13 00:24:47.275081 kubelet[2308]: E0513 00:24:47.275036 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="3.2s" May 13 00:24:47.372301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693874165.mount: Deactivated successfully. 
May 13 00:24:47.380802 containerd[1552]: time="2025-05-13T00:24:47.380764883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:24:47.381221 kubelet[2308]: I0513 00:24:47.380798 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:24:47.381313 kubelet[2308]: E0513 00:24:47.381279 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" May 13 00:24:47.383352 containerd[1552]: time="2025-05-13T00:24:47.383288597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:24:47.384853 containerd[1552]: time="2025-05-13T00:24:47.384813297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:24:47.386107 containerd[1552]: time="2025-05-13T00:24:47.386076797Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:24:47.387374 containerd[1552]: time="2025-05-13T00:24:47.387309409Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:24:47.388399 containerd[1552]: time="2025-05-13T00:24:47.388367323Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:24:47.389741 containerd[1552]: time="2025-05-13T00:24:47.389692218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: 
active requests=0, bytes read=312056" May 13 00:24:47.392132 containerd[1552]: time="2025-05-13T00:24:47.392094193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:24:47.395331 containerd[1552]: time="2025-05-13T00:24:47.395232921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.2927821s" May 13 00:24:47.396244 containerd[1552]: time="2025-05-13T00:24:47.396215925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.292271502s" May 13 00:24:47.397222 containerd[1552]: time="2025-05-13T00:24:47.397128306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.293281356s" May 13 00:24:47.607606 containerd[1552]: time="2025-05-13T00:24:47.606560932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:24:47.607606 containerd[1552]: time="2025-05-13T00:24:47.607446623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:24:47.607606 containerd[1552]: time="2025-05-13T00:24:47.607460439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:24:47.607606 containerd[1552]: time="2025-05-13T00:24:47.607570395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:24:47.608771 containerd[1552]: time="2025-05-13T00:24:47.608527350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:24:47.608771 containerd[1552]: time="2025-05-13T00:24:47.608583566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:24:47.608771 containerd[1552]: time="2025-05-13T00:24:47.608604695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:24:47.608771 containerd[1552]: time="2025-05-13T00:24:47.608702028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:24:47.608771 containerd[1552]: time="2025-05-13T00:24:47.608740600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:24:47.608771 containerd[1552]: time="2025-05-13T00:24:47.608718018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:24:47.609759 containerd[1552]: time="2025-05-13T00:24:47.608751400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:24:47.609759 containerd[1552]: time="2025-05-13T00:24:47.609344593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:24:47.675284 containerd[1552]: time="2025-05-13T00:24:47.675245885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9db5718bec5a270868f7434c9fe717fbbd4c0f5708c4ce504d2404f2626fc42\"" May 13 00:24:47.676949 containerd[1552]: time="2025-05-13T00:24:47.675762985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1b4cbe1c48054e2c3ff0187b1155cfd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6685d0e4699b251eb00c2fb4faec0b49f7c99d1d034709a9ba9ac6d3fdff69c5\"" May 13 00:24:47.677198 containerd[1552]: time="2025-05-13T00:24:47.677138295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3d934dc6abf4988545ae1d0b2a1845867539dd4b88db555e4714f15fc2189b9\"" May 13 00:24:47.680903 kubelet[2308]: E0513 00:24:47.679795 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:47.680903 kubelet[2308]: E0513 00:24:47.679863 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:47.680903 kubelet[2308]: E0513 00:24:47.679795 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:47.683962 containerd[1552]: 
time="2025-05-13T00:24:47.683930224Z" level=info msg="CreateContainer within sandbox \"a9db5718bec5a270868f7434c9fe717fbbd4c0f5708c4ce504d2404f2626fc42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 00:24:47.684127 containerd[1552]: time="2025-05-13T00:24:47.684095965Z" level=info msg="CreateContainer within sandbox \"6685d0e4699b251eb00c2fb4faec0b49f7c99d1d034709a9ba9ac6d3fdff69c5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 00:24:47.684247 containerd[1552]: time="2025-05-13T00:24:47.684227191Z" level=info msg="CreateContainer within sandbox \"e3d934dc6abf4988545ae1d0b2a1845867539dd4b88db555e4714f15fc2189b9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 00:24:47.720613 containerd[1552]: time="2025-05-13T00:24:47.720543140Z" level=info msg="CreateContainer within sandbox \"6685d0e4699b251eb00c2fb4faec0b49f7c99d1d034709a9ba9ac6d3fdff69c5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"772b0f5965f5e70722d1d75f175727991760dee30684a7fa7a2242305d623fa9\""
May 13 00:24:47.721411 containerd[1552]: time="2025-05-13T00:24:47.721360563Z" level=info msg="StartContainer for \"772b0f5965f5e70722d1d75f175727991760dee30684a7fa7a2242305d623fa9\""
May 13 00:24:47.724471 containerd[1552]: time="2025-05-13T00:24:47.724402980Z" level=info msg="CreateContainer within sandbox \"a9db5718bec5a270868f7434c9fe717fbbd4c0f5708c4ce504d2404f2626fc42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dce5f5763312307e6a2dd886bc6427047599dc977fa13bc0d2c3c66fdc92bb99\""
May 13 00:24:47.725085 containerd[1552]: time="2025-05-13T00:24:47.725030858Z" level=info msg="StartContainer for \"dce5f5763312307e6a2dd886bc6427047599dc977fa13bc0d2c3c66fdc92bb99\""
May 13 00:24:47.731177 containerd[1552]: time="2025-05-13T00:24:47.731083320Z" level=info msg="CreateContainer within sandbox \"e3d934dc6abf4988545ae1d0b2a1845867539dd4b88db555e4714f15fc2189b9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"63048ba10c468ae82f067252b6c309226f890ca4234cdfacd133d02a6bff0764\""
May 13 00:24:47.731712 containerd[1552]: time="2025-05-13T00:24:47.731672205Z" level=info msg="StartContainer for \"63048ba10c468ae82f067252b6c309226f890ca4234cdfacd133d02a6bff0764\""
May 13 00:24:47.802604 kubelet[2308]: W0513 00:24:47.802546 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
May 13 00:24:47.802752 kubelet[2308]: E0513 00:24:47.802736 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
May 13 00:24:47.814255 containerd[1552]: time="2025-05-13T00:24:47.814198871Z" level=info msg="StartContainer for \"dce5f5763312307e6a2dd886bc6427047599dc977fa13bc0d2c3c66fdc92bb99\" returns successfully"
May 13 00:24:47.814362 containerd[1552]: time="2025-05-13T00:24:47.814220542Z" level=info msg="StartContainer for \"63048ba10c468ae82f067252b6c309226f890ca4234cdfacd133d02a6bff0764\" returns successfully"
May 13 00:24:47.814385 containerd[1552]: time="2025-05-13T00:24:47.814224129Z" level=info msg="StartContainer for \"772b0f5965f5e70722d1d75f175727991760dee30684a7fa7a2242305d623fa9\" returns successfully"
May 13 00:24:47.817251 kubelet[2308]: W0513 00:24:47.817179 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
May 13 00:24:47.817301 kubelet[2308]: E0513 00:24:47.817259 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused
May 13 00:24:48.307259 kubelet[2308]: E0513 00:24:48.307223 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:48.309821 kubelet[2308]: E0513 00:24:48.307874 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:48.309821 kubelet[2308]: E0513 00:24:48.309742 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:49.213685 kubelet[2308]: E0513 00:24:49.213634 2308 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 13 00:24:49.268314 kubelet[2308]: I0513 00:24:49.268289 2308 apiserver.go:52] "Watching apiserver"
May 13 00:24:49.272039 kubelet[2308]: I0513 00:24:49.272010 2308 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:24:49.311963 kubelet[2308]: E0513 00:24:49.311935 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:49.559533 kubelet[2308]: E0513 00:24:49.559405 2308 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 13 00:24:49.984093 kubelet[2308]: E0513 00:24:49.984003 2308 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 13 00:24:50.312780 kubelet[2308]: E0513 00:24:50.312673 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:50.478295 kubelet[2308]: E0513 00:24:50.478248 2308 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 13 00:24:50.583543 kubelet[2308]: I0513 00:24:50.583418 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:24:50.587927 kubelet[2308]: I0513 00:24:50.587903 2308 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 13 00:24:51.284270 systemd[1]: Reloading requested from client PID 2584 ('systemctl') (unit session-5.scope)...
May 13 00:24:51.284286 systemd[1]: Reloading...
May 13 00:24:51.346930 zram_generator::config[2626]: No configuration found.
May 13 00:24:51.457126 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:24:51.541297 systemd[1]: Reloading finished in 256 ms.
May 13 00:24:51.575754 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:24:51.576037 kubelet[2308]: I0513 00:24:51.575814 2308 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:24:51.600338 systemd[1]: kubelet.service: Deactivated successfully.
May 13 00:24:51.600802 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:24:51.611110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:24:51.766561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:24:51.775269 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 00:24:51.818740 kubelet[2678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:24:51.818740 kubelet[2678]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 00:24:51.818740 kubelet[2678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:24:51.818740 kubelet[2678]: I0513 00:24:51.818707 2678 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:24:51.823427 kubelet[2678]: I0513 00:24:51.823396 2678 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 00:24:51.823427 kubelet[2678]: I0513 00:24:51.823423 2678 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:24:51.823610 kubelet[2678]: I0513 00:24:51.823585 2678 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 00:24:51.824754 kubelet[2678]: I0513 00:24:51.824727 2678 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 00:24:51.825903 kubelet[2678]: I0513 00:24:51.825760 2678 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:24:51.833666 kubelet[2678]: I0513 00:24:51.833638 2678 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:24:51.834283 kubelet[2678]: I0513 00:24:51.834251 2678 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:24:51.834441 kubelet[2678]: I0513 00:24:51.834276 2678 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 00:24:51.834509 kubelet[2678]: I0513 00:24:51.834456 2678 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:24:51.834509 kubelet[2678]: I0513 00:24:51.834466 2678 container_manager_linux.go:301] "Creating device plugin manager"
May 13 00:24:51.834509 kubelet[2678]: I0513 00:24:51.834509 2678 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:24:51.834613 kubelet[2678]: I0513 00:24:51.834601 2678 kubelet.go:400] "Attempting to sync node with API server"
May 13 00:24:51.834638 kubelet[2678]: I0513 00:24:51.834615 2678 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:24:51.834638 kubelet[2678]: I0513 00:24:51.834637 2678 kubelet.go:312] "Adding apiserver pod source"
May 13 00:24:51.834687 kubelet[2678]: I0513 00:24:51.834655 2678 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:24:51.835150 kubelet[2678]: I0513 00:24:51.835131 2678 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 13 00:24:51.837903 kubelet[2678]: I0513 00:24:51.835379 2678 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:24:51.837903 kubelet[2678]: I0513 00:24:51.835795 2678 server.go:1264] "Started kubelet"
May 13 00:24:51.837903 kubelet[2678]: I0513 00:24:51.835866 2678 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:24:51.837903 kubelet[2678]: I0513 00:24:51.836148 2678 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:24:51.837903 kubelet[2678]: I0513 00:24:51.836422 2678 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:24:51.837903 kubelet[2678]: I0513 00:24:51.836805 2678 server.go:455] "Adding debug handlers to kubelet server"
May 13 00:24:51.838173 kubelet[2678]: I0513 00:24:51.838160 2678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:24:51.840059 kubelet[2678]: E0513 00:24:51.839872 2678 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 00:24:51.841031 kubelet[2678]: I0513 00:24:51.840431 2678 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 00:24:51.841031 kubelet[2678]: I0513 00:24:51.840530 2678 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 00:24:51.841031 kubelet[2678]: I0513 00:24:51.840649 2678 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:24:51.844543 kubelet[2678]: I0513 00:24:51.844518 2678 factory.go:221] Registration of the systemd container factory successfully
May 13 00:24:51.844626 kubelet[2678]: I0513 00:24:51.844602 2678 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:24:51.845844 kubelet[2678]: I0513 00:24:51.845827 2678 factory.go:221] Registration of the containerd container factory successfully
May 13 00:24:51.849873 kubelet[2678]: I0513 00:24:51.849825 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:24:51.854546 kubelet[2678]: I0513 00:24:51.853597 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:24:51.854546 kubelet[2678]: I0513 00:24:51.853638 2678 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 00:24:51.854546 kubelet[2678]: I0513 00:24:51.853656 2678 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 00:24:51.854546 kubelet[2678]: E0513 00:24:51.853712 2678 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:24:51.893840 kubelet[2678]: I0513 00:24:51.893815 2678 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 00:24:51.893840 kubelet[2678]: I0513 00:24:51.893832 2678 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 00:24:51.893840 kubelet[2678]: I0513 00:24:51.893850 2678 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:24:51.894029 kubelet[2678]: I0513 00:24:51.893996 2678 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 00:24:51.894029 kubelet[2678]: I0513 00:24:51.894006 2678 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 00:24:51.894029 kubelet[2678]: I0513 00:24:51.894023 2678 policy_none.go:49] "None policy: Start"
May 13 00:24:51.894674 kubelet[2678]: I0513 00:24:51.894658 2678 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 00:24:51.894711 kubelet[2678]: I0513 00:24:51.894684 2678 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:24:51.894833 kubelet[2678]: I0513 00:24:51.894823 2678 state_mem.go:75] "Updated machine memory state"
May 13 00:24:51.896830 kubelet[2678]: I0513 00:24:51.896226 2678 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:24:51.896830 kubelet[2678]: I0513 00:24:51.896390 2678 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:24:51.896830 kubelet[2678]: I0513 00:24:51.896478 2678 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:24:51.946195 kubelet[2678]: I0513 00:24:51.946152 2678 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:24:51.951944 kubelet[2678]: I0513 00:24:51.951914 2678 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
May 13 00:24:51.952086 kubelet[2678]: I0513 00:24:51.952007 2678 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 13 00:24:51.953828 kubelet[2678]: I0513 00:24:51.953779 2678 topology_manager.go:215] "Topology Admit Handler" podUID="1b4cbe1c48054e2c3ff0187b1155cfd0" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 13 00:24:51.953922 kubelet[2678]: I0513 00:24:51.953898 2678 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 13 00:24:51.954065 kubelet[2678]: I0513 00:24:51.953951 2678 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 13 00:24:52.141051 kubelet[2678]: I0513 00:24:52.140921 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:24:52.141051 kubelet[2678]: I0513 00:24:52.140961 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b4cbe1c48054e2c3ff0187b1155cfd0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b4cbe1c48054e2c3ff0187b1155cfd0\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:24:52.141051 kubelet[2678]: I0513 00:24:52.140980 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b4cbe1c48054e2c3ff0187b1155cfd0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b4cbe1c48054e2c3ff0187b1155cfd0\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:24:52.141051 kubelet[2678]: I0513 00:24:52.140995 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b4cbe1c48054e2c3ff0187b1155cfd0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1b4cbe1c48054e2c3ff0187b1155cfd0\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:24:52.141051 kubelet[2678]: I0513 00:24:52.141034 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:24:52.141256 kubelet[2678]: I0513 00:24:52.141050 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:24:52.141256 kubelet[2678]: I0513 00:24:52.141063 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:24:52.141256 kubelet[2678]: I0513 00:24:52.141080 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:24:52.141256 kubelet[2678]: I0513 00:24:52.141094 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:24:52.262859 kubelet[2678]: E0513 00:24:52.262814 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:52.263265 kubelet[2678]: E0513 00:24:52.262911 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:52.263265 kubelet[2678]: E0513 00:24:52.262923 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:52.835045 kubelet[2678]: I0513 00:24:52.835010 2678 apiserver.go:52] "Watching apiserver"
May 13 00:24:52.840904 kubelet[2678]: I0513 00:24:52.840867 2678 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:24:53.105179 kubelet[2678]: E0513 00:24:53.104934 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:53.112145 kubelet[2678]: E0513 00:24:53.111220 2678 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 13 00:24:53.112145 kubelet[2678]: E0513 00:24:53.111641 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:53.114721 kubelet[2678]: E0513 00:24:53.114690 2678 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 13 00:24:53.115158 kubelet[2678]: E0513 00:24:53.115132 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:53.115570 kubelet[2678]: I0513 00:24:53.115504 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.11549156 podStartE2EDuration="2.11549156s" podCreationTimestamp="2025-05-13 00:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:24:53.114561497 +0000 UTC m=+1.335234767" watchObservedRunningTime="2025-05-13 00:24:53.11549156 +0000 UTC m=+1.336164830"
May 13 00:24:53.115680 kubelet[2678]: I0513 00:24:53.115651 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.115644122 podStartE2EDuration="2.115644122s" podCreationTimestamp="2025-05-13 00:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:24:53.104675702 +0000 UTC m=+1.325348972" watchObservedRunningTime="2025-05-13 00:24:53.115644122 +0000 UTC m=+1.336317402"
May 13 00:24:53.133137 kubelet[2678]: I0513 00:24:53.133079 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.133056671 podStartE2EDuration="2.133056671s" podCreationTimestamp="2025-05-13 00:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:24:53.122523836 +0000 UTC m=+1.343197106" watchObservedRunningTime="2025-05-13 00:24:53.133056671 +0000 UTC m=+1.353729941"
May 13 00:24:53.387847 sudo[1710]: pam_unix(sudo:session): session closed for user root
May 13 00:24:53.390251 sshd[1704]: pam_unix(sshd:session): session closed for user core
May 13 00:24:53.395337 systemd[1]: sshd@4-10.0.0.62:22-10.0.0.1:35378.service: Deactivated successfully.
May 13 00:24:53.398097 systemd[1]: session-5.scope: Deactivated successfully.
May 13 00:24:53.398832 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit.
May 13 00:24:53.399762 systemd-logind[1531]: Removed session 5.
May 13 00:24:54.106611 kubelet[2678]: E0513 00:24:54.106564 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:54.106611 kubelet[2678]: E0513 00:24:54.106594 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:54.107056 kubelet[2678]: E0513 00:24:54.106714 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:55.119364 kubelet[2678]: E0513 00:24:55.117928 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:57.869577 kubelet[2678]: E0513 00:24:57.869548 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:58.122326 kubelet[2678]: E0513 00:24:58.122207 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:59.124543 kubelet[2678]: E0513 00:24:59.124496 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:00.190763 kubelet[2678]: E0513 00:25:00.190724 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:01.126785 kubelet[2678]: E0513 00:25:01.126746 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:03.029707 kubelet[2678]: E0513 00:25:03.029679 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:03.531919 update_engine[1533]: I20250513 00:25:03.531772 1533 update_attempter.cc:509] Updating boot flags...
May 13 00:25:03.582005 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2769)
May 13 00:25:03.615951 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2769)
May 13 00:25:03.655441 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2769)
May 13 00:25:06.216067 kubelet[2678]: I0513 00:25:06.216025 2678 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 00:25:06.216500 containerd[1552]: time="2025-05-13T00:25:06.216396171Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 00:25:06.216743 kubelet[2678]: I0513 00:25:06.216653 2678 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 00:25:07.203847 kubelet[2678]: I0513 00:25:07.203165 2678 topology_manager.go:215] "Topology Admit Handler" podUID="8cc33de8-3ccc-4241-8373-76a85910cecd" podNamespace="kube-system" podName="kube-proxy-dc2mp"
May 13 00:25:07.207230 kubelet[2678]: I0513 00:25:07.206297 2678 topology_manager.go:215] "Topology Admit Handler" podUID="53a5885a-34f6-472d-910b-f2190c0a1a24" podNamespace="kube-flannel" podName="kube-flannel-ds-v7kh7"
May 13 00:25:07.243510 kubelet[2678]: I0513 00:25:07.243474 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cc33de8-3ccc-4241-8373-76a85910cecd-xtables-lock\") pod \"kube-proxy-dc2mp\" (UID: \"8cc33de8-3ccc-4241-8373-76a85910cecd\") " pod="kube-system/kube-proxy-dc2mp"
May 13 00:25:07.243510 kubelet[2678]: I0513 00:25:07.243507 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dng6m\" (UniqueName: \"kubernetes.io/projected/8cc33de8-3ccc-4241-8373-76a85910cecd-kube-api-access-dng6m\") pod \"kube-proxy-dc2mp\" (UID: \"8cc33de8-3ccc-4241-8373-76a85910cecd\") " pod="kube-system/kube-proxy-dc2mp"
May 13 00:25:07.243510 kubelet[2678]: I0513 00:25:07.243522 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/53a5885a-34f6-472d-910b-f2190c0a1a24-cni-plugin\") pod \"kube-flannel-ds-v7kh7\" (UID: \"53a5885a-34f6-472d-910b-f2190c0a1a24\") " pod="kube-flannel/kube-flannel-ds-v7kh7"
May 13 00:25:07.244105 kubelet[2678]: I0513 00:25:07.243539 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/53a5885a-34f6-472d-910b-f2190c0a1a24-cni\") pod \"kube-flannel-ds-v7kh7\" (UID: \"53a5885a-34f6-472d-910b-f2190c0a1a24\") " pod="kube-flannel/kube-flannel-ds-v7kh7"
May 13 00:25:07.244105 kubelet[2678]: I0513 00:25:07.243552 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cc33de8-3ccc-4241-8373-76a85910cecd-lib-modules\") pod \"kube-proxy-dc2mp\" (UID: \"8cc33de8-3ccc-4241-8373-76a85910cecd\") " pod="kube-system/kube-proxy-dc2mp"
May 13 00:25:07.244105 kubelet[2678]: I0513 00:25:07.243565 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53a5885a-34f6-472d-910b-f2190c0a1a24-xtables-lock\") pod \"kube-flannel-ds-v7kh7\" (UID: \"53a5885a-34f6-472d-910b-f2190c0a1a24\") " pod="kube-flannel/kube-flannel-ds-v7kh7"
May 13 00:25:07.244105 kubelet[2678]: I0513 00:25:07.243616 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k2pq\" (UniqueName: \"kubernetes.io/projected/53a5885a-34f6-472d-910b-f2190c0a1a24-kube-api-access-2k2pq\") pod \"kube-flannel-ds-v7kh7\" (UID: \"53a5885a-34f6-472d-910b-f2190c0a1a24\") " pod="kube-flannel/kube-flannel-ds-v7kh7"
May 13 00:25:07.244105 kubelet[2678]: I0513 00:25:07.243658 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/53a5885a-34f6-472d-910b-f2190c0a1a24-run\") pod \"kube-flannel-ds-v7kh7\" (UID: \"53a5885a-34f6-472d-910b-f2190c0a1a24\") " pod="kube-flannel/kube-flannel-ds-v7kh7"
May 13 00:25:07.244216 kubelet[2678]: I0513 00:25:07.243685 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/53a5885a-34f6-472d-910b-f2190c0a1a24-flannel-cfg\") pod \"kube-flannel-ds-v7kh7\" (UID: \"53a5885a-34f6-472d-910b-f2190c0a1a24\") " pod="kube-flannel/kube-flannel-ds-v7kh7"
May 13 00:25:07.244216 kubelet[2678]: I0513 00:25:07.243705 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8cc33de8-3ccc-4241-8373-76a85910cecd-kube-proxy\") pod \"kube-proxy-dc2mp\" (UID: \"8cc33de8-3ccc-4241-8373-76a85910cecd\") " pod="kube-system/kube-proxy-dc2mp"
May 13 00:25:07.514205 kubelet[2678]: E0513 00:25:07.514069 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:07.514205 kubelet[2678]: E0513 00:25:07.514069 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:07.515018 containerd[1552]: time="2025-05-13T00:25:07.514817143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-v7kh7,Uid:53a5885a-34f6-472d-910b-f2190c0a1a24,Namespace:kube-flannel,Attempt:0,}"
May 13 00:25:07.515467 containerd[1552]: time="2025-05-13T00:25:07.515290238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dc2mp,Uid:8cc33de8-3ccc-4241-8373-76a85910cecd,Namespace:kube-system,Attempt:0,}"
May 13 00:25:07.543525 containerd[1552]: time="2025-05-13T00:25:07.542166784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:25:07.543525 containerd[1552]: time="2025-05-13T00:25:07.542218472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:25:07.543525 containerd[1552]: time="2025-05-13T00:25:07.542236556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:25:07.543525 containerd[1552]: time="2025-05-13T00:25:07.542321977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:25:07.546263 containerd[1552]: time="2025-05-13T00:25:07.545824144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:25:07.546263 containerd[1552]: time="2025-05-13T00:25:07.545870282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:25:07.546263 containerd[1552]: time="2025-05-13T00:25:07.546051825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:25:07.546263 containerd[1552]: time="2025-05-13T00:25:07.546164970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:25:07.581956 containerd[1552]: time="2025-05-13T00:25:07.581872024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dc2mp,Uid:8cc33de8-3ccc-4241-8373-76a85910cecd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a69555715e8ac4a33b62a42c05bf8dd5b649eb1439c64572c00611b3e5817757\""
May 13 00:25:07.582643 kubelet[2678]: E0513 00:25:07.582623 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:07.588166 containerd[1552]: time="2025-05-13T00:25:07.588132911Z" level=info msg="CreateContainer within sandbox \"a69555715e8ac4a33b62a42c05bf8dd5b649eb1439c64572c00611b3e5817757\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 00:25:07.597476 containerd[1552]: time="2025-05-13T00:25:07.597389647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-v7kh7,Uid:53a5885a-34f6-472d-910b-f2190c0a1a24,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"77220320edcc5a0a66eccdae85e9595a843cadc94e0ca3ff0ee36dc0cecc61a3\""
May 13 00:25:07.597999 kubelet[2678]: E0513 00:25:07.597983 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:07.598846 containerd[1552]: time="2025-05-13T00:25:07.598816937Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
May 13 00:25:07.604689 containerd[1552]: time="2025-05-13T00:25:07.604654463Z" level=info msg="CreateContainer within sandbox \"a69555715e8ac4a33b62a42c05bf8dd5b649eb1439c64572c00611b3e5817757\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"814ddd4300fe5bd6b609e924f7980c4744182c8cf1782b8969d46a1b5a2734dc\""
May 13 00:25:07.605150 containerd[1552]:
time="2025-05-13T00:25:07.605103563Z" level=info msg="StartContainer for \"814ddd4300fe5bd6b609e924f7980c4744182c8cf1782b8969d46a1b5a2734dc\"" May 13 00:25:07.674739 containerd[1552]: time="2025-05-13T00:25:07.674696877Z" level=info msg="StartContainer for \"814ddd4300fe5bd6b609e924f7980c4744182c8cf1782b8969d46a1b5a2734dc\" returns successfully" May 13 00:25:08.141713 kubelet[2678]: E0513 00:25:08.141683 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:09.251551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231278298.mount: Deactivated successfully. May 13 00:25:09.289350 containerd[1552]: time="2025-05-13T00:25:09.289298785Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:25:09.290077 containerd[1552]: time="2025-05-13T00:25:09.290038894Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936" May 13 00:25:09.291163 containerd[1552]: time="2025-05-13T00:25:09.291125327Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:25:09.293416 containerd[1552]: time="2025-05-13T00:25:09.293373658Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:25:09.294190 containerd[1552]: time="2025-05-13T00:25:09.294155886Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest 
\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.695298451s" May 13 00:25:09.294238 containerd[1552]: time="2025-05-13T00:25:09.294189370Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" May 13 00:25:09.296310 containerd[1552]: time="2025-05-13T00:25:09.296266646Z" level=info msg="CreateContainer within sandbox \"77220320edcc5a0a66eccdae85e9595a843cadc94e0ca3ff0ee36dc0cecc61a3\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 13 00:25:09.308766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430955391.mount: Deactivated successfully. May 13 00:25:09.309255 containerd[1552]: time="2025-05-13T00:25:09.309155218Z" level=info msg="CreateContainer within sandbox \"77220320edcc5a0a66eccdae85e9595a843cadc94e0ca3ff0ee36dc0cecc61a3\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"1d76eb7fa5a5d77dc0d131fcb5521c44a0c7f3d0712b50b41faba1087fb5f2a8\"" May 13 00:25:09.309684 containerd[1552]: time="2025-05-13T00:25:09.309652027Z" level=info msg="StartContainer for \"1d76eb7fa5a5d77dc0d131fcb5521c44a0c7f3d0712b50b41faba1087fb5f2a8\"" May 13 00:25:09.362993 containerd[1552]: time="2025-05-13T00:25:09.362934080Z" level=info msg="StartContainer for \"1d76eb7fa5a5d77dc0d131fcb5521c44a0c7f3d0712b50b41faba1087fb5f2a8\" returns successfully" May 13 00:25:09.380167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d76eb7fa5a5d77dc0d131fcb5521c44a0c7f3d0712b50b41faba1087fb5f2a8-rootfs.mount: Deactivated successfully. 
May 13 00:25:09.869207 containerd[1552]: time="2025-05-13T00:25:09.868935804Z" level=info msg="shim disconnected" id=1d76eb7fa5a5d77dc0d131fcb5521c44a0c7f3d0712b50b41faba1087fb5f2a8 namespace=k8s.io May 13 00:25:09.869207 containerd[1552]: time="2025-05-13T00:25:09.869015906Z" level=warning msg="cleaning up after shim disconnected" id=1d76eb7fa5a5d77dc0d131fcb5521c44a0c7f3d0712b50b41faba1087fb5f2a8 namespace=k8s.io May 13 00:25:09.869207 containerd[1552]: time="2025-05-13T00:25:09.869029051Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:25:10.146589 kubelet[2678]: E0513 00:25:10.146476 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:10.147158 containerd[1552]: time="2025-05-13T00:25:10.147109927Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 13 00:25:10.155198 kubelet[2678]: I0513 00:25:10.155146 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dc2mp" podStartSLOduration=3.155133656 podStartE2EDuration="3.155133656s" podCreationTimestamp="2025-05-13 00:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:25:08.149931249 +0000 UTC m=+16.370604519" watchObservedRunningTime="2025-05-13 00:25:10.155133656 +0000 UTC m=+18.375806926" May 13 00:25:11.821402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707534844.mount: Deactivated successfully. 
May 13 00:25:12.565173 containerd[1552]: time="2025-05-13T00:25:12.565116648Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:25:12.565923 containerd[1552]: time="2025-05-13T00:25:12.565847167Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" May 13 00:25:12.567190 containerd[1552]: time="2025-05-13T00:25:12.567141861Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:25:12.569829 containerd[1552]: time="2025-05-13T00:25:12.569792374Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:25:12.570854 containerd[1552]: time="2025-05-13T00:25:12.570812940Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.423667958s" May 13 00:25:12.570854 containerd[1552]: time="2025-05-13T00:25:12.570852766Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" May 13 00:25:12.572769 containerd[1552]: time="2025-05-13T00:25:12.572745338Z" level=info msg="CreateContainer within sandbox \"77220320edcc5a0a66eccdae85e9595a843cadc94e0ca3ff0ee36dc0cecc61a3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:25:12.583172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745574929.mount: Deactivated 
successfully. May 13 00:25:12.584398 containerd[1552]: time="2025-05-13T00:25:12.584339063Z" level=info msg="CreateContainer within sandbox \"77220320edcc5a0a66eccdae85e9595a843cadc94e0ca3ff0ee36dc0cecc61a3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f56aa8e8d875d94848d8ceed7931ddded29bca72731dc8c82f52533cf5035493\"" May 13 00:25:12.589051 containerd[1552]: time="2025-05-13T00:25:12.589022995Z" level=info msg="StartContainer for \"f56aa8e8d875d94848d8ceed7931ddded29bca72731dc8c82f52533cf5035493\"" May 13 00:25:12.640090 containerd[1552]: time="2025-05-13T00:25:12.640053783Z" level=info msg="StartContainer for \"f56aa8e8d875d94848d8ceed7931ddded29bca72731dc8c82f52533cf5035493\" returns successfully" May 13 00:25:12.653975 kubelet[2678]: I0513 00:25:12.653397 2678 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:25:12.662057 containerd[1552]: time="2025-05-13T00:25:12.661998146Z" level=info msg="shim disconnected" id=f56aa8e8d875d94848d8ceed7931ddded29bca72731dc8c82f52533cf5035493 namespace=k8s.io May 13 00:25:12.662247 containerd[1552]: time="2025-05-13T00:25:12.662054362Z" level=warning msg="cleaning up after shim disconnected" id=f56aa8e8d875d94848d8ceed7931ddded29bca72731dc8c82f52533cf5035493 namespace=k8s.io May 13 00:25:12.662247 containerd[1552]: time="2025-05-13T00:25:12.662075482Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:25:12.678320 kubelet[2678]: I0513 00:25:12.678265 2678 topology_manager.go:215] "Topology Admit Handler" podUID="0dfd42aa-d107-478f-b4b2-7682e21388a7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-srk9x" May 13 00:25:12.678589 kubelet[2678]: I0513 00:25:12.678461 2678 topology_manager.go:215] "Topology Admit Handler" podUID="7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6m8mn" May 13 00:25:12.684632 kubelet[2678]: I0513 00:25:12.684597 2678 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf54m\" (UniqueName: \"kubernetes.io/projected/0dfd42aa-d107-478f-b4b2-7682e21388a7-kube-api-access-tf54m\") pod \"coredns-7db6d8ff4d-srk9x\" (UID: \"0dfd42aa-d107-478f-b4b2-7682e21388a7\") " pod="kube-system/coredns-7db6d8ff4d-srk9x" May 13 00:25:12.684632 kubelet[2678]: I0513 00:25:12.684634 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dfd42aa-d107-478f-b4b2-7682e21388a7-config-volume\") pod \"coredns-7db6d8ff4d-srk9x\" (UID: \"0dfd42aa-d107-478f-b4b2-7682e21388a7\") " pod="kube-system/coredns-7db6d8ff4d-srk9x" May 13 00:25:12.684822 kubelet[2678]: I0513 00:25:12.684652 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zm76\" (UniqueName: \"kubernetes.io/projected/7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf-kube-api-access-6zm76\") pod \"coredns-7db6d8ff4d-6m8mn\" (UID: \"7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf\") " pod="kube-system/coredns-7db6d8ff4d-6m8mn" May 13 00:25:12.684822 kubelet[2678]: I0513 00:25:12.684672 2678 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf-config-volume\") pod \"coredns-7db6d8ff4d-6m8mn\" (UID: \"7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf\") " pod="kube-system/coredns-7db6d8ff4d-6m8mn" May 13 00:25:12.737812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f56aa8e8d875d94848d8ceed7931ddded29bca72731dc8c82f52533cf5035493-rootfs.mount: Deactivated successfully. 
May 13 00:25:12.982110 kubelet[2678]: E0513 00:25:12.982065 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:12.982842 containerd[1552]: time="2025-05-13T00:25:12.982689449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6m8mn,Uid:7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf,Namespace:kube-system,Attempt:0,}" May 13 00:25:12.984375 kubelet[2678]: E0513 00:25:12.984319 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:12.984781 containerd[1552]: time="2025-05-13T00:25:12.984741613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srk9x,Uid:0dfd42aa-d107-478f-b4b2-7682e21388a7,Namespace:kube-system,Attempt:0,}" May 13 00:25:13.017783 containerd[1552]: time="2025-05-13T00:25:13.017732616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6m8mn,Uid:7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7952127ec289597469b958361e7d93110bd7e091f87c05cd46cb29d8be9605fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:25:13.018206 kubelet[2678]: E0513 00:25:13.018130 2678 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7952127ec289597469b958361e7d93110bd7e091f87c05cd46cb29d8be9605fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:25:13.018206 kubelet[2678]: E0513 00:25:13.018197 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"7952127ec289597469b958361e7d93110bd7e091f87c05cd46cb29d8be9605fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-6m8mn" May 13 00:25:13.018323 kubelet[2678]: E0513 00:25:13.018218 2678 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7952127ec289597469b958361e7d93110bd7e091f87c05cd46cb29d8be9605fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-6m8mn" May 13 00:25:13.018323 kubelet[2678]: E0513 00:25:13.018253 2678 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6m8mn_kube-system(7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6m8mn_kube-system(7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7952127ec289597469b958361e7d93110bd7e091f87c05cd46cb29d8be9605fd\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-6m8mn" podUID="7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf" May 13 00:25:13.019791 containerd[1552]: time="2025-05-13T00:25:13.019755453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srk9x,Uid:0dfd42aa-d107-478f-b4b2-7682e21388a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"777e85487f4f913e52fab5288ab0822934ac95fc9b357e8131f3d911e8b19867\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:25:13.019955 kubelet[2678]: 
E0513 00:25:13.019901 2678 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"777e85487f4f913e52fab5288ab0822934ac95fc9b357e8131f3d911e8b19867\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:25:13.019955 kubelet[2678]: E0513 00:25:13.019927 2678 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"777e85487f4f913e52fab5288ab0822934ac95fc9b357e8131f3d911e8b19867\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-srk9x" May 13 00:25:13.019955 kubelet[2678]: E0513 00:25:13.019943 2678 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"777e85487f4f913e52fab5288ab0822934ac95fc9b357e8131f3d911e8b19867\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-srk9x" May 13 00:25:13.020040 kubelet[2678]: E0513 00:25:13.019973 2678 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-srk9x_kube-system(0dfd42aa-d107-478f-b4b2-7682e21388a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-srk9x_kube-system(0dfd42aa-d107-478f-b4b2-7682e21388a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"777e85487f4f913e52fab5288ab0822934ac95fc9b357e8131f3d911e8b19867\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-srk9x" podUID="0dfd42aa-d107-478f-b4b2-7682e21388a7" May 13 
00:25:13.150958 kubelet[2678]: E0513 00:25:13.150931 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:13.152407 containerd[1552]: time="2025-05-13T00:25:13.152287108Z" level=info msg="CreateContainer within sandbox \"77220320edcc5a0a66eccdae85e9595a843cadc94e0ca3ff0ee36dc0cecc61a3\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 13 00:25:13.247362 containerd[1552]: time="2025-05-13T00:25:13.247211650Z" level=info msg="CreateContainer within sandbox \"77220320edcc5a0a66eccdae85e9595a843cadc94e0ca3ff0ee36dc0cecc61a3\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"d1a878cae0d67f81334b7967683533d61f16fb0ea9271a0025d08a46f43f23a1\"" May 13 00:25:13.248034 containerd[1552]: time="2025-05-13T00:25:13.248007482Z" level=info msg="StartContainer for \"d1a878cae0d67f81334b7967683533d61f16fb0ea9271a0025d08a46f43f23a1\"" May 13 00:25:13.300288 containerd[1552]: time="2025-05-13T00:25:13.300247033Z" level=info msg="StartContainer for \"d1a878cae0d67f81334b7967683533d61f16fb0ea9271a0025d08a46f43f23a1\" returns successfully" May 13 00:25:13.738836 systemd[1]: run-netns-cni\x2dfa5e4e45\x2d7d44\x2d4f15\x2d6e39\x2d92d3808e0580.mount: Deactivated successfully. May 13 00:25:13.739028 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-777e85487f4f913e52fab5288ab0822934ac95fc9b357e8131f3d911e8b19867-shm.mount: Deactivated successfully. May 13 00:25:13.739184 systemd[1]: run-netns-cni\x2de220efcb\x2dc2aa\x2d4c82\x2d0819\x2d40099486bab0.mount: Deactivated successfully. May 13 00:25:13.739321 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7952127ec289597469b958361e7d93110bd7e091f87c05cd46cb29d8be9605fd-shm.mount: Deactivated successfully. 
May 13 00:25:14.155027 kubelet[2678]: E0513 00:25:14.154913 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:14.522317 systemd-networkd[1240]: flannel.1: Link UP May 13 00:25:14.522335 systemd-networkd[1240]: flannel.1: Gained carrier May 13 00:25:14.598176 systemd[1]: Started sshd@5-10.0.0.62:22-10.0.0.1:43166.service - OpenSSH per-connection server daemon (10.0.0.1:43166). May 13 00:25:14.631762 sshd[3340]: Accepted publickey for core from 10.0.0.1 port 43166 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:25:14.633838 sshd[3340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:25:14.638559 systemd-logind[1531]: New session 6 of user core. May 13 00:25:14.645204 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:25:14.757355 sshd[3340]: pam_unix(sshd:session): session closed for user core May 13 00:25:14.761355 systemd[1]: sshd@5-10.0.0.62:22-10.0.0.1:43166.service: Deactivated successfully. May 13 00:25:14.763740 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit. May 13 00:25:14.763789 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:25:14.764994 systemd-logind[1531]: Removed session 6. May 13 00:25:15.165318 kubelet[2678]: E0513 00:25:15.165275 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:15.608071 systemd-networkd[1240]: flannel.1: Gained IPv6LL May 13 00:25:19.768163 systemd[1]: Started sshd@6-10.0.0.62:22-10.0.0.1:43172.service - OpenSSH per-connection server daemon (10.0.0.1:43172). 
May 13 00:25:19.799173 sshd[3378]: Accepted publickey for core from 10.0.0.1 port 43172 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:25:19.800669 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:25:19.804590 systemd-logind[1531]: New session 7 of user core. May 13 00:25:19.811146 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 00:25:19.915123 sshd[3378]: pam_unix(sshd:session): session closed for user core May 13 00:25:19.919331 systemd[1]: sshd@6-10.0.0.62:22-10.0.0.1:43172.service: Deactivated successfully. May 13 00:25:19.921863 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:25:19.921921 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit. May 13 00:25:19.923249 systemd-logind[1531]: Removed session 7. May 13 00:25:23.855060 kubelet[2678]: E0513 00:25:23.855015 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:23.855623 containerd[1552]: time="2025-05-13T00:25:23.855371210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srk9x,Uid:0dfd42aa-d107-478f-b4b2-7682e21388a7,Namespace:kube-system,Attempt:0,}" May 13 00:25:23.892617 systemd-networkd[1240]: cni0: Link UP May 13 00:25:23.892627 systemd-networkd[1240]: cni0: Gained carrier May 13 00:25:23.897376 systemd-networkd[1240]: cni0: Lost carrier May 13 00:25:23.900318 systemd-networkd[1240]: vethc3ea1214: Link UP May 13 00:25:23.901409 kernel: cni0: port 1(vethc3ea1214) entered blocking state May 13 00:25:23.901496 kernel: cni0: port 1(vethc3ea1214) entered disabled state May 13 00:25:23.901519 kernel: vethc3ea1214: entered allmulticast mode May 13 00:25:23.902937 kernel: vethc3ea1214: entered promiscuous mode May 13 00:25:23.904555 kernel: cni0: port 1(vethc3ea1214) entered blocking state May 13 00:25:23.905593 
kernel: cni0: port 1(vethc3ea1214) entered forwarding state May 13 00:25:23.905616 kernel: cni0: port 1(vethc3ea1214) entered disabled state May 13 00:25:23.912658 kernel: cni0: port 1(vethc3ea1214) entered blocking state May 13 00:25:23.912742 kernel: cni0: port 1(vethc3ea1214) entered forwarding state May 13 00:25:23.913077 systemd-networkd[1240]: vethc3ea1214: Gained carrier May 13 00:25:23.914212 systemd-networkd[1240]: cni0: Gained carrier May 13 00:25:23.917495 containerd[1552]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011c8e8), "name":"cbr0", "type":"bridge"} May 13 00:25:23.917495 containerd[1552]: delegateAdd: netconf sent to delegate plugin: May 13 00:25:23.936504 containerd[1552]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T00:25:23.935647316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:25:23.936504 containerd[1552]: time="2025-05-13T00:25:23.936490753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:25:23.936504 containerd[1552]: time="2025-05-13T00:25:23.936511342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:25:23.936714 containerd[1552]: time="2025-05-13T00:25:23.936646536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:25:23.972196 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:25:24.003351 containerd[1552]: time="2025-05-13T00:25:24.003299278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srk9x,Uid:0dfd42aa-d107-478f-b4b2-7682e21388a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"40e815658d23a1fe50355542da491f534b32abd2cccf6b6ff13249715f4d0120\"" May 13 00:25:24.004064 kubelet[2678]: E0513 00:25:24.004042 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:24.008484 containerd[1552]: time="2025-05-13T00:25:24.008351251Z" level=info msg="CreateContainer within sandbox \"40e815658d23a1fe50355542da491f534b32abd2cccf6b6ff13249715f4d0120\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:25:24.022008 containerd[1552]: time="2025-05-13T00:25:24.021965102Z" level=info msg="CreateContainer within sandbox \"40e815658d23a1fe50355542da491f534b32abd2cccf6b6ff13249715f4d0120\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"786fa4aaf6e934a9ed84e2e5f47989fd92c134b7b74ac9fe48f6901a5bf3ed35\"" May 13 00:25:24.025107 containerd[1552]: time="2025-05-13T00:25:24.025073771Z" level=info msg="StartContainer for \"786fa4aaf6e934a9ed84e2e5f47989fd92c134b7b74ac9fe48f6901a5bf3ed35\"" May 13 00:25:24.080323 containerd[1552]: time="2025-05-13T00:25:24.080288241Z" level=info msg="StartContainer for \"786fa4aaf6e934a9ed84e2e5f47989fd92c134b7b74ac9fe48f6901a5bf3ed35\" returns successfully" May 13 00:25:24.185187 kubelet[2678]: E0513 
00:25:24.185077 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:24.197097 kubelet[2678]: I0513 00:25:24.197046 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-v7kh7" podStartSLOduration=12.223915931 podStartE2EDuration="17.197033523s" podCreationTimestamp="2025-05-13 00:25:07 +0000 UTC" firstStartedPulling="2025-05-13 00:25:07.598513182 +0000 UTC m=+15.819186452" lastFinishedPulling="2025-05-13 00:25:12.571630774 +0000 UTC m=+20.792304044" observedRunningTime="2025-05-13 00:25:14.318073126 +0000 UTC m=+22.538746396" watchObservedRunningTime="2025-05-13 00:25:24.197033523 +0000 UTC m=+32.417706783"
May 13 00:25:24.197349 kubelet[2678]: I0513 00:25:24.197139 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-srk9x" podStartSLOduration=17.197126679 podStartE2EDuration="17.197126679s" podCreationTimestamp="2025-05-13 00:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:25:24.196446269 +0000 UTC m=+32.417119549" watchObservedRunningTime="2025-05-13 00:25:24.197126679 +0000 UTC m=+32.417799949"
May 13 00:25:24.884577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3710542014.mount: Deactivated successfully.
May 13 00:25:24.922142 systemd[1]: Started sshd@7-10.0.0.62:22-10.0.0.1:33778.service - OpenSSH per-connection server daemon (10.0.0.1:33778).
May 13 00:25:24.951998 systemd-networkd[1240]: vethc3ea1214: Gained IPv6LL
May 13 00:25:24.956548 sshd[3536]: Accepted publickey for core from 10.0.0.1 port 33778 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:24.958071 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:24.962198 systemd-logind[1531]: New session 8 of user core.
May 13 00:25:24.973203 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 00:25:25.077909 sshd[3536]: pam_unix(sshd:session): session closed for user core
May 13 00:25:25.085197 systemd[1]: Started sshd@8-10.0.0.62:22-10.0.0.1:33788.service - OpenSSH per-connection server daemon (10.0.0.1:33788).
May 13 00:25:25.085928 systemd[1]: sshd@7-10.0.0.62:22-10.0.0.1:33778.service: Deactivated successfully.
May 13 00:25:25.088012 systemd[1]: session-8.scope: Deactivated successfully.
May 13 00:25:25.090125 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit.
May 13 00:25:25.091188 systemd-logind[1531]: Removed session 8.
May 13 00:25:25.117650 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 33788 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:25.119156 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:25.122931 systemd-logind[1531]: New session 9 of user core.
May 13 00:25:25.131122 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 00:25:25.260930 sshd[3551]: pam_unix(sshd:session): session closed for user core
May 13 00:25:25.272240 systemd[1]: Started sshd@9-10.0.0.62:22-10.0.0.1:33796.service - OpenSSH per-connection server daemon (10.0.0.1:33796).
May 13 00:25:25.272840 systemd[1]: sshd@8-10.0.0.62:22-10.0.0.1:33788.service: Deactivated successfully.
May 13 00:25:25.275581 systemd[1]: session-9.scope: Deactivated successfully.
May 13 00:25:25.277339 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit.
May 13 00:25:25.278843 systemd-logind[1531]: Removed session 9.
May 13 00:25:25.306055 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 33796 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:25.307564 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:25.311836 systemd-logind[1531]: New session 10 of user core.
May 13 00:25:25.322332 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 00:25:25.432007 sshd[3564]: pam_unix(sshd:session): session closed for user core
May 13 00:25:25.436792 systemd[1]: sshd@9-10.0.0.62:22-10.0.0.1:33796.service: Deactivated successfully.
May 13 00:25:25.439535 systemd[1]: session-10.scope: Deactivated successfully.
May 13 00:25:25.440298 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit.
May 13 00:25:25.441508 systemd-logind[1531]: Removed session 10.
May 13 00:25:25.592030 systemd-networkd[1240]: cni0: Gained IPv6LL
May 13 00:25:27.856948 kubelet[2678]: E0513 00:25:27.856908 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:27.858546 containerd[1552]: time="2025-05-13T00:25:27.858506094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6m8mn,Uid:7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf,Namespace:kube-system,Attempt:0,}"
May 13 00:25:27.880145 systemd-networkd[1240]: vethabaf9aa8: Link UP
May 13 00:25:27.882181 kernel: cni0: port 2(vethabaf9aa8) entered blocking state
May 13 00:25:27.882245 kernel: cni0: port 2(vethabaf9aa8) entered disabled state
May 13 00:25:27.883019 kernel: vethabaf9aa8: entered allmulticast mode
May 13 00:25:27.884000 kernel: vethabaf9aa8: entered promiscuous mode
May 13 00:25:27.889193 kernel: cni0: port 2(vethabaf9aa8) entered blocking state
May 13 00:25:27.889259 kernel: cni0: port 2(vethabaf9aa8) entered forwarding state
May 13 00:25:27.889253 systemd-networkd[1240]: vethabaf9aa8: Gained carrier
May 13 00:25:27.891101 containerd[1552]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000ae8e8), "name":"cbr0", "type":"bridge"}
May 13 00:25:27.891101 containerd[1552]: delegateAdd: netconf sent to delegate plugin:
May 13 00:25:27.909476 containerd[1552]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T00:25:27.909377565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:25:27.909476 containerd[1552]: time="2025-05-13T00:25:27.909429984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:25:27.909476 containerd[1552]: time="2025-05-13T00:25:27.909440243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:25:27.909685 containerd[1552]: time="2025-05-13T00:25:27.909530002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:25:27.932747 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:25:27.957172 containerd[1552]: time="2025-05-13T00:25:27.957133408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6m8mn,Uid:7af7020e-bcdf-4b6d-8de2-3ac3b36a45bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0011914b5ff5fd5230ca5fb92eb1bf2c11e16cc69ce619a51f6d89ffbb4a341\""
May 13 00:25:27.957813 kubelet[2678]: E0513 00:25:27.957789 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:27.961284 containerd[1552]: time="2025-05-13T00:25:27.961252132Z" level=info msg="CreateContainer within sandbox \"d0011914b5ff5fd5230ca5fb92eb1bf2c11e16cc69ce619a51f6d89ffbb4a341\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 00:25:27.980112 containerd[1552]: time="2025-05-13T00:25:27.980064524Z" level=info msg="CreateContainer within sandbox \"d0011914b5ff5fd5230ca5fb92eb1bf2c11e16cc69ce619a51f6d89ffbb4a341\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4921bc908ea6902798383c9e7b62277aaa472103c1ed614a419a07caf18d835a\""
May 13 00:25:27.980778 containerd[1552]: time="2025-05-13T00:25:27.980522796Z" level=info msg="StartContainer for \"4921bc908ea6902798383c9e7b62277aaa472103c1ed614a419a07caf18d835a\""
May 13 00:25:28.038563 containerd[1552]: time="2025-05-13T00:25:28.038517670Z" level=info msg="StartContainer for \"4921bc908ea6902798383c9e7b62277aaa472103c1ed614a419a07caf18d835a\" returns successfully"
May 13 00:25:28.190744 kubelet[2678]: E0513 00:25:28.190619 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:28.213166 kubelet[2678]: I0513 00:25:28.213099 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6m8mn" podStartSLOduration=21.213083196 podStartE2EDuration="21.213083196s" podCreationTimestamp="2025-05-13 00:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:25:28.200670281 +0000 UTC m=+36.421343561" watchObservedRunningTime="2025-05-13 00:25:28.213083196 +0000 UTC m=+36.433756486"
May 13 00:25:29.192676 kubelet[2678]: E0513 00:25:29.192635 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:29.560073 systemd-networkd[1240]: vethabaf9aa8: Gained IPv6LL
May 13 00:25:30.194307 kubelet[2678]: E0513 00:25:30.194277 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:30.442141 systemd[1]: Started sshd@10-10.0.0.62:22-10.0.0.1:33808.service - OpenSSH per-connection server daemon (10.0.0.1:33808).
May 13 00:25:30.476359 sshd[3724]: Accepted publickey for core from 10.0.0.1 port 33808 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:30.478331 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:30.482593 systemd-logind[1531]: New session 11 of user core.
May 13 00:25:30.500421 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 00:25:30.625002 sshd[3724]: pam_unix(sshd:session): session closed for user core
May 13 00:25:30.629913 systemd[1]: sshd@10-10.0.0.62:22-10.0.0.1:33808.service: Deactivated successfully.
May 13 00:25:30.632840 systemd[1]: session-11.scope: Deactivated successfully.
May 13 00:25:30.633742 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit.
May 13 00:25:30.634854 systemd-logind[1531]: Removed session 11.
May 13 00:25:32.985393 kubelet[2678]: E0513 00:25:32.985243 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:33.199815 kubelet[2678]: E0513 00:25:33.199778 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:25:35.641273 systemd[1]: Started sshd@11-10.0.0.62:22-10.0.0.1:41202.service - OpenSSH per-connection server daemon (10.0.0.1:41202).
May 13 00:25:35.672596 sshd[3764]: Accepted publickey for core from 10.0.0.1 port 41202 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:35.674255 sshd[3764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:35.678489 systemd-logind[1531]: New session 12 of user core.
May 13 00:25:35.692302 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 00:25:35.796335 sshd[3764]: pam_unix(sshd:session): session closed for user core
May 13 00:25:35.800497 systemd[1]: sshd@11-10.0.0.62:22-10.0.0.1:41202.service: Deactivated successfully.
May 13 00:25:35.803178 systemd[1]: session-12.scope: Deactivated successfully.
May 13 00:25:35.803946 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit.
May 13 00:25:35.804890 systemd-logind[1531]: Removed session 12.
May 13 00:25:40.808225 systemd[1]: Started sshd@12-10.0.0.62:22-10.0.0.1:41214.service - OpenSSH per-connection server daemon (10.0.0.1:41214).
May 13 00:25:40.840062 sshd[3802]: Accepted publickey for core from 10.0.0.1 port 41214 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:40.841820 sshd[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:40.845684 systemd-logind[1531]: New session 13 of user core.
May 13 00:25:40.854201 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 00:25:40.967330 sshd[3802]: pam_unix(sshd:session): session closed for user core
May 13 00:25:40.971555 systemd[1]: sshd@12-10.0.0.62:22-10.0.0.1:41214.service: Deactivated successfully.
May 13 00:25:40.973920 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit.
May 13 00:25:40.974157 systemd[1]: session-13.scope: Deactivated successfully.
May 13 00:25:40.975231 systemd-logind[1531]: Removed session 13.
May 13 00:25:45.982187 systemd[1]: Started sshd@13-10.0.0.62:22-10.0.0.1:57508.service - OpenSSH per-connection server daemon (10.0.0.1:57508).
May 13 00:25:46.014273 sshd[3838]: Accepted publickey for core from 10.0.0.1 port 57508 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:46.015862 sshd[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:46.022374 systemd-logind[1531]: New session 14 of user core.
May 13 00:25:46.036284 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 00:25:46.141920 sshd[3838]: pam_unix(sshd:session): session closed for user core
May 13 00:25:46.150100 systemd[1]: Started sshd@14-10.0.0.62:22-10.0.0.1:57524.service - OpenSSH per-connection server daemon (10.0.0.1:57524).
May 13 00:25:46.150576 systemd[1]: sshd@13-10.0.0.62:22-10.0.0.1:57508.service: Deactivated successfully.
May 13 00:25:46.154244 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit.
May 13 00:25:46.155051 systemd[1]: session-14.scope: Deactivated successfully.
May 13 00:25:46.156157 systemd-logind[1531]: Removed session 14.
May 13 00:25:46.184014 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 57524 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:46.185482 sshd[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:46.189799 systemd-logind[1531]: New session 15 of user core.
May 13 00:25:46.202156 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 00:25:46.371688 sshd[3851]: pam_unix(sshd:session): session closed for user core
May 13 00:25:46.380200 systemd[1]: Started sshd@15-10.0.0.62:22-10.0.0.1:57536.service - OpenSSH per-connection server daemon (10.0.0.1:57536).
May 13 00:25:46.380759 systemd[1]: sshd@14-10.0.0.62:22-10.0.0.1:57524.service: Deactivated successfully.
May 13 00:25:46.384613 systemd[1]: session-15.scope: Deactivated successfully.
May 13 00:25:46.385529 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit.
May 13 00:25:46.386546 systemd-logind[1531]: Removed session 15.
May 13 00:25:46.419284 sshd[3865]: Accepted publickey for core from 10.0.0.1 port 57536 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:46.421177 sshd[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:46.425668 systemd-logind[1531]: New session 16 of user core.
May 13 00:25:46.435133 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 00:25:48.408317 sshd[3865]: pam_unix(sshd:session): session closed for user core
May 13 00:25:48.417234 systemd[1]: Started sshd@16-10.0.0.62:22-10.0.0.1:57550.service - OpenSSH per-connection server daemon (10.0.0.1:57550).
May 13 00:25:48.417785 systemd[1]: sshd@15-10.0.0.62:22-10.0.0.1:57536.service: Deactivated successfully.
May 13 00:25:48.421536 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit.
May 13 00:25:48.422553 systemd[1]: session-16.scope: Deactivated successfully.
May 13 00:25:48.424502 systemd-logind[1531]: Removed session 16.
May 13 00:25:48.451769 sshd[3886]: Accepted publickey for core from 10.0.0.1 port 57550 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:48.453255 sshd[3886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:48.457428 systemd-logind[1531]: New session 17 of user core.
May 13 00:25:48.468168 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 00:25:48.686035 sshd[3886]: pam_unix(sshd:session): session closed for user core
May 13 00:25:48.694278 systemd[1]: Started sshd@17-10.0.0.62:22-10.0.0.1:57554.service - OpenSSH per-connection server daemon (10.0.0.1:57554).
May 13 00:25:48.695692 systemd[1]: sshd@16-10.0.0.62:22-10.0.0.1:57550.service: Deactivated successfully.
May 13 00:25:48.699798 systemd[1]: session-17.scope: Deactivated successfully.
May 13 00:25:48.700490 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit.
May 13 00:25:48.701617 systemd-logind[1531]: Removed session 17.
May 13 00:25:48.727639 sshd[3900]: Accepted publickey for core from 10.0.0.1 port 57554 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:48.729504 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:48.734295 systemd-logind[1531]: New session 18 of user core.
May 13 00:25:48.742140 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 00:25:48.854987 sshd[3900]: pam_unix(sshd:session): session closed for user core
May 13 00:25:48.859699 systemd[1]: sshd@17-10.0.0.62:22-10.0.0.1:57554.service: Deactivated successfully.
May 13 00:25:48.862109 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit.
May 13 00:25:48.862558 systemd[1]: session-18.scope: Deactivated successfully.
May 13 00:25:48.863543 systemd-logind[1531]: Removed session 18.
May 13 00:25:53.870117 systemd[1]: Started sshd@18-10.0.0.62:22-10.0.0.1:57340.service - OpenSSH per-connection server daemon (10.0.0.1:57340).
May 13 00:25:53.901000 sshd[3944]: Accepted publickey for core from 10.0.0.1 port 57340 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:53.902478 sshd[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:53.906228 systemd-logind[1531]: New session 19 of user core.
May 13 00:25:53.917169 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 00:25:54.016769 sshd[3944]: pam_unix(sshd:session): session closed for user core
May 13 00:25:54.020958 systemd[1]: sshd@18-10.0.0.62:22-10.0.0.1:57340.service: Deactivated successfully.
May 13 00:25:54.023343 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit.
May 13 00:25:54.023432 systemd[1]: session-19.scope: Deactivated successfully.
May 13 00:25:54.024620 systemd-logind[1531]: Removed session 19.
May 13 00:25:59.032090 systemd[1]: Started sshd@19-10.0.0.62:22-10.0.0.1:57348.service - OpenSSH per-connection server daemon (10.0.0.1:57348).
May 13 00:25:59.067087 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 57348 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:25:59.068579 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:25:59.072030 systemd-logind[1531]: New session 20 of user core.
May 13 00:25:59.083133 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 00:25:59.182369 sshd[3980]: pam_unix(sshd:session): session closed for user core
May 13 00:25:59.186289 systemd[1]: sshd@19-10.0.0.62:22-10.0.0.1:57348.service: Deactivated successfully.
May 13 00:25:59.188507 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit.
May 13 00:25:59.188553 systemd[1]: session-20.scope: Deactivated successfully.
May 13 00:25:59.189696 systemd-logind[1531]: Removed session 20.
May 13 00:26:04.194115 systemd[1]: Started sshd@20-10.0.0.62:22-10.0.0.1:41064.service - OpenSSH per-connection server daemon (10.0.0.1:41064).
May 13 00:26:04.225280 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 41064 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:26:04.226871 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:26:04.230830 systemd-logind[1531]: New session 21 of user core.
May 13 00:26:04.237390 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 00:26:04.344387 sshd[4016]: pam_unix(sshd:session): session closed for user core
May 13 00:26:04.348751 systemd[1]: sshd@20-10.0.0.62:22-10.0.0.1:41064.service: Deactivated successfully.
May 13 00:26:04.351093 systemd-logind[1531]: Session 21 logged out. Waiting for processes to exit.
May 13 00:26:04.351180 systemd[1]: session-21.scope: Deactivated successfully.
May 13 00:26:04.352266 systemd-logind[1531]: Removed session 21.
May 13 00:26:09.360145 systemd[1]: Started sshd@21-10.0.0.62:22-10.0.0.1:41080.service - OpenSSH per-connection server daemon (10.0.0.1:41080).
May 13 00:26:09.391249 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 41080 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:26:09.392756 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:26:09.396807 systemd-logind[1531]: New session 22 of user core.
May 13 00:26:09.412318 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 00:26:09.515469 sshd[4054]: pam_unix(sshd:session): session closed for user core
May 13 00:26:09.519215 systemd[1]: sshd@21-10.0.0.62:22-10.0.0.1:41080.service: Deactivated successfully.
May 13 00:26:09.521564 systemd-logind[1531]: Session 22 logged out. Waiting for processes to exit.
May 13 00:26:09.521696 systemd[1]: session-22.scope: Deactivated successfully.
May 13 00:26:09.522923 systemd-logind[1531]: Removed session 22.