May 15 23:52:24.397774 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:19:35 -00 2025
May 15 23:52:24.397814 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b
May 15 23:52:24.397831 kernel: BIOS-provided physical RAM map:
May 15 23:52:24.397839 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 15 23:52:24.397847 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 15 23:52:24.397855 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 23:52:24.397864 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 15 23:52:24.397872 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 15 23:52:24.397880 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 23:52:24.397892 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 15 23:52:24.397900 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 23:52:24.397908 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 23:52:24.397926 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 23:52:24.397935 kernel: NX (Execute Disable) protection: active
May 15 23:52:24.397946 kernel: APIC: Static calls initialized
May 15 23:52:24.397964 kernel: SMBIOS 2.8 present.
May 15 23:52:24.397974 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 15 23:52:24.397984 kernel: Hypervisor detected: KVM
May 15 23:52:24.397995 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 23:52:24.398005 kernel: kvm-clock: using sched offset of 4950628684 cycles
May 15 23:52:24.398016 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 23:52:24.398027 kernel: tsc: Detected 2794.748 MHz processor
May 15 23:52:24.398038 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 23:52:24.398049 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 23:52:24.398065 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 15 23:52:24.398076 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 23:52:24.398087 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 23:52:24.398115 kernel: Using GB pages for direct mapping
May 15 23:52:24.398126 kernel: ACPI: Early table checksum verification disabled
May 15 23:52:24.398137 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 15 23:52:24.398148 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:52:24.398158 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:52:24.398169 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:52:24.398185 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 15 23:52:24.398197 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:52:24.398217 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:52:24.398234 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:52:24.398245 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:52:24.398255 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 15 23:52:24.398266 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 15 23:52:24.398284 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 15 23:52:24.398298 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 15 23:52:24.398309 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 15 23:52:24.398320 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 15 23:52:24.398332 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 15 23:52:24.398343 kernel: No NUMA configuration found
May 15 23:52:24.398354 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 15 23:52:24.398370 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 15 23:52:24.398381 kernel: Zone ranges:
May 15 23:52:24.398392 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 23:52:24.398403 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 15 23:52:24.398414 kernel: Normal empty
May 15 23:52:24.398425 kernel: Movable zone start for each node
May 15 23:52:24.398435 kernel: Early memory node ranges
May 15 23:52:24.398446 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 23:52:24.398457 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 15 23:52:24.398468 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 15 23:52:24.398483 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 23:52:24.398500 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 23:52:24.398511 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 15 23:52:24.398522 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 23:52:24.398533 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 23:52:24.398544 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 23:52:24.398555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 23:52:24.398566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 23:52:24.398577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 23:52:24.398593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 23:52:24.398604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 23:52:24.398615 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 23:52:24.398625 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 23:52:24.398636 kernel: TSC deadline timer available
May 15 23:52:24.398646 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 15 23:52:24.398658 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 23:52:24.398669 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 23:52:24.398684 kernel: kvm-guest: setup PV sched yield
May 15 23:52:24.398700 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 15 23:52:24.398710 kernel: Booting paravirtualized kernel on KVM
May 15 23:52:24.398721 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 23:52:24.398732 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 15 23:52:24.398923 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 15 23:52:24.398946 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 15 23:52:24.398958 kernel: pcpu-alloc: [0] 0 1 2 3
May 15 23:52:24.398969 kernel: kvm-guest: PV spinlocks enabled
May 15 23:52:24.398979 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 23:52:24.398998 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b
May 15 23:52:24.399010 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 23:52:24.399020 kernel: random: crng init done
May 15 23:52:24.399030 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 23:52:24.399041 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 23:52:24.399051 kernel: Fallback order for Node 0: 0
May 15 23:52:24.399061 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 15 23:52:24.399071 kernel: Policy zone: DMA32
May 15 23:52:24.399086 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 23:52:24.399246 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22752K rodata, 42988K init, 2204K bss, 136900K reserved, 0K cma-reserved)
May 15 23:52:24.399257 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 23:52:24.399267 kernel: ftrace: allocating 37950 entries in 149 pages
May 15 23:52:24.399278 kernel: ftrace: allocated 149 pages with 4 groups
May 15 23:52:24.399288 kernel: Dynamic Preempt: voluntary
May 15 23:52:24.399298 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 23:52:24.399310 kernel: rcu: RCU event tracing is enabled.
May 15 23:52:24.399321 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 23:52:24.399338 kernel: Trampoline variant of Tasks RCU enabled.
May 15 23:52:24.399350 kernel: Rude variant of Tasks RCU enabled.
May 15 23:52:24.399361 kernel: Tracing variant of Tasks RCU enabled.
May 15 23:52:24.399372 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 23:52:24.399387 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 23:52:24.399398 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 15 23:52:24.399409 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 23:52:24.399420 kernel: Console: colour VGA+ 80x25
May 15 23:52:24.399431 kernel: printk: console [ttyS0] enabled
May 15 23:52:24.399443 kernel: ACPI: Core revision 20230628
May 15 23:52:24.399459 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 23:52:24.399471 kernel: APIC: Switch to symmetric I/O mode setup
May 15 23:52:24.399482 kernel: x2apic enabled
May 15 23:52:24.399493 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 23:52:24.399505 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 15 23:52:24.399516 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 15 23:52:24.399528 kernel: kvm-guest: setup PV IPIs
May 15 23:52:24.399557 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 23:52:24.399568 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 15 23:52:24.399579 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 15 23:52:24.399590 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 23:52:24.399605 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 23:52:24.399616 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 23:52:24.399627 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 23:52:24.399638 kernel: Spectre V2 : Mitigation: Retpolines
May 15 23:52:24.399649 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 15 23:52:24.399664 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 15 23:52:24.399674 kernel: RETBleed: Mitigation: untrained return thunk
May 15 23:52:24.399690 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 23:52:24.399702 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 23:52:24.399714 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 15 23:52:24.399726 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 15 23:52:24.399738 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 15 23:52:24.399749 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 23:52:24.399766 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 23:52:24.399777 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 23:52:24.399788 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 23:52:24.399799 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 15 23:52:24.399811 kernel: Freeing SMP alternatives memory: 32K
May 15 23:52:24.399822 kernel: pid_max: default: 32768 minimum: 301
May 15 23:52:24.399833 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 23:52:24.399844 kernel: landlock: Up and running.
May 15 23:52:24.399855 kernel: SELinux: Initializing.
May 15 23:52:24.399870 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:52:24.399881 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:52:24.399892 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 15 23:52:24.399903 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:52:24.399915 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:52:24.399926 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:52:24.399938 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 23:52:24.399950 kernel: ... version: 0
May 15 23:52:24.399966 kernel: ... bit width: 48
May 15 23:52:24.399983 kernel: ... generic registers: 6
May 15 23:52:24.399995 kernel: ... value mask: 0000ffffffffffff
May 15 23:52:24.400007 kernel: ... max period: 00007fffffffffff
May 15 23:52:24.400018 kernel: ... fixed-purpose events: 0
May 15 23:52:24.400030 kernel: ... event mask: 000000000000003f
May 15 23:52:24.400042 kernel: signal: max sigframe size: 1776
May 15 23:52:24.400052 kernel: rcu: Hierarchical SRCU implementation.
May 15 23:52:24.400064 kernel: rcu: Max phase no-delay instances is 400.
May 15 23:52:24.400075 kernel: smp: Bringing up secondary CPUs ...
May 15 23:52:24.400107 kernel: smpboot: x86: Booting SMP configuration:
May 15 23:52:24.400119 kernel: .... node #0, CPUs: #1 #2 #3
May 15 23:52:24.400130 kernel: smp: Brought up 1 node, 4 CPUs
May 15 23:52:24.400142 kernel: smpboot: Max logical packages: 1
May 15 23:52:24.400154 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 15 23:52:24.400165 kernel: devtmpfs: initialized
May 15 23:52:24.400177 kernel: x86/mm: Memory block size: 128MB
May 15 23:52:24.400189 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 23:52:24.400201 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 23:52:24.400230 kernel: pinctrl core: initialized pinctrl subsystem
May 15 23:52:24.400242 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 23:52:24.400253 kernel: audit: initializing netlink subsys (disabled)
May 15 23:52:24.400265 kernel: audit: type=2000 audit(1747353142.952:1): state=initialized audit_enabled=0 res=1
May 15 23:52:24.400276 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 23:52:24.400288 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 23:52:24.400300 kernel: cpuidle: using governor menu
May 15 23:52:24.400312 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 23:52:24.400324 kernel: dca service started, version 1.12.1
May 15 23:52:24.400340 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 15 23:52:24.400352 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 15 23:52:24.400364 kernel: PCI: Using configuration type 1 for base access
May 15 23:52:24.400376 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 23:52:24.400388 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 23:52:24.400399 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 15 23:52:24.400412 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 23:52:24.400424 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 23:52:24.400435 kernel: ACPI: Added _OSI(Module Device)
May 15 23:52:24.400451 kernel: ACPI: Added _OSI(Processor Device)
May 15 23:52:24.400462 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 23:52:24.400472 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 23:52:24.400483 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 23:52:24.400494 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 15 23:52:24.400505 kernel: ACPI: Interpreter enabled
May 15 23:52:24.400517 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 23:52:24.400528 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 23:52:24.400540 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 23:52:24.400557 kernel: PCI: Using E820 reservations for host bridge windows
May 15 23:52:24.400568 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 23:52:24.400580 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 23:52:24.400950 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 23:52:24.401157 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 23:52:24.401349 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 23:52:24.401368 kernel: PCI host bridge to bus 0000:00
May 15 23:52:24.401581 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 23:52:24.401766 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 23:52:24.401939 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 23:52:24.402133 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 15 23:52:24.402305 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 23:52:24.402464 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 15 23:52:24.402637 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 23:52:24.402890 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 15 23:52:24.403130 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 15 23:52:24.403328 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 15 23:52:24.403517 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 15 23:52:24.403697 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 15 23:52:24.403882 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 23:52:24.404112 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 15 23:52:24.404318 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 15 23:52:24.404496 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 15 23:52:24.404669 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 15 23:52:24.404897 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 15 23:52:24.405086 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 15 23:52:24.405312 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 15 23:52:24.405496 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 15 23:52:24.405709 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 15 23:52:24.405883 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 15 23:52:24.406061 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 15 23:52:24.406281 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 15 23:52:24.406455 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 15 23:52:24.406657 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 15 23:52:24.406851 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 23:52:24.407078 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 15 23:52:24.407310 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 15 23:52:24.407505 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 15 23:52:24.407711 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 15 23:52:24.407884 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 15 23:52:24.407903 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 23:52:24.407921 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 23:52:24.407934 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 23:52:24.407946 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 23:52:24.407958 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 23:52:24.407970 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 23:52:24.407982 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 23:52:24.407994 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 23:52:24.408006 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 23:52:24.408018 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 23:52:24.408035 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 23:52:24.408047 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 23:52:24.408058 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 23:52:24.408070 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 23:52:24.408081 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 23:52:24.408109 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 23:52:24.408121 kernel: iommu: Default domain type: Translated
May 15 23:52:24.408132 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 23:52:24.408143 kernel: PCI: Using ACPI for IRQ routing
May 15 23:52:24.408161 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 23:52:24.408174 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 15 23:52:24.408185 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 15 23:52:24.408389 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 23:52:24.408582 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 23:52:24.408778 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 23:52:24.408799 kernel: vgaarb: loaded
May 15 23:52:24.408811 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 23:52:24.408830 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 23:52:24.408844 kernel: clocksource: Switched to clocksource kvm-clock
May 15 23:52:24.408859 kernel: VFS: Disk quotas dquot_6.6.0
May 15 23:52:24.408873 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 23:52:24.408887 kernel: pnp: PnP ACPI init
May 15 23:52:24.409134 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 15 23:52:24.409155 kernel: pnp: PnP ACPI: found 6 devices
May 15 23:52:24.409168 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 23:52:24.409180 kernel: NET: Registered PF_INET protocol family
May 15 23:52:24.409198 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 23:52:24.409220 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 23:52:24.409233 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 23:52:24.409244 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 23:52:24.409256 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 23:52:24.409268 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 23:52:24.409280 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:52:24.409291 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:52:24.409307 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 23:52:24.409320 kernel: NET: Registered PF_XDP protocol family
May 15 23:52:24.409509 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 23:52:24.409680 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 23:52:24.409853 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 23:52:24.410068 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 15 23:52:24.410269 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 23:52:24.410443 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 15 23:52:24.410462 kernel: PCI: CLS 0 bytes, default 64
May 15 23:52:24.410481 kernel: Initialise system trusted keyrings
May 15 23:52:24.410493 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 23:52:24.410506 kernel: Key type asymmetric registered
May 15 23:52:24.410518 kernel: Asymmetric key parser 'x509' registered
May 15 23:52:24.410530 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 15 23:52:24.410543 kernel: io scheduler mq-deadline registered
May 15 23:52:24.410555 kernel: io scheduler kyber registered
May 15 23:52:24.410567 kernel: io scheduler bfq registered
May 15 23:52:24.410579 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 23:52:24.410597 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 23:52:24.410610 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 23:52:24.410622 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 15 23:52:24.410634 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 23:52:24.410647 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 23:52:24.410659 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 23:52:24.410671 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 23:52:24.410683 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 23:52:24.410899 kernel: rtc_cmos 00:04: RTC can wake from S4
May 15 23:52:24.410926 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 23:52:24.411118 kernel: rtc_cmos 00:04: registered as rtc0
May 15 23:52:24.411299 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T23:52:23 UTC (1747353143)
May 15 23:52:24.411470 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 23:52:24.411489 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 23:52:24.411501 kernel: NET: Registered PF_INET6 protocol family
May 15 23:52:24.411512 kernel: Segment Routing with IPv6
May 15 23:52:24.411524 kernel: In-situ OAM (IOAM) with IPv6
May 15 23:52:24.411542 kernel: NET: Registered PF_PACKET protocol family
May 15 23:52:24.411554 kernel: Key type dns_resolver registered
May 15 23:52:24.411566 kernel: IPI shorthand broadcast: enabled
May 15 23:52:24.411577 kernel: sched_clock: Marking stable (1022003890, 285615737)->(1665615449, -357995822)
May 15 23:52:24.411589 kernel: registered taskstats version 1
May 15 23:52:24.411601 kernel: Loading compiled-in X.509 certificates
May 15 23:52:24.411613 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 563478d245b598189519397611f5bddee97f3fc1'
May 15 23:52:24.411625 kernel: Key type .fscrypt registered
May 15 23:52:24.411636 kernel: Key type fscrypt-provisioning registered
May 15 23:52:24.411652 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 23:52:24.411664 kernel: ima: Allocated hash algorithm: sha1
May 15 23:52:24.411676 kernel: ima: No architecture policies found
May 15 23:52:24.411688 kernel: clk: Disabling unused clocks
May 15 23:52:24.411699 kernel: Freeing unused kernel image (initmem) memory: 42988K
May 15 23:52:24.411711 kernel: Write protecting the kernel read-only data: 36864k
May 15 23:52:24.411723 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
May 15 23:52:24.411734 kernel: Run /init as init process
May 15 23:52:24.411750 kernel: with arguments:
May 15 23:52:24.411761 kernel: /init
May 15 23:52:24.411773 kernel: with environment:
May 15 23:52:24.411784 kernel: HOME=/
May 15 23:52:24.411795 kernel: TERM=linux
May 15 23:52:24.411807 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 23:52:24.411821 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 23:52:24.411837 systemd[1]: Detected virtualization kvm.
May 15 23:52:24.411854 systemd[1]: Detected architecture x86-64.
May 15 23:52:24.411866 systemd[1]: Running in initrd.
May 15 23:52:24.411879 systemd[1]: No hostname configured, using default hostname.
May 15 23:52:24.411891 systemd[1]: Hostname set to .
May 15 23:52:24.411903 systemd[1]: Initializing machine ID from VM UUID.
May 15 23:52:24.411914 systemd[1]: Queued start job for default target initrd.target.
May 15 23:52:24.411926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:52:24.411938 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:52:24.411955 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 23:52:24.411968 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 23:52:24.411997 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 23:52:24.412014 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 23:52:24.412029 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 23:52:24.412045 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 23:52:24.412059 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:52:24.412072 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 23:52:24.412084 systemd[1]: Reached target paths.target - Path Units.
May 15 23:52:24.412116 systemd[1]: Reached target slices.target - Slice Units.
May 15 23:52:24.412129 systemd[1]: Reached target swap.target - Swaps.
May 15 23:52:24.412141 systemd[1]: Reached target timers.target - Timer Units.
May 15 23:52:24.412155 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 23:52:24.412173 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 23:52:24.412186 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 23:52:24.412199 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 15 23:52:24.412221 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:52:24.412235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 23:52:24.412247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:52:24.412259 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:52:24.412273 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 23:52:24.412286 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 23:52:24.412305 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 23:52:24.412319 systemd[1]: Starting systemd-fsck-usr.service... May 15 23:52:24.412332 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:52:24.412345 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:52:24.412358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:52:24.412371 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 23:52:24.412384 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:52:24.412433 systemd-journald[194]: Collecting audit messages is disabled. May 15 23:52:24.412470 systemd[1]: Finished systemd-fsck-usr.service. May 15 23:52:24.412484 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 23:52:24.412502 systemd-journald[194]: Journal started May 15 23:52:24.412530 systemd-journald[194]: Runtime Journal (/run/log/journal/faa979cf8dad4b0a9c5d8479af0a1ac6) is 6.0M, max 48.4M, 42.3M free. May 15 23:52:24.445155 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:52:24.445199 systemd-modules-load[195]: Inserted module 'overlay' May 15 23:52:24.452364 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:52:24.511630 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 15 23:52:24.511674 kernel: Bridge firewalling registered May 15 23:52:24.459199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:52:24.486530 systemd-modules-load[195]: Inserted module 'br_netfilter' May 15 23:52:24.518073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:52:24.518887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:52:24.541377 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:52:24.587242 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:52:24.590190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 23:52:24.591149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:52:24.610041 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:52:24.686481 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:52:24.694456 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 23:52:24.695359 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:52:24.699712 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 15 23:52:24.718336 dracut-cmdline[231]: dracut-dracut-053 May 15 23:52:24.722837 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 15 23:52:24.742953 systemd-resolved[227]: Positive Trust Anchors: May 15 23:52:24.742973 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:52:24.743021 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:52:24.746749 systemd-resolved[227]: Defaulting to hostname 'linux'. May 15 23:52:24.748379 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:52:24.799375 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:52:24.932163 kernel: SCSI subsystem initialized May 15 23:52:24.943136 kernel: Loading iSCSI transport class v2.0-870. May 15 23:52:24.955136 kernel: iscsi: registered transport (tcp) May 15 23:52:25.036136 kernel: iscsi: registered transport (qla4xxx) May 15 23:52:25.036357 kernel: QLogic iSCSI HBA Driver May 15 23:52:25.094805 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
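The negative trust anchors systemd-resolved lists above cover, among others, the reverse zones for RFC 1918 private address space, so PTR lookups there are never expected to be DNSSEC-signed. A sketch of checking whether an address's reverse name falls under one of those anchors (only the private-IPv4 subset is modeled here):

```python
import ipaddress

# Subset of the negative trust anchors from the log: 10/8, 172.16/12
# (sixteen zones), and 192.168/16.
NEGATIVE_ANCHORS = (
    ["10.in-addr.arpa", "168.192.in-addr.arpa"]
    + ["%d.172.in-addr.arpa" % o for o in range(16, 32)]
)

def covered_by_negative_anchor(addr: str) -> bool:
    """True if the PTR name for addr sits under a negative anchor."""
    ptr = ipaddress.ip_address(addr).reverse_pointer
    return any(ptr == z or ptr.endswith("." + z) for z in NEGATIVE_ANCHORS)

print(covered_by_negative_anchor("10.0.0.64"))  # True
print(covered_by_negative_anchor("8.8.8.8"))    # False
```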
May 15 23:52:25.134432 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 23:52:25.195179 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 23:52:25.195289 kernel: device-mapper: uevent: version 1.0.3 May 15 23:52:25.198134 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 23:52:25.280158 kernel: raid6: avx2x4 gen() 24377 MB/s May 15 23:52:25.297147 kernel: raid6: avx2x2 gen() 28016 MB/s May 15 23:52:25.374146 kernel: raid6: avx2x1 gen() 23167 MB/s May 15 23:52:25.374277 kernel: raid6: using algorithm avx2x2 gen() 28016 MB/s May 15 23:52:25.442157 kernel: raid6: .... xor() 15495 MB/s, rmw enabled May 15 23:52:25.442276 kernel: raid6: using avx2x2 recovery algorithm May 15 23:52:25.469163 kernel: xor: automatically using best checksumming function avx May 15 23:52:25.641138 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 23:52:25.662253 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 23:52:25.710393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:52:25.726281 systemd-udevd[413]: Using default interface naming scheme 'v255'. May 15 23:52:25.762045 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:52:25.767261 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 23:52:25.787729 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation May 15 23:52:25.837290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:52:25.861419 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:52:25.936572 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
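The `raid6:` lines above are the kernel benchmarking each available SIMD generator and keeping the fastest (here `avx2x2` at 28016 MB/s). The selection itself is just an argmax over the measured throughputs, as this toy sketch shows:

```python
def pick_fastest(results: dict) -> tuple:
    """Given {algorithm: MB/s} benchmark results, return the winner,
    mirroring how the raid6 driver chooses its gen() routine."""
    name = max(results, key=results.get)
    return name, results[name]

# Throughputs measured in the log above
bench = {"avx2x4": 24377, "avx2x2": 28016, "avx2x1": 23167}
print(pick_fastest(bench))  # ('avx2x2', 28016)
```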
May 15 23:52:25.975413 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 15 23:52:25.974020 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 23:52:25.987878 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 23:52:25.988117 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 23:52:25.988130 kernel: GPT:9289727 != 19775487 May 15 23:52:25.988140 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 23:52:25.988150 kernel: GPT:9289727 != 19775487 May 15 23:52:25.988169 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 23:52:25.988180 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:52:25.991676 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 23:52:26.017907 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:52:26.018822 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:52:26.019490 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:52:26.036554 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 23:52:26.038675 kernel: cryptd: max_cpu_qlen set to 1000 May 15 23:52:26.048138 kernel: libata version 3.00 loaded. May 15 23:52:26.060736 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 23:52:26.079023 kernel: AVX2 version of gcm_enc/dec engaged. May 15 23:52:26.081125 kernel: AES CTR mode by8 optimization enabled May 15 23:52:26.083262 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:52:26.083350 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:52:26.086006 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:52:26.086453 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
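The `GPT:9289727 != 19775487` warnings above mean the backup GPT header records a position that is no longer the last LBA of the disk, which is typical after a raw image has been grown. The expected location is simply `total_sectors - 1`; a sketch of the arithmetic behind the warning:

```python
def gpt_backup_mismatch(total_sectors: int, alt_header_lba: int):
    """The backup GPT header belongs at the last LBA (total_sectors - 1).

    Returns (expected_lba, recorded_lba, sectors_added_since_creation).
    """
    expected = total_sectors - 1
    return expected, alt_header_lba, expected - alt_header_lba

# Values from the virtio_blk probe above: 19775488 512-byte blocks
expected, recorded, delta = gpt_backup_mismatch(19775488, 9289727)
print(expected, recorded)           # 19775487 9289727
print(delta * 512 // 2**20, "MiB")  # 5120 MiB -- growth the header predates
```

Moving the backup header to the new end of the disk (e.g. with GNU Parted, as the kernel suggests, or `sgdisk -e`) clears the warning.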
May 15 23:52:26.086511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:52:26.098416 kernel: BTRFS: device fsid da1480a3-a7d8-4e12-bbe1-1257540eb9ae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (458) May 15 23:52:26.086878 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:52:26.100111 kernel: ahci 0000:00:1f.2: version 3.0 May 15 23:52:26.100332 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 23:52:26.101372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:52:26.142449 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 15 23:52:26.142678 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 23:52:26.142861 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (464) May 15 23:52:26.114408 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 23:52:26.145587 kernel: scsi host0: ahci May 15 23:52:26.147145 kernel: scsi host1: ahci May 15 23:52:26.147449 kernel: scsi host2: ahci May 15 23:52:26.152254 kernel: scsi host3: ahci May 15 23:52:26.152516 kernel: scsi host4: ahci May 15 23:52:26.153200 kernel: scsi host5: ahci May 15 23:52:26.155181 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 15 23:52:26.155234 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 15 23:52:26.155251 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 15 23:52:26.155623 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
May 15 23:52:26.163180 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 15 23:52:26.163212 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 15 23:52:26.163226 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 15 23:52:26.181510 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 23:52:26.224471 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 23:52:26.230113 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:52:26.244293 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 23:52:26.245041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:52:26.247380 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:52:26.274563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:52:26.297822 disk-uuid[568]: Primary Header is updated. May 15 23:52:26.297822 disk-uuid[568]: Secondary Entries is updated. May 15 23:52:26.297822 disk-uuid[568]: Secondary Header is updated. 
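The `disk-uuid` messages that follow record the service rewriting the GPT so that cloned images do not share a disk identity. The core of that step is generating a fresh random GUID; a sketch under that assumption (the real service also rewrites both GPT headers and recomputes their CRCs, which is what the "Primary Header is updated" lines report):

```python
import uuid

def new_disk_guid() -> str:
    """Generate a fresh random GPT disk GUID in canonical form."""
    return str(uuid.uuid4()).upper()

# Prints a random GUID in the same 8-4-4-4-12 format as the
# PARTUUID values elsewhere in this log.
print(new_disk_guid())
```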
May 15 23:52:26.304141 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:52:26.309195 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:52:26.471077 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 23:52:26.471202 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 23:52:26.471236 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 23:52:26.471250 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 23:52:26.472113 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 15 23:52:26.473114 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 23:52:26.474128 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 15 23:52:26.475447 kernel: ata3.00: applying bridge limits May 15 23:52:26.475480 kernel: ata3.00: configured for UDMA/100 May 15 23:52:26.476134 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 23:52:26.520182 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 15 23:52:26.520589 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 23:52:26.534199 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 23:52:27.313168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:52:27.313864 disk-uuid[578]: The operation has completed successfully. May 15 23:52:27.363689 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 23:52:27.363880 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 23:52:27.383448 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 23:52:27.390357 sh[593]: Success May 15 23:52:27.417143 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 15 23:52:27.472014 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 23:52:27.494395 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 23:52:27.496996 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
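`verity-setup.service` above maps `/dev/mapper/usr` with dm-verity, so every read from /usr is checked against the `verity.usrhash` root hash on the kernel command line. A toy, single-level illustration of that idea (the real on-disk format is a multi-level tree of salted, block-aligned digests, so this is only a sketch):

```python
import hashlib

def verity_check(blocks, expected_root: str) -> bool:
    """Toy dm-verity-style check: hash each data block, hash the
    concatenated digests into a root, and compare to the trusted value."""
    leaves = b"".join(hashlib.sha256(b).digest() for b in blocks)
    return hashlib.sha256(leaves).hexdigest() == expected_root

blocks = [b"A" * 4096, b"B" * 4096]
root = hashlib.sha256(
    b"".join(hashlib.sha256(b).digest() for b in blocks)).hexdigest()
print(verity_check(blocks, root))                       # True
print(verity_check([b"A" * 4096, b"X" * 4096], root))   # False -- tamper detected
```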
May 15 23:52:27.540724 kernel: BTRFS info (device dm-0): first mount of filesystem da1480a3-a7d8-4e12-bbe1-1257540eb9ae May 15 23:52:27.540801 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 23:52:27.540819 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 23:52:27.541937 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 23:52:27.542892 kernel: BTRFS info (device dm-0): using free space tree May 15 23:52:27.555186 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 23:52:27.556818 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 23:52:27.570522 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 23:52:27.572839 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 23:52:27.596224 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:52:27.596307 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 23:52:27.596325 kernel: BTRFS info (device vda6): using free space tree May 15 23:52:27.601610 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:52:27.615713 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 23:52:27.619228 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:52:27.661883 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 23:52:27.671367 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 23:52:27.871407 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:52:28.028680 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 15 23:52:28.048623 ignition[699]: Ignition 2.20.0 May 15 23:52:28.048640 ignition[699]: Stage: fetch-offline May 15 23:52:28.048730 ignition[699]: no configs at "/usr/lib/ignition/base.d" May 15 23:52:28.048745 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:52:28.048872 ignition[699]: parsed url from cmdline: "" May 15 23:52:28.048877 ignition[699]: no config URL provided May 15 23:52:28.048884 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" May 15 23:52:28.048897 ignition[699]: no config at "/usr/lib/ignition/user.ign" May 15 23:52:28.048938 ignition[699]: op(1): [started] loading QEMU firmware config module May 15 23:52:28.057246 systemd-networkd[779]: lo: Link UP May 15 23:52:28.048945 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 23:52:28.057251 systemd-networkd[779]: lo: Gained carrier May 15 23:52:28.059175 systemd-networkd[779]: Enumeration completed May 15 23:52:28.059590 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:52:28.059595 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:52:28.060588 systemd-networkd[779]: eth0: Link UP May 15 23:52:28.060594 systemd-networkd[779]: eth0: Gained carrier May 15 23:52:28.060603 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:52:28.061555 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:52:28.075062 ignition[699]: op(1): [finished] loading QEMU firmware config module May 15 23:52:28.070615 systemd[1]: Reached target network.target - Network. 
May 15 23:52:28.089298 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:52:28.096186 ignition[699]: parsing config with SHA512: 7253d17ebd21a94bb7344e2aeff2d150d77af909e42e959d06372388c0d84df0fab80e21ec811fda9f2a5438d8b3cf03abed03521d82d8ed5af6324573155a2b May 15 23:52:28.114647 unknown[699]: fetched base config from "system" May 15 23:52:28.114667 unknown[699]: fetched user config from "qemu" May 15 23:52:28.115110 ignition[699]: fetch-offline: fetch-offline passed May 15 23:52:28.117870 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 23:52:28.115200 ignition[699]: Ignition finished successfully May 15 23:52:28.119646 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 23:52:28.127387 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 23:52:28.149501 ignition[785]: Ignition 2.20.0 May 15 23:52:28.149517 ignition[785]: Stage: kargs May 15 23:52:28.149726 ignition[785]: no configs at "/usr/lib/ignition/base.d" May 15 23:52:28.149763 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:52:28.151670 ignition[785]: kargs: kargs passed May 15 23:52:28.151736 ignition[785]: Ignition finished successfully May 15 23:52:28.157613 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 23:52:28.169570 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
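The `parsing config with SHA512: ...` line above is Ignition logging the digest of the rendered config it fetched from the QEMU firmware interface. The same 128-hex-character value can be reproduced with a plain SHA-512 over the config bytes; a sketch, assuming you have the rendered config on hand:

```python
import hashlib

def config_digest(config_bytes: bytes) -> str:
    """SHA-512 hex digest of a rendered Ignition config, matching the
    'parsing config with SHA512: ...' journal line."""
    return hashlib.sha512(config_bytes).hexdigest()

# Illustrative config body, not the one from this boot
digest = config_digest(b'{"ignition": {"version": "3.4.0"}}')
print(len(digest))  # 128 hex characters, as in the log
```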
May 15 23:52:28.184195 ignition[794]: Ignition 2.20.0 May 15 23:52:28.184214 ignition[794]: Stage: disks May 15 23:52:28.184410 ignition[794]: no configs at "/usr/lib/ignition/base.d" May 15 23:52:28.184422 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:52:28.185696 ignition[794]: disks: disks passed May 15 23:52:28.185766 ignition[794]: Ignition finished successfully May 15 23:52:28.192303 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 23:52:28.194780 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 23:52:28.195477 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 23:52:28.197689 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:52:28.202205 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:52:28.202911 systemd[1]: Reached target basic.target - Basic System. May 15 23:52:28.218436 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 23:52:28.239218 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 23:52:28.260436 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 23:52:28.277370 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 23:52:28.409127 kernel: EXT4-fs (vda9): mounted filesystem 13a141f5-2ff0-46d9-bee3-974c86536128 r/w with ordered data mode. Quota mode: none. May 15 23:52:28.409470 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 23:52:28.410776 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 23:52:28.430402 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 23:52:28.433220 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 23:52:28.434680 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
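The `systemd-fsck` summary above (`ROOT: clean, 14/553520 files, 52654/553472 blocks`) reports inode and block usage as used/total pairs. A sketch of turning that line into percentages:

```python
import re

def parse_fsck(line: str):
    """Parse an e2fsck summary line into (inode %, block %) usage."""
    m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
    used_f, total_f, used_b, total_b = map(int, m.groups())
    return round(100 * used_f / total_f, 2), round(100 * used_b / total_b, 2)

line = "ROOT: clean, 14/553520 files, 52654/553472 blocks"
print(parse_fsck(line))  # (0.0, 9.51) -- a nearly empty root filesystem
```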
May 15 23:52:28.434739 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 23:52:28.449942 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) May 15 23:52:28.449983 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:52:28.449995 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 23:52:28.450006 kernel: BTRFS info (device vda6): using free space tree May 15 23:52:28.434772 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:52:28.453719 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:52:28.443722 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 23:52:28.451131 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 23:52:28.459176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 23:52:28.504755 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory May 15 23:52:28.511390 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory May 15 23:52:28.521024 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory May 15 23:52:28.526221 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory May 15 23:52:28.657913 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 23:52:28.673288 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 23:52:28.679023 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 23:52:28.686673 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 23:52:28.688562 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:52:28.721981 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 15 23:52:28.762719 ignition[930]: INFO : Ignition 2.20.0 May 15 23:52:28.762719 ignition[930]: INFO : Stage: mount May 15 23:52:28.773894 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:52:28.773894 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:52:28.773894 ignition[930]: INFO : mount: mount passed May 15 23:52:28.773894 ignition[930]: INFO : Ignition finished successfully May 15 23:52:28.780393 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 23:52:28.792309 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 23:52:28.803818 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 23:52:28.818147 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) May 15 23:52:28.820828 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 15 23:52:28.820904 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 23:52:28.820920 kernel: BTRFS info (device vda6): using free space tree May 15 23:52:28.824147 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:52:28.826482 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 23:52:28.918027 ignition[957]: INFO : Ignition 2.20.0 May 15 23:52:28.918027 ignition[957]: INFO : Stage: files May 15 23:52:28.920167 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:52:28.920167 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:52:28.922815 ignition[957]: DEBUG : files: compiled without relabeling support, skipping May 15 23:52:28.924975 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 23:52:28.924975 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 23:52:28.928645 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 23:52:28.930661 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 23:52:28.932785 unknown[957]: wrote ssh authorized keys file for user: core May 15 23:52:28.934343 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 23:52:28.936167 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 23:52:28.936167 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 15 23:52:28.984019 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 23:52:29.210145 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 23:52:29.210145 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 23:52:29.217030 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 
23:52:29.219813 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 15 23:52:29.224352 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 15 23:52:29.690303 systemd-networkd[779]: eth0: Gained IPv6LL May 15 23:52:29.925141 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 23:52:30.887332 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 15 23:52:30.887332 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 23:52:30.895946 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 23:52:30.905306 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 23:52:30.905306 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 23:52:30.905306 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 23:52:30.905306 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 23:52:30.905306 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 23:52:30.905306 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 15 23:52:30.905306 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 15 23:52:31.026777 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 23:52:31.036273 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 23:52:31.041445 
ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 15 23:52:31.041445 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 15 23:52:31.041445 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 15 23:52:31.041445 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 23:52:31.041445 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 23:52:31.041445 ignition[957]: INFO : files: files passed May 15 23:52:31.041445 ignition[957]: INFO : Ignition finished successfully May 15 23:52:31.065470 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 23:52:31.081458 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 23:52:31.090338 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 23:52:31.098761 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 23:52:31.098956 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 23:52:31.112131 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory May 15 23:52:31.122959 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 23:52:31.122959 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 23:52:31.128326 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 23:52:31.130785 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
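The files-stage operations above (writing files, creating the `kubernetes.raw` extension link, enabling `prepare-helm.service`, disabling `coreos-metadata.service`) are all driven by a declarative Ignition config. A minimal, hypothetical v3-style config that would produce operations of this shape; the field names follow the Ignition config spec, but the paths and contents here are illustrative, not the config from this boot:

```python
import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [{
            "path": "/etc/flatcar/update.conf",
            # data: URL with "GROUP=stable\n"; contents are illustrative
            "contents": {"source": "data:,GROUP%3Dstable%0A"},
            "mode": 420,  # decimal for octal 0644
        }]
    },
    "systemd": {
        "units": [{"name": "prepare-helm.service", "enabled": True}]
    },
}
print(json.dumps(config, indent=2))
```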
May 15 23:52:31.134673 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 23:52:31.147649 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 23:52:31.194330 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 23:52:31.194512 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 23:52:31.196802 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 23:52:31.199320 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 23:52:31.200936 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 23:52:31.211034 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 23:52:31.234055 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 23:52:31.249489 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 23:52:31.269845 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 23:52:31.274647 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 23:52:31.277408 systemd[1]: Stopped target timers.target - Timer Units.
May 15 23:52:31.277923 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 23:52:31.278186 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 23:52:31.283823 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 23:52:31.284692 systemd[1]: Stopped target basic.target - Basic System.
May 15 23:52:31.285139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 23:52:31.285735 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 23:52:31.286212 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 23:52:31.286783 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 23:52:31.287517 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 23:52:31.287943 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 23:52:31.288570 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 23:52:31.288972 systemd[1]: Stopped target swap.target - Swaps.
May 15 23:52:31.289602 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 23:52:31.289817 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 23:52:31.310196 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 23:52:31.310806 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:52:31.311514 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 23:52:31.318039 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:52:31.319792 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 23:52:31.320063 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 23:52:31.325021 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 23:52:31.326640 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 23:52:31.328895 systemd[1]: Stopped target paths.target - Path Units.
May 15 23:52:31.329604 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 23:52:31.334493 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:52:31.338065 systemd[1]: Stopped target slices.target - Slice Units.
May 15 23:52:31.339361 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 23:52:31.342977 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 23:52:31.343294 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 23:52:31.344166 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 23:52:31.344336 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 23:52:31.347636 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 23:52:31.347887 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 23:52:31.349641 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 23:52:31.349850 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 23:52:31.364494 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 23:52:31.366588 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 23:52:31.367622 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 23:52:31.367802 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 23:52:31.369984 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 23:52:31.370186 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 23:52:31.381528 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 23:52:31.381710 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 23:52:31.405412 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 23:52:31.412970 ignition[1011]: INFO : Ignition 2.20.0
May 15 23:52:31.412970 ignition[1011]: INFO : Stage: umount
May 15 23:52:31.415407 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:52:31.415407 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:52:31.420397 ignition[1011]: INFO : umount: umount passed
May 15 23:52:31.420397 ignition[1011]: INFO : Ignition finished successfully
May 15 23:52:31.427035 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 23:52:31.427235 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 23:52:31.428082 systemd[1]: Stopped target network.target - Network.
May 15 23:52:31.433378 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 23:52:31.433504 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 23:52:31.435593 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 23:52:31.435672 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 23:52:31.436352 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 23:52:31.436410 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 23:52:31.436677 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 23:52:31.436731 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 23:52:31.437251 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 23:52:31.437672 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 23:52:31.450720 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 23:52:31.450969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 23:52:31.451161 systemd-networkd[779]: eth0: DHCPv6 lease lost
May 15 23:52:31.453718 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 23:52:31.453803 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:52:31.456038 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 23:52:31.456258 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 23:52:31.457340 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 23:52:31.457409 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:52:31.466462 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 23:52:31.468993 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 23:52:31.469147 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 23:52:31.469764 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 23:52:31.469817 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 23:52:31.470077 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 23:52:31.470151 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 23:52:31.476232 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:52:31.490474 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 23:52:31.490710 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 23:52:31.512186 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 23:52:31.512495 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:52:31.513885 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 23:52:31.513961 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 23:52:31.520302 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 23:52:31.521420 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:52:31.524530 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 23:52:31.524607 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 23:52:31.528174 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 23:52:31.528247 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 23:52:31.531651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 23:52:31.531736 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:52:31.547424 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 23:52:31.561010 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 23:52:31.561174 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:52:31.564315 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 15 23:52:31.565900 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:52:31.570763 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 23:52:31.570846 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:52:31.575135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 23:52:31.576399 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:52:31.579727 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 23:52:31.581216 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 23:52:32.011659 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 23:52:32.011882 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 23:52:32.016164 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 23:52:32.018888 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 23:52:32.019037 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 23:52:32.039445 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 23:52:32.050783 systemd[1]: Switching root.
May 15 23:52:32.088202 systemd-journald[194]: Journal stopped
May 15 23:52:33.456694 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
May 15 23:52:33.456791 kernel: SELinux: policy capability network_peer_controls=1
May 15 23:52:33.456819 kernel: SELinux: policy capability open_perms=1
May 15 23:52:33.456835 kernel: SELinux: policy capability extended_socket_class=1
May 15 23:52:33.456856 kernel: SELinux: policy capability always_check_network=0
May 15 23:52:33.456871 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 23:52:33.456888 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 23:52:33.456903 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 23:52:33.456918 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 23:52:33.456934 kernel: audit: type=1403 audit(1747353152.539:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 23:52:33.456967 systemd[1]: Successfully loaded SELinux policy in 47.322ms.
May 15 23:52:33.457002 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.506ms.
May 15 23:52:33.457020 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 23:52:33.457036 systemd[1]: Detected virtualization kvm.
May 15 23:52:33.457052 systemd[1]: Detected architecture x86-64.
May 15 23:52:33.457067 systemd[1]: Detected first boot.
May 15 23:52:33.457083 systemd[1]: Initializing machine ID from VM UUID.
May 15 23:52:33.457116 zram_generator::config[1056]: No configuration found.
May 15 23:52:33.457133 systemd[1]: Populated /etc with preset unit settings.
May 15 23:52:33.457151 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 23:52:33.457174 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 23:52:33.457190 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 23:52:33.457206 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 23:52:33.457222 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 23:52:33.457239 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 23:52:33.457259 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 23:52:33.457275 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 23:52:33.457291 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 23:52:33.457311 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 23:52:33.457327 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 23:52:33.457343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:52:33.457358 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:52:33.457374 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 23:52:33.457390 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 23:52:33.457406 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 23:52:33.457422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 23:52:33.457446 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 23:52:33.457465 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:52:33.457481 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 23:52:33.457497 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 23:52:33.457516 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 23:52:33.457532 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 23:52:33.457549 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 23:52:33.457565 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 23:52:33.457581 systemd[1]: Reached target slices.target - Slice Units.
May 15 23:52:33.457601 systemd[1]: Reached target swap.target - Swaps.
May 15 23:52:33.457617 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 23:52:33.457633 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 23:52:33.457649 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:52:33.457665 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 23:52:33.457680 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:52:33.457696 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 23:52:33.457711 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 23:52:33.457727 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 23:52:33.457746 systemd[1]: Mounting media.mount - External Media Directory...
May 15 23:52:33.457762 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:52:33.457778 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 23:52:33.457794 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 23:52:33.457810 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 23:52:33.457826 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 23:52:33.457841 systemd[1]: Reached target machines.target - Containers.
May 15 23:52:33.457860 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 23:52:33.457883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:52:33.457902 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 23:52:33.457921 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 23:52:33.457956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:52:33.457977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 23:52:33.458004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:52:33.459332 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 23:52:33.459356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:52:33.459373 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 23:52:33.459394 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 23:52:33.459411 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 23:52:33.459427 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 23:52:33.459443 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 23:52:33.459458 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 23:52:33.459474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 23:52:33.459491 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 23:52:33.459508 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 23:52:33.459526 kernel: fuse: init (API version 7.39)
May 15 23:52:33.459542 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 23:52:33.459559 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 23:52:33.459578 systemd[1]: Stopped verity-setup.service.
May 15 23:52:33.459594 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:52:33.459610 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 23:52:33.459655 systemd-journald[1123]: Collecting audit messages is disabled.
May 15 23:52:33.459686 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 23:52:33.459706 systemd[1]: Mounted media.mount - External Media Directory.
May 15 23:52:33.459722 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 23:52:33.459739 systemd-journald[1123]: Journal started
May 15 23:52:33.459771 systemd-journald[1123]: Runtime Journal (/run/log/journal/faa979cf8dad4b0a9c5d8479af0a1ac6) is 6.0M, max 48.4M, 42.3M free.
May 15 23:52:33.179215 systemd[1]: Queued start job for default target multi-user.target.
May 15 23:52:33.201022 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 23:52:33.201765 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 23:52:33.464204 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 23:52:33.465468 kernel: ACPI: bus type drm_connector registered
May 15 23:52:33.466981 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 23:52:33.468844 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 23:52:33.470466 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:52:33.472667 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 23:52:33.472877 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 23:52:33.474818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:52:33.476529 kernel: loop: module loaded
May 15 23:52:33.475218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:52:33.477294 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 23:52:33.477617 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 23:52:33.479406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:52:33.479642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:52:33.481596 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 23:52:33.481908 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 23:52:33.485342 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:52:33.485590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:52:33.487609 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 23:52:33.489346 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 23:52:33.491466 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 23:52:33.493582 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 23:52:33.514619 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 23:52:33.525424 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 23:52:33.528669 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 23:52:33.529900 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 23:52:33.529956 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 23:52:33.532516 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 15 23:52:33.535701 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 23:52:33.543142 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 23:52:33.545696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:52:33.566404 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 23:52:33.573605 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 23:52:33.575242 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 23:52:33.587685 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 23:52:33.591172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 23:52:33.594331 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 23:52:33.603146 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 23:52:33.607432 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 23:52:33.615363 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 23:52:33.630432 systemd-journald[1123]: Time spent on flushing to /var/log/journal/faa979cf8dad4b0a9c5d8479af0a1ac6 is 37.193ms for 952 entries.
May 15 23:52:33.630432 systemd-journald[1123]: System Journal (/var/log/journal/faa979cf8dad4b0a9c5d8479af0a1ac6) is 8.0M, max 195.6M, 187.6M free.
May 15 23:52:33.794723 systemd-journald[1123]: Received client request to flush runtime journal.
May 15 23:52:33.794806 kernel: loop0: detected capacity change from 0 to 140992
May 15 23:52:33.794845 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 23:52:33.627813 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 23:52:33.630483 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 23:52:33.639334 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 23:52:33.646927 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 23:52:33.656752 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 23:52:33.672913 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 15 23:52:33.679249 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 23:52:33.704328 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 15 23:52:33.747914 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 23:52:33.752131 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 23:52:33.753966 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 15 23:52:33.765391 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
May 15 23:52:33.765411 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
May 15 23:52:33.781429 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:52:33.791576 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 23:52:33.805165 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 23:52:33.819154 kernel: loop1: detected capacity change from 0 to 224512
May 15 23:52:33.848554 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 23:52:33.857518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 23:52:33.873147 kernel: loop2: detected capacity change from 0 to 138184
May 15 23:52:33.898416 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
May 15 23:52:33.898449 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
May 15 23:52:33.906197 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:52:33.938144 kernel: loop3: detected capacity change from 0 to 140992
May 15 23:52:33.957136 kernel: loop4: detected capacity change from 0 to 224512
May 15 23:52:33.970449 kernel: loop5: detected capacity change from 0 to 138184
May 15 23:52:33.980011 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 15 23:52:33.980667 (sd-merge)[1197]: Merged extensions into '/usr'.
May 15 23:52:33.986239 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 23:52:33.986268 systemd[1]: Reloading...
May 15 23:52:34.056122 zram_generator::config[1223]: No configuration found.
May 15 23:52:34.141688 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 23:52:34.207758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:52:34.269213 systemd[1]: Reloading finished in 282 ms.
May 15 23:52:34.387057 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 23:52:34.389464 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 23:52:34.427782 systemd[1]: Starting ensure-sysext.service...
May 15 23:52:34.430953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 23:52:34.442672 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
May 15 23:52:34.442730 systemd[1]: Reloading...
May 15 23:52:34.475565 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 23:52:34.475988 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 23:52:34.477709 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 23:52:34.478065 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
May 15 23:52:34.478314 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
May 15 23:52:34.483739 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
May 15 23:52:34.483890 systemd-tmpfiles[1262]: Skipping /boot
May 15 23:52:34.517969 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
May 15 23:52:34.518189 systemd-tmpfiles[1262]: Skipping /boot
May 15 23:52:34.630599 zram_generator::config[1289]: No configuration found.
May 15 23:52:34.867156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:52:34.939605 systemd[1]: Reloading finished in 496 ms.
May 15 23:52:34.964976 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 23:52:34.978075 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:52:35.004928 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 23:52:35.010715 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 23:52:35.014738 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 23:52:35.019428 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 23:52:35.024081 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:52:35.032525 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 23:52:35.040685 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:52:35.041700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:52:35.054549 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:52:35.065614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:52:35.082423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:52:35.085014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:52:35.094304 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 23:52:35.095678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:52:35.096389 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
May 15 23:52:35.098508 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 23:52:35.107628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:52:35.108442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:52:35.111968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:52:35.112226 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:52:35.114550 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:52:35.115244 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:52:35.120280 augenrules[1357]: No rules
May 15 23:52:35.122451 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 23:52:35.122785 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 23:52:35.137532 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 23:52:35.141310 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:52:35.144056 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:52:35.152420 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 23:52:35.154309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 23:52:35.156266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 23:52:35.162452 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 23:52:35.170307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 23:52:35.176241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 23:52:35.178420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 23:52:35.181127 augenrules[1372]: /sbin/augenrules: No change
May 15 23:52:35.191504 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 23:52:35.200983 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 23:52:35.211234 augenrules[1411]: No rules
May 15 23:52:35.202486 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 23:52:35.203285 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 23:52:35.207839 systemd[1]: Finished ensure-sysext.service.
May 15 23:52:35.209502 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 23:52:35.212354 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 23:52:35.215165 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 23:52:35.217976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 23:52:35.218214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 23:52:35.270545 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 23:52:35.270850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 23:52:35.273043 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 23:52:35.273295 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 23:52:35.279742 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 15 23:52:35.282261 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 23:52:35.288131 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1407)
May 15 23:52:35.293304 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 23:52:35.296259 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 23:52:35.296774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 23:52:35.297028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 23:52:35.299845 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 23:52:35.325552 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 23:52:35.363389 systemd-resolved[1332]: Positive Trust Anchors:
May 15 23:52:35.363413 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 23:52:35.363446 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 23:52:35.371811 systemd-resolved[1332]: Defaulting to hostname 'linux'.
May 15 23:52:35.375393 systemd-networkd[1405]: lo: Link UP
May 15 23:52:35.375753 systemd-networkd[1405]: lo: Gained carrier
May 15 23:52:35.377710 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 23:52:35.379939 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 23:52:35.381872 systemd-networkd[1405]: Enumeration completed
May 15 23:52:35.382921 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 23:52:35.384557 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:52:35.384799 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 23:52:35.385037 systemd[1]: Reached target network.target - Network.
May 15 23:52:35.389019 systemd-networkd[1405]: eth0: Link UP
May 15 23:52:35.389290 systemd-networkd[1405]: eth0: Gained carrier
May 15 23:52:35.389354 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:52:35.394577 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 23:52:35.402441 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 23:52:35.408330 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 23:52:35.411193 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 23:52:35.433603 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 23:52:36.394997 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 23:52:36.395068 systemd-timesyncd[1431]: Initial clock synchronization to Thu 2025-05-15 23:52:36.394870 UTC.
May 15 23:52:36.395069 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 23:52:36.395119 systemd-resolved[1332]: Clock change detected. Flushing caches.
May 15 23:52:36.398296 systemd[1]: Reached target time-set.target - System Time Set.
May 15 23:52:36.434377 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 15 23:52:36.437373 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 15 23:52:36.451015 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 15 23:52:36.451319 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 15 23:52:36.452164 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 15 23:52:36.491381 kernel: ACPI: button: Power Button [PWRF]
May 15 23:52:36.504872 kernel: mousedev: PS/2 mouse device common for all mice
May 15 23:52:36.534859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:52:36.634804 kernel: kvm_amd: TSC scaling supported
May 15 23:52:36.634920 kernel: kvm_amd: Nested Virtualization enabled
May 15 23:52:36.634935 kernel: kvm_amd: Nested Paging enabled
May 15 23:52:36.635983 kernel: kvm_amd: LBR virtualization supported
May 15 23:52:36.636022 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 15 23:52:36.636642 kernel: kvm_amd: Virtual GIF supported
May 15 23:52:36.664397 kernel: EDAC MC: Ver: 3.0.0
May 15 23:52:36.702633 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 23:52:36.719397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:52:36.733816 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 23:52:36.748249 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 23:52:36.784703 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 23:52:36.786613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 23:52:36.788261 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 23:52:36.789933 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 23:52:36.791585 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 23:52:36.793528 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 23:52:36.795096 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 23:52:36.797021 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 23:52:36.798561 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 23:52:36.798606 systemd[1]: Reached target paths.target - Path Units.
May 15 23:52:36.799729 systemd[1]: Reached target timers.target - Timer Units.
May 15 23:52:36.802328 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 23:52:36.805788 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 23:52:36.815152 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 23:52:36.818953 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 23:52:36.821322 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 23:52:36.823096 systemd[1]: Reached target sockets.target - Socket Units.
May 15 23:52:36.824294 systemd[1]: Reached target basic.target - Basic System.
May 15 23:52:36.825439 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 23:52:36.825479 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 23:52:36.827152 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 23:52:36.830176 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 23:52:36.835530 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 23:52:36.840966 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 23:52:36.843006 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 23:52:36.844457 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 23:52:36.847449 jq[1460]: false
May 15 23:52:36.847652 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 23:52:36.852332 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 23:52:36.857609 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 23:52:36.870539 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 23:52:36.872238 extend-filesystems[1461]: Found loop3
May 15 23:52:36.873566 extend-filesystems[1461]: Found loop4
May 15 23:52:36.873566 extend-filesystems[1461]: Found loop5
May 15 23:52:36.873566 extend-filesystems[1461]: Found sr0
May 15 23:52:36.873566 extend-filesystems[1461]: Found vda
May 15 23:52:36.873566 extend-filesystems[1461]: Found vda1
May 15 23:52:36.873566 extend-filesystems[1461]: Found vda2
May 15 23:52:36.873566 extend-filesystems[1461]: Found vda3
May 15 23:52:36.873566 extend-filesystems[1461]: Found usr
May 15 23:52:36.873566 extend-filesystems[1461]: Found vda4
May 15 23:52:36.873566 extend-filesystems[1461]: Found vda6
May 15 23:52:36.873566 extend-filesystems[1461]: Found vda7
May 15 23:52:36.873566 extend-filesystems[1461]: Found vda9
May 15 23:52:36.873566 extend-filesystems[1461]: Checking size of /dev/vda9
May 15 23:52:36.891095 extend-filesystems[1461]: Resized partition /dev/vda9
May 15 23:52:36.892755 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024)
May 15 23:52:36.933642 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 15 23:52:36.933677 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1374)
May 15 23:52:36.899196 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 23:52:36.910973 dbus-daemon[1459]: [system] SELinux support is enabled
May 15 23:52:36.902534 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 23:52:36.903145 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 23:52:36.965487 update_engine[1478]: I20250515 23:52:36.962167 1478 main.cc:92] Flatcar Update Engine starting
May 15 23:52:36.965487 update_engine[1478]: I20250515 23:52:36.963881 1478 update_check_scheduler.cc:74] Next update check in 8m8s
May 15 23:52:36.906734 systemd[1]: Starting update-engine.service - Update Engine...
May 15 23:52:36.914491 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 23:52:36.966197 jq[1480]: true
May 15 23:52:36.917312 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 23:52:36.921404 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 15 23:52:36.930226 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 23:52:36.930646 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 23:52:36.931242 systemd[1]: motdgen.service: Deactivated successfully.
May 15 23:52:36.931541 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 23:52:36.940171 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 23:52:36.940830 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 23:52:36.972271 jq[1491]: true
May 15 23:52:36.980305 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 23:52:36.981621 systemd[1]: Started update-engine.service - Update Engine.
May 15 23:52:36.983222 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 23:52:36.983269 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 23:52:36.997759 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 23:52:36.997796 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 23:52:37.001090 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 23:52:37.019586 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 15 23:52:37.033494 tar[1484]: linux-amd64/LICENSE
May 15 23:52:37.066804 systemd-logind[1476]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 23:52:37.067704 tar[1484]: linux-amd64/helm
May 15 23:52:37.066842 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 23:52:37.070032 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 23:52:37.072392 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 23:52:37.072392 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 23:52:37.072392 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 15 23:52:37.086418 extend-filesystems[1461]: Resized filesystem in /dev/vda9
May 15 23:52:37.072675 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 23:52:37.072983 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 15 23:52:37.073442 systemd-logind[1476]: New seat seat0.
May 15 23:52:37.082028 systemd[1]: Started systemd-logind.service - User Login Management.
May 15 23:52:37.100472 bash[1510]: Updated "/home/core/.ssh/authorized_keys"
May 15 23:52:37.108473 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 23:52:37.112911 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 15 23:52:37.223794 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 23:52:37.291576 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 15 23:52:37.332965 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 15 23:52:37.344594 systemd[1]: issuegen.service: Deactivated successfully.
May 15 23:52:37.344953 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 15 23:52:37.359161 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 15 23:52:37.411876 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 15 23:52:37.433031 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 15 23:52:37.444092 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 15 23:52:37.446368 systemd[1]: Reached target getty.target - Login Prompts.
May 15 23:52:37.630680 containerd[1492]: time="2025-05-15T23:52:37.630477876Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 15 23:52:37.665389 containerd[1492]: time="2025-05-15T23:52:37.664966868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 23:52:37.667872 containerd[1492]: time="2025-05-15T23:52:37.667823005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 23:52:37.667872 containerd[1492]: time="2025-05-15T23:52:37.667858892Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 23:52:37.667998 containerd[1492]: time="2025-05-15T23:52:37.667881044Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 23:52:37.668253 containerd[1492]: time="2025-05-15T23:52:37.668214690Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 15 23:52:37.668311 containerd[1492]: time="2025-05-15T23:52:37.668248443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 15 23:52:37.668431 containerd[1492]: time="2025-05-15T23:52:37.668397112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:52:37.668431 containerd[1492]: time="2025-05-15T23:52:37.668421047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 23:52:37.668706 containerd[1492]: time="2025-05-15T23:52:37.668672037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:52:37.668706 containerd[1492]: time="2025-05-15T23:52:37.668695251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 23:52:37.668790 containerd[1492]: time="2025-05-15T23:52:37.668714577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:52:37.668790 containerd[1492]: time="2025-05-15T23:52:37.668728343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 23:52:37.668909 containerd[1492]: time="2025-05-15T23:52:37.668877172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 23:52:37.669248 containerd[1492]: time="2025-05-15T23:52:37.669206039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 23:52:37.669442 containerd[1492]: time="2025-05-15T23:52:37.669387239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 23:52:37.669442 containerd[1492]: time="2025-05-15T23:52:37.669418187Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 23:52:37.669647 containerd[1492]: time="2025-05-15T23:52:37.669613373Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 23:52:37.669746 containerd[1492]: time="2025-05-15T23:52:37.669708351Z" level=info msg="metadata content store policy set" policy=shared
May 15 23:52:37.687707 systemd-networkd[1405]: eth0: Gained IPv6LL
May 15 23:52:37.692093 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 15 23:52:37.721130 systemd[1]: Reached target network-online.target - Network is Online.
May 15 23:52:37.740891 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 15 23:52:37.771681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:52:37.774835 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 15 23:52:37.826591 tar[1484]: linux-amd64/README.md
May 15 23:52:37.850182 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 15 23:52:37.880609 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 15 23:52:37.880956 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 15 23:52:37.913885 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 15 23:52:37.932419 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 15 23:52:38.054975 containerd[1492]: time="2025-05-15T23:52:38.054812476Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 23:52:38.054975 containerd[1492]: time="2025-05-15T23:52:38.054929847Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 23:52:38.054975 containerd[1492]: time="2025-05-15T23:52:38.054951197Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 15 23:52:38.054975 containerd[1492]: time="2025-05-15T23:52:38.054975332Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 15 23:52:38.054975 containerd[1492]: time="2025-05-15T23:52:38.054991572Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 23:52:38.055308 containerd[1492]: time="2025-05-15T23:52:38.055242263Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 23:52:38.055594 containerd[1492]: time="2025-05-15T23:52:38.055561491Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055748863Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055780402Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055798646Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055815969Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055833742Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055849171Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055873026Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055891981Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055915285Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055930453Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055946323Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.055977722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056007 containerd[1492]: time="2025-05-15T23:52:38.056009822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056031934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056049246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056065587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056084242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056112555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056131340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056161236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056183228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056202013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056219165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056236748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056258058Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056287183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056311418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056422 containerd[1492]: time="2025-05-15T23:52:38.056328540Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 23:52:38.056843 containerd[1492]: time="2025-05-15T23:52:38.056445379Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 15 23:52:38.056843 containerd[1492]: time="2025-05-15T23:52:38.056475406Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 15 23:52:38.056843 containerd[1492]: time="2025-05-15T23:52:38.056490544Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 23:52:38.056843 containerd[1492]: time="2025-05-15T23:52:38.056507756Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 15 23:52:38.056843 containerd[1492]: time="2025-05-15T23:52:38.056521773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 23:52:38.056843 containerd[1492]: time="2025-05-15T23:52:38.056540838Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 15 23:52:38.056843 containerd[1492]: time="2025-05-15T23:52:38.056555185Z" level=info msg="NRI interface is disabled by configuration."
May 15 23:52:38.056843 containerd[1492]: time="2025-05-15T23:52:38.056569933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 15 23:52:38.057022 containerd[1492]: time="2025-05-15T23:52:38.056953352Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 23:52:38.057190 containerd[1492]: time="2025-05-15T23:52:38.057032190Z" level=info msg="Connect containerd service"
May 15 23:52:38.057190 containerd[1492]: time="2025-05-15T23:52:38.057069019Z" level=info msg="using legacy CRI server"
May 15 23:52:38.057190 containerd[1492]: time="2025-05-15T23:52:38.057085600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 15 23:52:38.057339 containerd[1492]: time="2025-05-15T23:52:38.057283331Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 23:52:38.058273 containerd[1492]: time="2025-05-15T23:52:38.058208476Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized:
failed to load cni config" May 15 23:52:38.058499 containerd[1492]: time="2025-05-15T23:52:38.058433589Z" level=info msg="Start subscribing containerd event" May 15 23:52:38.058557 containerd[1492]: time="2025-05-15T23:52:38.058540499Z" level=info msg="Start recovering state" May 15 23:52:38.058752 containerd[1492]: time="2025-05-15T23:52:38.058731387Z" level=info msg="Start event monitor" May 15 23:52:38.058804 containerd[1492]: time="2025-05-15T23:52:38.058783295Z" level=info msg="Start snapshots syncer" May 15 23:52:38.058833 containerd[1492]: time="2025-05-15T23:52:38.058799705Z" level=info msg="Start cni network conf syncer for default" May 15 23:52:38.058833 containerd[1492]: time="2025-05-15T23:52:38.058813892Z" level=info msg="Start streaming server" May 15 23:52:38.058980 containerd[1492]: time="2025-05-15T23:52:38.058734493Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 23:52:38.059031 containerd[1492]: time="2025-05-15T23:52:38.059004770Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 23:52:38.059147 containerd[1492]: time="2025-05-15T23:52:38.059113284Z" level=info msg="containerd successfully booted in 0.431858s" May 15 23:52:38.059497 systemd[1]: Started containerd.service - containerd container runtime. May 15 23:52:39.992990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:52:40.011731 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:52:40.012668 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 23:52:40.014247 systemd[1]: Startup finished in 1.311s (kernel) + 8.663s (initrd) + 6.561s (userspace) = 16.536s. May 15 23:52:40.085117 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
May 15 23:52:40.094933 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:42856.service - OpenSSH per-connection server daemon (10.0.0.1:42856). May 15 23:52:40.164856 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 42856 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk May 15 23:52:40.169998 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:40.184987 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 23:52:40.193774 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 23:52:40.197449 systemd-logind[1476]: New session 1 of user core. May 15 23:52:40.233181 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 23:52:40.245095 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 23:52:40.251161 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 23:52:40.454989 systemd[1589]: Queued start job for default target default.target. May 15 23:52:40.482693 systemd[1589]: Created slice app.slice - User Application Slice. May 15 23:52:40.482750 systemd[1589]: Reached target paths.target - Paths. May 15 23:52:40.482773 systemd[1589]: Reached target timers.target - Timers. May 15 23:52:40.485183 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 23:52:40.510061 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 23:52:40.510282 systemd[1589]: Reached target sockets.target - Sockets. May 15 23:52:40.510310 systemd[1589]: Reached target basic.target - Basic System. May 15 23:52:40.510415 systemd[1589]: Reached target default.target - Main User Target. May 15 23:52:40.510466 systemd[1589]: Startup finished in 249ms. May 15 23:52:40.510704 systemd[1]: Started user@500.service - User Manager for UID 500. 
May 15 23:52:40.512836 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 23:52:40.583718 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:42864.service - OpenSSH per-connection server daemon (10.0.0.1:42864). May 15 23:52:40.655148 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 42864 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk May 15 23:52:40.658557 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:40.666550 systemd-logind[1476]: New session 2 of user core. May 15 23:52:40.679707 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 23:52:40.745840 sshd[1602]: Connection closed by 10.0.0.1 port 42864 May 15 23:52:40.746411 sshd-session[1600]: pam_unix(sshd:session): session closed for user core May 15 23:52:40.759131 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:42864.service: Deactivated successfully. May 15 23:52:40.761331 systemd[1]: session-2.scope: Deactivated successfully. May 15 23:52:40.763849 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit. May 15 23:52:40.771794 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:42868.service - OpenSSH per-connection server daemon (10.0.0.1:42868). May 15 23:52:40.773980 systemd-logind[1476]: Removed session 2. May 15 23:52:40.837445 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 42868 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk May 15 23:52:40.839874 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:40.846094 systemd-logind[1476]: New session 3 of user core. May 15 23:52:40.856653 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 15 23:52:40.863343 kubelet[1573]: E0515 23:52:40.863284 1573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:52:40.869157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:52:40.869462 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:52:40.869884 systemd[1]: kubelet.service: Consumed 2.466s CPU time. May 15 23:52:40.912301 sshd[1610]: Connection closed by 10.0.0.1 port 42868 May 15 23:52:40.912972 sshd-session[1607]: pam_unix(sshd:session): session closed for user core May 15 23:52:40.931793 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:42868.service: Deactivated successfully. May 15 23:52:40.934894 systemd[1]: session-3.scope: Deactivated successfully. May 15 23:52:40.937805 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit. May 15 23:52:40.952900 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:42882.service - OpenSSH per-connection server daemon (10.0.0.1:42882). May 15 23:52:40.954320 systemd-logind[1476]: Removed session 3. May 15 23:52:40.992647 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 42882 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk May 15 23:52:40.995225 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:41.001294 systemd-logind[1476]: New session 4 of user core. May 15 23:52:41.010670 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 15 23:52:41.068980 sshd[1618]: Connection closed by 10.0.0.1 port 42882 May 15 23:52:41.069585 sshd-session[1616]: pam_unix(sshd:session): session closed for user core May 15 23:52:41.085386 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:42882.service: Deactivated successfully. May 15 23:52:41.087682 systemd[1]: session-4.scope: Deactivated successfully. May 15 23:52:41.089340 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit. May 15 23:52:41.099875 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:42898.service - OpenSSH per-connection server daemon (10.0.0.1:42898). May 15 23:52:41.101070 systemd-logind[1476]: Removed session 4. May 15 23:52:41.134397 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 42898 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk May 15 23:52:41.136536 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:52:41.141970 systemd-logind[1476]: New session 5 of user core. May 15 23:52:41.152594 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 23:52:41.214969 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 23:52:41.215364 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:52:41.711684 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 23:52:41.711825 (dockerd)[1646]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 23:52:42.379664 dockerd[1646]: time="2025-05-15T23:52:42.379561309Z" level=info msg="Starting up" May 15 23:52:43.207583 dockerd[1646]: time="2025-05-15T23:52:43.207501699Z" level=info msg="Loading containers: start." 
May 15 23:52:43.455401 kernel: Initializing XFRM netlink socket May 15 23:52:43.567796 systemd-networkd[1405]: docker0: Link UP May 15 23:52:43.640614 dockerd[1646]: time="2025-05-15T23:52:43.640523243Z" level=info msg="Loading containers: done." May 15 23:52:43.663852 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3193950109-merged.mount: Deactivated successfully. May 15 23:52:43.666292 dockerd[1646]: time="2025-05-15T23:52:43.666230924Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 23:52:43.666416 dockerd[1646]: time="2025-05-15T23:52:43.666389571Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 15 23:52:43.666566 dockerd[1646]: time="2025-05-15T23:52:43.666539392Z" level=info msg="Daemon has completed initialization" May 15 23:52:43.732248 dockerd[1646]: time="2025-05-15T23:52:43.732102850Z" level=info msg="API listen on /run/docker.sock" May 15 23:52:43.732334 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 23:52:44.889587 containerd[1492]: time="2025-05-15T23:52:44.889538339Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 15 23:52:46.549955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1776389699.mount: Deactivated successfully. 
May 15 23:52:47.635579 containerd[1492]: time="2025-05-15T23:52:47.635499206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:47.636561 containerd[1492]: time="2025-05-15T23:52:47.636513479Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 15 23:52:47.638990 containerd[1492]: time="2025-05-15T23:52:47.638921496Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:47.642989 containerd[1492]: time="2025-05-15T23:52:47.642956604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:47.644380 containerd[1492]: time="2025-05-15T23:52:47.644288913Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 2.754702965s" May 15 23:52:47.644437 containerd[1492]: time="2025-05-15T23:52:47.644390354Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 15 23:52:47.645132 containerd[1492]: time="2025-05-15T23:52:47.645105325Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 15 23:52:49.972976 containerd[1492]: time="2025-05-15T23:52:49.972876187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:49.979761 containerd[1492]: time="2025-05-15T23:52:49.979687814Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 15 23:52:50.059206 containerd[1492]: time="2025-05-15T23:52:50.059125667Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:50.120851 containerd[1492]: time="2025-05-15T23:52:50.120722385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:50.122031 containerd[1492]: time="2025-05-15T23:52:50.121879385Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 2.476741789s" May 15 23:52:50.122031 containerd[1492]: time="2025-05-15T23:52:50.121930581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 15 23:52:50.122561 containerd[1492]: time="2025-05-15T23:52:50.122535475Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 15 23:52:50.909296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 23:52:50.921801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:52:51.139612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 23:52:51.146768 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:52:51.478578 kubelet[1909]: E0515 23:52:51.478500 1909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:52:51.485054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:52:51.485336 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:52:52.653443 containerd[1492]: time="2025-05-15T23:52:52.653379055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:52.654232 containerd[1492]: time="2025-05-15T23:52:52.654183113Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 15 23:52:52.655704 containerd[1492]: time="2025-05-15T23:52:52.655658500Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:52.658791 containerd[1492]: time="2025-05-15T23:52:52.658757122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:52.660130 containerd[1492]: time="2025-05-15T23:52:52.660093860Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 2.537526765s" May 15 23:52:52.660180 containerd[1492]: time="2025-05-15T23:52:52.660131039Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 15 23:52:52.660743 containerd[1492]: time="2025-05-15T23:52:52.660704926Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 15 23:52:57.745785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640097688.mount: Deactivated successfully. May 15 23:52:58.223416 containerd[1492]: time="2025-05-15T23:52:58.223327967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:58.224552 containerd[1492]: time="2025-05-15T23:52:58.224503311Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 15 23:52:58.226081 containerd[1492]: time="2025-05-15T23:52:58.226045534Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:58.228360 containerd[1492]: time="2025-05-15T23:52:58.228302778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:52:58.229102 containerd[1492]: time="2025-05-15T23:52:58.229064888Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 5.568315899s" May 15 23:52:58.229102 containerd[1492]: time="2025-05-15T23:52:58.229098681Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 15 23:52:58.229734 containerd[1492]: time="2025-05-15T23:52:58.229700560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 23:52:58.914104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341525883.mount: Deactivated successfully. May 15 23:53:00.864257 containerd[1492]: time="2025-05-15T23:53:00.864149653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:00.865029 containerd[1492]: time="2025-05-15T23:53:00.864974861Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 15 23:53:00.866662 containerd[1492]: time="2025-05-15T23:53:00.866608776Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:00.870292 containerd[1492]: time="2025-05-15T23:53:00.870218336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:00.871544 containerd[1492]: time="2025-05-15T23:53:00.871498658Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.641764435s" May 15 23:53:00.871544 containerd[1492]: time="2025-05-15T23:53:00.871540657Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 23:53:00.872768 containerd[1492]: time="2025-05-15T23:53:00.872723736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 23:53:01.422449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527275156.mount: Deactivated successfully. May 15 23:53:01.431198 containerd[1492]: time="2025-05-15T23:53:01.431124556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:01.432495 containerd[1492]: time="2025-05-15T23:53:01.432431468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 23:53:01.434408 containerd[1492]: time="2025-05-15T23:53:01.434362329Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:01.437036 containerd[1492]: time="2025-05-15T23:53:01.436982795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:01.437842 containerd[1492]: time="2025-05-15T23:53:01.437785681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 565.028352ms" May 15 
23:53:01.437842 containerd[1492]: time="2025-05-15T23:53:01.437831957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 23:53:01.438760 containerd[1492]: time="2025-05-15T23:53:01.438546137Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 23:53:01.659289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 23:53:01.677770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:53:01.872169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:53:01.879509 (kubelet)[1993]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:53:01.975075 kubelet[1993]: E0515 23:53:01.974853 1993 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:53:01.980600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:53:01.980878 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:53:02.841861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2456013811.mount: Deactivated successfully. 
May 15 23:53:06.758305 containerd[1492]: time="2025-05-15T23:53:06.758178106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:06.759880 containerd[1492]: time="2025-05-15T23:53:06.759670395Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 15 23:53:06.762164 containerd[1492]: time="2025-05-15T23:53:06.762088831Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:06.868598 containerd[1492]: time="2025-05-15T23:53:06.868505869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:06.870386 containerd[1492]: time="2025-05-15T23:53:06.870190119Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.431580893s" May 15 23:53:06.870641 containerd[1492]: time="2025-05-15T23:53:06.870504148Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 15 23:53:10.469652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:53:10.486397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:53:10.537041 systemd[1]: Reloading requested from client PID 2085 ('systemctl') (unit session-5.scope)... May 15 23:53:10.537222 systemd[1]: Reloading... 
May 15 23:53:10.662399 zram_generator::config[2124]: No configuration found. May 15 23:53:11.132506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:53:11.233694 systemd[1]: Reloading finished in 695 ms. May 15 23:53:11.287967 systemd[1]: kubelet.service: Deactivated successfully. May 15 23:53:11.288296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:53:11.291527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:53:11.492387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:53:11.507149 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:53:11.612565 kubelet[2173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:53:11.612565 kubelet[2173]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 23:53:11.612565 kubelet[2173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 23:53:11.612565 kubelet[2173]: I0515 23:53:11.611099 2173 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 23:53:13.295638 kubelet[2173]: I0515 23:53:13.295572 2173 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 15 23:53:13.295638 kubelet[2173]: I0515 23:53:13.295615 2173 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 23:53:13.296236 kubelet[2173]: I0515 23:53:13.295955 2173 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 23:53:13.590910 kubelet[2173]: E0515 23:53:13.590747 2173 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:13.591694 kubelet[2173]: I0515 23:53:13.591641 2173 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 23:53:13.609334 kubelet[2173]: E0515 23:53:13.609239 2173 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 23:53:13.609334 kubelet[2173]: I0515 23:53:13.609325 2173 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 23:53:13.616819 kubelet[2173]: I0515 23:53:13.616415 2173 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 23:53:13.634814 kubelet[2173]: I0515 23:53:13.634686 2173 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 23:53:13.636019 kubelet[2173]: I0515 23:53:13.634811 2173 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 23:53:13.636019 kubelet[2173]: I0515 23:53:13.635704 2173 topology_manager.go:138] "Creating topology manager with none policy"
May 15 23:53:13.636019 kubelet[2173]: I0515 23:53:13.635723 2173 container_manager_linux.go:304] "Creating device plugin manager"
May 15 23:53:13.636019 kubelet[2173]: I0515 23:53:13.635951 2173 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:53:13.647415 kubelet[2173]: I0515 23:53:13.646716 2173 kubelet.go:446] "Attempting to sync node with API server"
May 15 23:53:13.647415 kubelet[2173]: I0515 23:53:13.647167 2173 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 23:53:13.647415 kubelet[2173]: I0515 23:53:13.647217 2173 kubelet.go:352] "Adding apiserver pod source"
May 15 23:53:13.647415 kubelet[2173]: I0515 23:53:13.647242 2173 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 23:53:13.722445 kubelet[2173]: W0515 23:53:13.722199 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:13.722445 kubelet[2173]: E0515 23:53:13.722316 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:13.722705 kubelet[2173]: W0515 23:53:13.722544 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:13.723043 kubelet[2173]: E0515 23:53:13.722817 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:13.758528 kubelet[2173]: I0515 23:53:13.757679 2173 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 23:53:13.758528 kubelet[2173]: I0515 23:53:13.758417 2173 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 23:53:13.763684 kubelet[2173]: W0515 23:53:13.761907 2173 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 23:53:13.773379 kubelet[2173]: I0515 23:53:13.772015 2173 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 23:53:13.773379 kubelet[2173]: I0515 23:53:13.772105 2173 server.go:1287] "Started kubelet"
May 15 23:53:13.773379 kubelet[2173]: I0515 23:53:13.772536 2173 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 23:53:13.779448 kubelet[2173]: I0515 23:53:13.779206 2173 server.go:479] "Adding debug handlers to kubelet server"
May 15 23:53:13.784038 kubelet[2173]: I0515 23:53:13.781962 2173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 23:53:13.789422 kubelet[2173]: I0515 23:53:13.785191 2173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 23:53:13.789422 kubelet[2173]: I0515 23:53:13.785611 2173 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 23:53:13.789422 kubelet[2173]: I0515 23:53:13.785871 2173 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 23:53:13.789422 kubelet[2173]: E0515 23:53:13.788190 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 23:53:13.789422 kubelet[2173]: I0515 23:53:13.788234 2173 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 23:53:13.790710 kubelet[2173]: W0515 23:53:13.790520 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:13.790710 kubelet[2173]: E0515 23:53:13.790583 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:13.791208 kubelet[2173]: I0515 23:53:13.791031 2173 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 23:53:13.792634 kubelet[2173]: I0515 23:53:13.791459 2173 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 15 23:53:13.795478 kubelet[2173]: E0515 23:53:13.791808 2173 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 23:53:13.795478 kubelet[2173]: I0515 23:53:13.794931 2173 reconciler.go:26] "Reconciler: start to sync state"
May 15 23:53:13.800517 kubelet[2173]: I0515 23:53:13.797157 2173 factory.go:221] Registration of the containerd container factory successfully
May 15 23:53:13.800517 kubelet[2173]: I0515 23:53:13.797182 2173 factory.go:221] Registration of the systemd container factory successfully
May 15 23:53:13.809534 kubelet[2173]: E0515 23:53:13.806177 2173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd878cbed9598 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:53:13.772062104 +0000 UTC m=+2.258624293,LastTimestamp:2025-05-15 23:53:13.772062104 +0000 UTC m=+2.258624293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 23:53:13.815708 kubelet[2173]: E0515 23:53:13.814573 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms"
May 15 23:53:13.837962 kubelet[2173]: I0515 23:53:13.835239 2173 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 23:53:13.837962 kubelet[2173]: I0515 23:53:13.835267 2173 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 23:53:13.837962 kubelet[2173]: I0515 23:53:13.835291 2173 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:53:13.837962 kubelet[2173]: I0515 23:53:13.836342 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 23:53:13.840053 kubelet[2173]: I0515 23:53:13.839535 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 23:53:13.840053 kubelet[2173]: I0515 23:53:13.839581 2173 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 23:53:13.840053 kubelet[2173]: I0515 23:53:13.839622 2173 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 23:53:13.840053 kubelet[2173]: I0515 23:53:13.839636 2173 kubelet.go:2382] "Starting kubelet main sync loop"
May 15 23:53:13.840053 kubelet[2173]: E0515 23:53:13.839736 2173 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 23:53:13.848484 kubelet[2173]: W0515 23:53:13.845021 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:13.848484 kubelet[2173]: E0515 23:53:13.845083 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:13.888583 kubelet[2173]: E0515 23:53:13.888415 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 23:53:13.940798 kubelet[2173]: E0515 23:53:13.940678 2173 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 23:53:13.989942 kubelet[2173]: E0515 23:53:13.989178 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 23:53:14.016309 kubelet[2173]: E0515 23:53:14.016181 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms"
May 15 23:53:14.090903 kubelet[2173]: E0515 23:53:14.090717 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 23:53:14.096381 kubelet[2173]: I0515 23:53:14.095914 2173 policy_none.go:49] "None policy: Start"
May 15 23:53:14.096381 kubelet[2173]: I0515 23:53:14.095972 2173 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 23:53:14.096381 kubelet[2173]: I0515 23:53:14.095991 2173 state_mem.go:35] "Initializing new in-memory state store"
May 15 23:53:14.168327 kubelet[2173]: E0515 23:53:14.143958 2173 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 23:53:14.198757 kubelet[2173]: E0515 23:53:14.196606 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 23:53:14.212800 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 23:53:14.265981 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 15 23:53:14.283935 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 23:53:14.285925 kubelet[2173]: I0515 23:53:14.285877 2173 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 23:53:14.286228 kubelet[2173]: I0515 23:53:14.286198 2173 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 23:53:14.286414 kubelet[2173]: I0515 23:53:14.286219 2173 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 23:53:14.286947 kubelet[2173]: I0515 23:53:14.286542 2173 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 23:53:14.287649 kubelet[2173]: E0515 23:53:14.287615 2173 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 23:53:14.287734 kubelet[2173]: E0515 23:53:14.287674 2173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 23:53:14.388048 kubelet[2173]: I0515 23:53:14.387988 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 15 23:53:14.388664 kubelet[2173]: E0515 23:53:14.388541 2173 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost"
May 15 23:53:14.416895 kubelet[2173]: E0515 23:53:14.416809 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms"
May 15 23:53:14.570983 systemd[1]: Created slice kubepods-burstable-pod9b1c8e21c9c4df256b84536412787f07.slice - libcontainer container kubepods-burstable-pod9b1c8e21c9c4df256b84536412787f07.slice.
May 15 23:53:14.587859 kubelet[2173]: E0515 23:53:14.586826 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 23:53:14.591044 kubelet[2173]: I0515 23:53:14.591017 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 15 23:53:14.591602 kubelet[2173]: E0515 23:53:14.591489 2173 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost"
May 15 23:53:14.592410 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice.
May 15 23:53:14.596026 kubelet[2173]: E0515 23:53:14.595526 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 23:53:14.599258 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice.
May 15 23:53:14.604911 kubelet[2173]: E0515 23:53:14.604851 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 23:53:14.609769 kubelet[2173]: I0515 23:53:14.609116 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost"
May 15 23:53:14.609769 kubelet[2173]: I0515 23:53:14.609185 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b1c8e21c9c4df256b84536412787f07-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b1c8e21c9c4df256b84536412787f07\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:53:14.609769 kubelet[2173]: I0515 23:53:14.609216 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b1c8e21c9c4df256b84536412787f07-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b1c8e21c9c4df256b84536412787f07\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:53:14.609769 kubelet[2173]: I0515 23:53:14.609239 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b1c8e21c9c4df256b84536412787f07-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9b1c8e21c9c4df256b84536412787f07\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:53:14.609769 kubelet[2173]: I0515 23:53:14.609275 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:14.610098 kubelet[2173]: I0515 23:53:14.609300 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:14.610098 kubelet[2173]: I0515 23:53:14.609333 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:14.610098 kubelet[2173]: I0515 23:53:14.609374 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:14.610098 kubelet[2173]: I0515 23:53:14.609397 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:14.839483 kubelet[2173]: W0515 23:53:14.839162 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:14.839483 kubelet[2173]: E0515 23:53:14.839270 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:14.889024 kubelet[2173]: E0515 23:53:14.888937 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:14.889933 containerd[1492]: time="2025-05-15T23:53:14.889868991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9b1c8e21c9c4df256b84536412787f07,Namespace:kube-system,Attempt:0,}"
May 15 23:53:14.896569 kubelet[2173]: E0515 23:53:14.896523 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:14.897227 containerd[1492]: time="2025-05-15T23:53:14.897184062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}"
May 15 23:53:14.905601 kubelet[2173]: E0515 23:53:14.905554 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:14.906211 containerd[1492]: time="2025-05-15T23:53:14.906170025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}"
May 15 23:53:14.994078 kubelet[2173]: I0515 23:53:14.994022 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 15 23:53:14.995136 kubelet[2173]: E0515 23:53:14.995054 2173 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost"
May 15 23:53:15.104597 kubelet[2173]: W0515 23:53:15.104266 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:15.104597 kubelet[2173]: E0515 23:53:15.104380 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:15.193484 kubelet[2173]: W0515 23:53:15.193134 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:15.193484 kubelet[2173]: E0515 23:53:15.193247 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:15.213544 kubelet[2173]: W0515 23:53:15.213237 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:15.213544 kubelet[2173]: E0515 23:53:15.213448 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:15.218419 kubelet[2173]: E0515 23:53:15.218297 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="1.6s"
May 15 23:53:15.738642 kubelet[2173]: E0515 23:53:15.738097 2173 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:15.797772 kubelet[2173]: I0515 23:53:15.797670 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 15 23:53:15.798513 kubelet[2173]: E0515 23:53:15.798217 2173 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost"
May 15 23:53:16.819500 kubelet[2173]: E0515 23:53:16.819418 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="3.2s"
May 15 23:53:17.403456 kubelet[2173]: I0515 23:53:17.402245 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 15 23:53:17.403456 kubelet[2173]: E0515 23:53:17.402753 2173 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost"
May 15 23:53:17.562093 kubelet[2173]: W0515 23:53:17.561878 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:17.562093 kubelet[2173]: E0515 23:53:17.561946 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:17.672597 kubelet[2173]: W0515 23:53:17.672491 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:17.672597 kubelet[2173]: E0515 23:53:17.672558 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:17.750791 kubelet[2173]: W0515 23:53:17.750656 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:17.750791 kubelet[2173]: E0515 23:53:17.750731 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:17.885895 kubelet[2173]: W0515 23:53:17.885804 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:17.885895 kubelet[2173]: E0515 23:53:17.885865 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:19.967085 kubelet[2173]: E0515 23:53:19.967016 2173 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:20.020294 kubelet[2173]: E0515 23:53:20.020232 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="6.4s"
May 15 23:53:20.490893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154746805.mount: Deactivated successfully.
May 15 23:53:20.605161 kubelet[2173]: I0515 23:53:20.605103 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 15 23:53:20.605617 kubelet[2173]: E0515 23:53:20.605566 2173 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost"
May 15 23:53:21.326091 kubelet[2173]: W0515 23:53:21.326013 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:21.326643 kubelet[2173]: E0515 23:53:21.326079 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:21.530062 kubelet[2173]: W0515 23:53:21.529996 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused
May 15 23:53:21.530062 kubelet[2173]: E0515 23:53:21.530054 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError"
May 15 23:53:21.670855 containerd[1492]: time="2025-05-15T23:53:21.670786461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:53:21.723309 containerd[1492]: time="2025-05-15T23:53:21.723133432Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:53:21.823965 containerd[1492]: time="2025-05-15T23:53:21.823876513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 15 23:53:21.865041 containerd[1492]: time="2025-05-15T23:53:21.864919855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 23:53:21.936736 containerd[1492]: time="2025-05-15T23:53:21.936537759Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:53:21.976770 containerd[1492]: time="2025-05-15T23:53:21.976694216Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:53:22.042721 containerd[1492]: time="2025-05-15T23:53:22.042615329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 23:53:22.111042 containerd[1492]: time="2025-05-15T23:53:22.110959214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 23:53:22.112091 containerd[1492]: time="2025-05-15T23:53:22.112031079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 7.21471704s"
May 15 23:53:22.114079 containerd[1492]: time="2025-05-15T23:53:22.114011701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 7.223949822s"
May 15 23:53:22.187115 containerd[1492]: time="2025-05-15T23:53:22.186907339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 7.280627525s"
May 15 23:53:22.486289 update_engine[1478]: I20250515 23:53:22.486032 1478 update_attempter.cc:509] Updating boot flags...
May 15 23:53:22.590405 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2219) May 15 23:53:22.652401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2219) May 15 23:53:23.012542 kubelet[2173]: W0515 23:53:23.012465 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused May 15 23:53:23.012542 kubelet[2173]: E0515 23:53:23.012543 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" May 15 23:53:23.185025 kubelet[2173]: E0515 23:53:23.184870 2173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd878cbed9598 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:53:13.772062104 +0000 UTC m=+2.258624293,LastTimestamp:2025-05-15 23:53:13.772062104 +0000 UTC m=+2.258624293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 23:53:23.683614 containerd[1492]: time="2025-05-15T23:53:23.683442148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:53:23.683614 containerd[1492]: time="2025-05-15T23:53:23.683548199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:53:23.683614 containerd[1492]: time="2025-05-15T23:53:23.683563268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:53:23.684138 containerd[1492]: time="2025-05-15T23:53:23.683674349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:53:23.736616 systemd[1]: Started cri-containerd-19c0fce386c9dad6d135aaf4164756ecfe1cc649ff6de88fcbb986228cefd1c0.scope - libcontainer container 19c0fce386c9dad6d135aaf4164756ecfe1cc649ff6de88fcbb986228cefd1c0. May 15 23:53:23.780574 containerd[1492]: time="2025-05-15T23:53:23.780499366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9b1c8e21c9c4df256b84536412787f07,Namespace:kube-system,Attempt:0,} returns sandbox id \"19c0fce386c9dad6d135aaf4164756ecfe1cc649ff6de88fcbb986228cefd1c0\"" May 15 23:53:23.781984 kubelet[2173]: E0515 23:53:23.781954 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:23.784240 containerd[1492]: time="2025-05-15T23:53:23.784192805Z" level=info msg="CreateContainer within sandbox \"19c0fce386c9dad6d135aaf4164756ecfe1cc649ff6de88fcbb986228cefd1c0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 23:53:23.898857 containerd[1492]: time="2025-05-15T23:53:23.898700552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:53:23.898857 containerd[1492]: time="2025-05-15T23:53:23.898816854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:53:23.898857 containerd[1492]: time="2025-05-15T23:53:23.898841570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:53:23.899077 containerd[1492]: time="2025-05-15T23:53:23.898986125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:53:23.921078 containerd[1492]: time="2025-05-15T23:53:23.920928760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:53:23.921691 containerd[1492]: time="2025-05-15T23:53:23.921093794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:53:23.921916 containerd[1492]: time="2025-05-15T23:53:23.921732977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:53:23.921916 containerd[1492]: time="2025-05-15T23:53:23.921837004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:53:23.930290 containerd[1492]: time="2025-05-15T23:53:23.930188104Z" level=info msg="CreateContainer within sandbox \"19c0fce386c9dad6d135aaf4164756ecfe1cc649ff6de88fcbb986228cefd1c0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f6c0312be6aff32617415a7ec2c72b3d2457ce151427c78c001495d2cd5f4df\"" May 15 23:53:23.932342 containerd[1492]: time="2025-05-15T23:53:23.932264615Z" level=info msg="StartContainer for \"5f6c0312be6aff32617415a7ec2c72b3d2457ce151427c78c001495d2cd5f4df\"" May 15 23:53:23.939604 systemd[1]: Started cri-containerd-68781393192440dc11aef641a5c95f7a7a92979340917d9b2431a4788829c6b7.scope - libcontainer container 68781393192440dc11aef641a5c95f7a7a92979340917d9b2431a4788829c6b7. May 15 23:53:23.954412 systemd[1]: Started cri-containerd-395e810bd4211aa8968c8d67e884829346c5750b39516102d3ab798993d9d422.scope - libcontainer container 395e810bd4211aa8968c8d67e884829346c5750b39516102d3ab798993d9d422. May 15 23:53:24.008067 systemd[1]: Started cri-containerd-5f6c0312be6aff32617415a7ec2c72b3d2457ce151427c78c001495d2cd5f4df.scope - libcontainer container 5f6c0312be6aff32617415a7ec2c72b3d2457ce151427c78c001495d2cd5f4df. 
May 15 23:53:24.022116 containerd[1492]: time="2025-05-15T23:53:24.022062515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"68781393192440dc11aef641a5c95f7a7a92979340917d9b2431a4788829c6b7\"" May 15 23:53:24.024418 kubelet[2173]: E0515 23:53:24.023962 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:24.029870 containerd[1492]: time="2025-05-15T23:53:24.028697479Z" level=info msg="CreateContainer within sandbox \"68781393192440dc11aef641a5c95f7a7a92979340917d9b2431a4788829c6b7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 23:53:24.035617 containerd[1492]: time="2025-05-15T23:53:24.035558100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"395e810bd4211aa8968c8d67e884829346c5750b39516102d3ab798993d9d422\"" May 15 23:53:24.036944 kubelet[2173]: E0515 23:53:24.036887 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:24.039324 containerd[1492]: time="2025-05-15T23:53:24.039063919Z" level=info msg="CreateContainer within sandbox \"395e810bd4211aa8968c8d67e884829346c5750b39516102d3ab798993d9d422\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 23:53:24.087473 containerd[1492]: time="2025-05-15T23:53:24.087381535Z" level=info msg="StartContainer for \"5f6c0312be6aff32617415a7ec2c72b3d2457ce151427c78c001495d2cd5f4df\" returns successfully" May 15 23:53:24.109926 containerd[1492]: time="2025-05-15T23:53:24.109830430Z" level=info msg="CreateContainer within sandbox 
\"68781393192440dc11aef641a5c95f7a7a92979340917d9b2431a4788829c6b7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"afd035955a21a2817394da032236a80da9dfcbb567d9d00ff65c41192a5d746f\"" May 15 23:53:24.112327 containerd[1492]: time="2025-05-15T23:53:24.110616471Z" level=info msg="StartContainer for \"afd035955a21a2817394da032236a80da9dfcbb567d9d00ff65c41192a5d746f\"" May 15 23:53:24.112454 kubelet[2173]: W0515 23:53:24.112176 2173 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused May 15 23:53:24.112454 kubelet[2173]: E0515 23:53:24.112260 2173 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" May 15 23:53:24.116306 containerd[1492]: time="2025-05-15T23:53:24.116238313Z" level=info msg="CreateContainer within sandbox \"395e810bd4211aa8968c8d67e884829346c5750b39516102d3ab798993d9d422\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bd133f6c1890cb7ca22d011835b074a7838026ef17249d22270b8b0f04ea1d8c\"" May 15 23:53:24.118119 containerd[1492]: time="2025-05-15T23:53:24.117185770Z" level=info msg="StartContainer for \"bd133f6c1890cb7ca22d011835b074a7838026ef17249d22270b8b0f04ea1d8c\"" May 15 23:53:24.153888 systemd[1]: Started cri-containerd-afd035955a21a2817394da032236a80da9dfcbb567d9d00ff65c41192a5d746f.scope - libcontainer container afd035955a21a2817394da032236a80da9dfcbb567d9d00ff65c41192a5d746f. 
May 15 23:53:24.163535 systemd[1]: Started cri-containerd-bd133f6c1890cb7ca22d011835b074a7838026ef17249d22270b8b0f04ea1d8c.scope - libcontainer container bd133f6c1890cb7ca22d011835b074a7838026ef17249d22270b8b0f04ea1d8c. May 15 23:53:24.291384 kubelet[2173]: E0515 23:53:24.288339 2173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 23:53:24.446115 containerd[1492]: time="2025-05-15T23:53:24.446029745Z" level=info msg="StartContainer for \"afd035955a21a2817394da032236a80da9dfcbb567d9d00ff65c41192a5d746f\" returns successfully" May 15 23:53:24.446318 containerd[1492]: time="2025-05-15T23:53:24.446242047Z" level=info msg="StartContainer for \"bd133f6c1890cb7ca22d011835b074a7838026ef17249d22270b8b0f04ea1d8c\" returns successfully" May 15 23:53:24.879816 kubelet[2173]: E0515 23:53:24.879381 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:53:24.879816 kubelet[2173]: E0515 23:53:24.879555 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:24.885251 kubelet[2173]: E0515 23:53:24.884974 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:53:24.885251 kubelet[2173]: E0515 23:53:24.885215 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:24.885481 kubelet[2173]: E0515 23:53:24.885217 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:53:24.885481 kubelet[2173]: E0515 
23:53:24.885396 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:25.888467 kubelet[2173]: E0515 23:53:25.888400 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:53:25.889051 kubelet[2173]: E0515 23:53:25.888602 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:25.889051 kubelet[2173]: E0515 23:53:25.888960 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:53:25.889154 kubelet[2173]: E0515 23:53:25.889113 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:25.889627 kubelet[2173]: E0515 23:53:25.889589 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:53:25.889791 kubelet[2173]: E0515 23:53:25.889765 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:26.287038 kubelet[2173]: E0515 23:53:26.286938 2173 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 15 23:53:26.427219 kubelet[2173]: E0515 23:53:26.427139 2173 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 23:53:26.645691 
kubelet[2173]: E0515 23:53:26.645528 2173 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 15 23:53:26.890585 kubelet[2173]: E0515 23:53:26.890457 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:53:26.891217 kubelet[2173]: E0515 23:53:26.890609 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:27.007498 kubelet[2173]: I0515 23:53:27.007381 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:53:27.042865 kubelet[2173]: I0515 23:53:27.042799 2173 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 15 23:53:27.042865 kubelet[2173]: E0515 23:53:27.042848 2173 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 23:53:27.081552 kubelet[2173]: E0515 23:53:27.081488 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:27.182455 kubelet[2173]: E0515 23:53:27.182190 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:27.282575 kubelet[2173]: E0515 23:53:27.282411 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:27.383478 kubelet[2173]: E0515 23:53:27.383414 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:27.505065 kubelet[2173]: E0515 23:53:27.504989 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not 
found" May 15 23:53:27.606149 kubelet[2173]: E0515 23:53:27.605944 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:27.706854 kubelet[2173]: E0515 23:53:27.706758 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:27.807739 kubelet[2173]: E0515 23:53:27.807662 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:27.908049 kubelet[2173]: E0515 23:53:27.907842 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:28.008240 kubelet[2173]: E0515 23:53:28.008104 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:28.108765 kubelet[2173]: E0515 23:53:28.108646 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:28.209726 kubelet[2173]: E0515 23:53:28.209662 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:28.310544 kubelet[2173]: E0515 23:53:28.310469 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:28.411639 kubelet[2173]: E0515 23:53:28.411568 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:28.511992 kubelet[2173]: E0515 23:53:28.511755 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:28.612937 kubelet[2173]: E0515 23:53:28.612872 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:28.714077 kubelet[2173]: E0515 23:53:28.713968 2173 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:28.817508 kubelet[2173]: E0515 23:53:28.815046 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:28.916256 kubelet[2173]: E0515 23:53:28.916167 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:29.017392 kubelet[2173]: E0515 23:53:29.017280 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:29.118612 kubelet[2173]: E0515 23:53:29.118433 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:29.219708 kubelet[2173]: E0515 23:53:29.219620 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:29.319919 kubelet[2173]: E0515 23:53:29.319855 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:29.420749 kubelet[2173]: E0515 23:53:29.420671 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:29.521878 kubelet[2173]: E0515 23:53:29.521806 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:29.622124 kubelet[2173]: E0515 23:53:29.622066 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:29.722711 kubelet[2173]: E0515 23:53:29.722540 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:29.823570 kubelet[2173]: E0515 23:53:29.823505 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" 
not found" May 15 23:53:29.924264 kubelet[2173]: E0515 23:53:29.924088 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:30.026508 kubelet[2173]: E0515 23:53:30.024315 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:30.070713 kubelet[2173]: E0515 23:53:30.070667 2173 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:53:30.070861 kubelet[2173]: E0515 23:53:30.070832 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:30.125500 kubelet[2173]: E0515 23:53:30.125436 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:30.225875 kubelet[2173]: E0515 23:53:30.225792 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:30.326610 kubelet[2173]: E0515 23:53:30.326435 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:30.427941 kubelet[2173]: E0515 23:53:30.427839 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:30.529087 kubelet[2173]: E0515 23:53:30.529008 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:53:30.595331 kubelet[2173]: I0515 23:53:30.595140 2173 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:53:30.763855 kubelet[2173]: I0515 23:53:30.763726 2173 apiserver.go:52] "Watching apiserver" May 15 23:53:30.766317 kubelet[2173]: E0515 
23:53:30.766293 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:30.767074 kubelet[2173]: I0515 23:53:30.766883 2173 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 23:53:30.795604 kubelet[2173]: I0515 23:53:30.795528 2173 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 15 23:53:31.130214 kubelet[2173]: I0515 23:53:31.130008 2173 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:53:31.130703 kubelet[2173]: E0515 23:53:31.130524 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:31.168078 kubelet[2173]: E0515 23:53:31.168037 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:33.919590 kubelet[2173]: I0515 23:53:33.919465 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.919404955 podStartE2EDuration="3.919404955s" podCreationTimestamp="2025-05-15 23:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:53:33.919235035 +0000 UTC m=+22.405797224" watchObservedRunningTime="2025-05-15 23:53:33.919404955 +0000 UTC m=+22.405967144" May 15 23:53:33.951077 kubelet[2173]: I0515 23:53:33.950982 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.950955032 podStartE2EDuration="3.950955032s" 
podCreationTimestamp="2025-05-15 23:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:53:33.935640277 +0000 UTC m=+22.422202466" watchObservedRunningTime="2025-05-15 23:53:33.950955032 +0000 UTC m=+22.437517231" May 15 23:53:33.951331 kubelet[2173]: I0515 23:53:33.951108 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.951101809 podStartE2EDuration="2.951101809s" podCreationTimestamp="2025-05-15 23:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:53:33.950837279 +0000 UTC m=+22.437399468" watchObservedRunningTime="2025-05-15 23:53:33.951101809 +0000 UTC m=+22.437664008" May 15 23:53:34.131632 systemd[1]: Reloading requested from client PID 2468 ('systemctl') (unit session-5.scope)... May 15 23:53:34.131651 systemd[1]: Reloading... May 15 23:53:34.234392 zram_generator::config[2511]: No configuration found. May 15 23:53:34.373862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:53:34.499612 systemd[1]: Reloading finished in 367 ms. May 15 23:53:34.556233 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:53:34.584023 systemd[1]: kubelet.service: Deactivated successfully. May 15 23:53:34.584313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:53:34.584404 systemd[1]: kubelet.service: Consumed 1.588s CPU time, 136.3M memory peak, 0B memory swap peak. May 15 23:53:34.598125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 15 23:53:34.788666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:53:34.804980 (kubelet)[2554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 23:53:34.863099 kubelet[2554]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:53:34.863099 kubelet[2554]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 23:53:34.863099 kubelet[2554]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:53:34.863679 kubelet[2554]: I0515 23:53:34.863151 2554 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 23:53:34.870419 kubelet[2554]: I0515 23:53:34.870378 2554 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 15 23:53:34.870419 kubelet[2554]: I0515 23:53:34.870415 2554 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 23:53:34.870698 kubelet[2554]: I0515 23:53:34.870678 2554 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 23:53:34.872010 kubelet[2554]: I0515 23:53:34.871979 2554 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 23:53:34.875759 kubelet[2554]: I0515 23:53:34.875704 2554 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 23:53:34.881238 kubelet[2554]: E0515 23:53:34.881153 2554 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 23:53:34.881238 kubelet[2554]: I0515 23:53:34.881238 2554 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 23:53:34.887187 kubelet[2554]: I0515 23:53:34.887129 2554 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 23:53:34.887621 kubelet[2554]: I0515 23:53:34.887564 2554 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 23:53:34.887860 kubelet[2554]: I0515 23:53:34.887612 2554 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 23:53:34.887964 kubelet[2554]: I0515 23:53:34.887875 2554 topology_manager.go:138] "Creating topology manager with none policy"
May 15 23:53:34.887964 kubelet[2554]: I0515 23:53:34.887889 2554 container_manager_linux.go:304] "Creating device plugin manager"
May 15 23:53:34.887964 kubelet[2554]: I0515 23:53:34.887963 2554 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:53:34.888244 kubelet[2554]: I0515 23:53:34.888214 2554 kubelet.go:446] "Attempting to sync node with API server"
May 15 23:53:34.888244 kubelet[2554]: I0515 23:53:34.888242 2554 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 23:53:34.888307 kubelet[2554]: I0515 23:53:34.888266 2554 kubelet.go:352] "Adding apiserver pod source"
May 15 23:53:34.888307 kubelet[2554]: I0515 23:53:34.888280 2554 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 23:53:34.889640 kubelet[2554]: I0515 23:53:34.889570 2554 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 23:53:34.890047 kubelet[2554]: I0515 23:53:34.889980 2554 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 23:53:34.891440 kubelet[2554]: I0515 23:53:34.890787 2554 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 23:53:34.891440 kubelet[2554]: I0515 23:53:34.890829 2554 server.go:1287] "Started kubelet"
May 15 23:53:34.895774 kubelet[2554]: I0515 23:53:34.895702 2554 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 23:53:34.896111 kubelet[2554]: I0515 23:53:34.896085 2554 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 23:53:34.901644 kubelet[2554]: I0515 23:53:34.898220 2554 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 23:53:34.901644 kubelet[2554]: I0515 23:53:34.900018 2554 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 23:53:34.901644 kubelet[2554]: E0515 23:53:34.900546 2554 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 23:53:34.901644 kubelet[2554]: I0515 23:53:34.900575 2554 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 23:53:34.901644 kubelet[2554]: I0515 23:53:34.900781 2554 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 15 23:53:34.904315 kubelet[2554]: I0515 23:53:34.904255 2554 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 23:53:34.904810 kubelet[2554]: I0515 23:53:34.904777 2554 reconciler.go:26] "Reconciler: start to sync state"
May 15 23:53:34.907235 kubelet[2554]: I0515 23:53:34.907131 2554 factory.go:221] Registration of the systemd container factory successfully
May 15 23:53:34.907413 kubelet[2554]: I0515 23:53:34.907312 2554 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 23:53:34.908767 kubelet[2554]: I0515 23:53:34.908736 2554 server.go:479] "Adding debug handlers to kubelet server"
May 15 23:53:34.912601 kubelet[2554]: I0515 23:53:34.912564 2554 factory.go:221] Registration of the containerd container factory successfully
May 15 23:53:34.917139 kubelet[2554]: I0515 23:53:34.917089 2554 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 23:53:34.919560 kubelet[2554]: I0515 23:53:34.919518 2554 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 23:53:34.919665 kubelet[2554]: I0515 23:53:34.919576 2554 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 23:53:34.919665 kubelet[2554]: I0515 23:53:34.919599 2554 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 23:53:34.919665 kubelet[2554]: I0515 23:53:34.919607 2554 kubelet.go:2382] "Starting kubelet main sync loop"
May 15 23:53:34.919665 kubelet[2554]: E0515 23:53:34.919659 2554 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 23:53:34.919775 kubelet[2554]: E0515 23:53:34.919709 2554 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 23:53:34.955820 kubelet[2554]: I0515 23:53:34.955734 2554 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 23:53:34.955820 kubelet[2554]: I0515 23:53:34.955759 2554 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 23:53:34.955820 kubelet[2554]: I0515 23:53:34.955782 2554 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:53:34.956004 kubelet[2554]: I0515 23:53:34.955991 2554 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 23:53:34.956032 kubelet[2554]: I0515 23:53:34.956004 2554 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 23:53:34.956059 kubelet[2554]: I0515 23:53:34.956035 2554 policy_none.go:49] "None policy: Start"
May 15 23:53:34.956059 kubelet[2554]: I0515 23:53:34.956047 2554 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 23:53:34.956059 kubelet[2554]: I0515 23:53:34.956060 2554 state_mem.go:35] "Initializing new in-memory state store"
May 15 23:53:34.956252 kubelet[2554]: I0515 23:53:34.956207 2554 state_mem.go:75] "Updated machine memory state"
May 15 23:53:34.962536 kubelet[2554]: I0515 23:53:34.962396 2554 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 23:53:34.962708 kubelet[2554]: I0515 23:53:34.962626 2554 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 23:53:34.962744 kubelet[2554]: I0515 23:53:34.962639 2554 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 23:53:34.962932 kubelet[2554]: I0515 23:53:34.962868 2554 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 23:53:34.964569 kubelet[2554]: E0515 23:53:34.964533 2554 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 23:53:35.020981 kubelet[2554]: I0515 23:53:35.020695 2554 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 23:53:35.020981 kubelet[2554]: I0515 23:53:35.020836 2554 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:35.020981 kubelet[2554]: I0515 23:53:35.020976 2554 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 23:53:35.027095 kubelet[2554]: E0515 23:53:35.027070 2554 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 15 23:53:35.028810 kubelet[2554]: E0515 23:53:35.028765 2554 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 23:53:35.028994 kubelet[2554]: E0515 23:53:35.028927 2554 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:35.068995 kubelet[2554]: I0515 23:53:35.068822 2554 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 15 23:53:35.105583 kubelet[2554]: I0515 23:53:35.105498 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:35.105583 kubelet[2554]: I0515 23:53:35.105555 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost"
May 15 23:53:35.105771 kubelet[2554]: I0515 23:53:35.105581 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b1c8e21c9c4df256b84536412787f07-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b1c8e21c9c4df256b84536412787f07\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:53:35.105771 kubelet[2554]: I0515 23:53:35.105638 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b1c8e21c9c4df256b84536412787f07-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9b1c8e21c9c4df256b84536412787f07\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:53:35.105771 kubelet[2554]: I0515 23:53:35.105658 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:35.105771 kubelet[2554]: I0515 23:53:35.105673 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b1c8e21c9c4df256b84536412787f07-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b1c8e21c9c4df256b84536412787f07\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:53:35.105771 kubelet[2554]: I0515 23:53:35.105694 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:35.105979 kubelet[2554]: I0515 23:53:35.105709 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:35.105979 kubelet[2554]: I0515 23:53:35.105724 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:53:35.149130 kubelet[2554]: I0515 23:53:35.149064 2554 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
May 15 23:53:35.149319 kubelet[2554]: I0515 23:53:35.149184 2554 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 15 23:53:35.327600 kubelet[2554]: E0515 23:53:35.327456 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:35.329268 kubelet[2554]: E0515 23:53:35.329198 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:35.329268 kubelet[2554]: E0515 23:53:35.329247 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:35.889303 kubelet[2554]: I0515 23:53:35.889229 2554 apiserver.go:52] "Watching apiserver"
May 15 23:53:35.901460 kubelet[2554]: I0515 23:53:35.901395 2554 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 15 23:53:35.936405 kubelet[2554]: E0515 23:53:35.935343 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:35.936405 kubelet[2554]: E0515 23:53:35.936184 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:35.936989 kubelet[2554]: E0515 23:53:35.936965 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:36.853996 sudo[1626]: pam_unix(sudo:session): session closed for user root
May 15 23:53:36.856176 sshd[1625]: Connection closed by 10.0.0.1 port 42898
May 15 23:53:36.856989 sshd-session[1623]: pam_unix(sshd:session): session closed for user core
May 15 23:53:36.862369 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:42898.service: Deactivated successfully.
May 15 23:53:36.865157 systemd[1]: session-5.scope: Deactivated successfully.
May 15 23:53:36.865423 systemd[1]: session-5.scope: Consumed 5.765s CPU time, 154.7M memory peak, 0B memory swap peak.
May 15 23:53:36.866021 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit.
May 15 23:53:36.867074 systemd-logind[1476]: Removed session 5.
May 15 23:53:36.939054 kubelet[2554]: E0515 23:53:36.938999 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:36.939671 kubelet[2554]: E0515 23:53:36.939144 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:37.941513 kubelet[2554]: E0515 23:53:37.941465 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:38.302277 kubelet[2554]: I0515 23:53:38.302114 2554 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 15 23:53:38.302585 containerd[1492]: time="2025-05-15T23:53:38.302532841Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 15 23:53:38.302971 kubelet[2554]: I0515 23:53:38.302949 2554 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 23:53:38.942826 kubelet[2554]: E0515 23:53:38.942735 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:38.978900 systemd[1]: Created slice kubepods-besteffort-pod8cdd908f_f26c_482e_983b_e9732c91da12.slice - libcontainer container kubepods-besteffort-pod8cdd908f_f26c_482e_983b_e9732c91da12.slice.
May 15 23:53:38.993321 systemd[1]: Created slice kubepods-burstable-pod57063f47_a793_4f54_8781_3ece6c33c52f.slice - libcontainer container kubepods-burstable-pod57063f47_a793_4f54_8781_3ece6c33c52f.slice.
May 15 23:53:39.033533 kubelet[2554]: I0515 23:53:39.033468 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z89kj\" (UniqueName: \"kubernetes.io/projected/8cdd908f-f26c-482e-983b-e9732c91da12-kube-api-access-z89kj\") pod \"kube-proxy-k7klj\" (UID: \"8cdd908f-f26c-482e-983b-e9732c91da12\") " pod="kube-system/kube-proxy-k7klj"
May 15 23:53:39.033533 kubelet[2554]: I0515 23:53:39.033518 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vgwh\" (UniqueName: \"kubernetes.io/projected/57063f47-a793-4f54-8781-3ece6c33c52f-kube-api-access-6vgwh\") pod \"kube-flannel-ds-84hfh\" (UID: \"57063f47-a793-4f54-8781-3ece6c33c52f\") " pod="kube-flannel/kube-flannel-ds-84hfh"
May 15 23:53:39.033744 kubelet[2554]: I0515 23:53:39.033553 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cdd908f-f26c-482e-983b-e9732c91da12-xtables-lock\") pod \"kube-proxy-k7klj\" (UID: \"8cdd908f-f26c-482e-983b-e9732c91da12\") " pod="kube-system/kube-proxy-k7klj"
May 15 23:53:39.033744 kubelet[2554]: I0515 23:53:39.033576 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/57063f47-a793-4f54-8781-3ece6c33c52f-cni\") pod \"kube-flannel-ds-84hfh\" (UID: \"57063f47-a793-4f54-8781-3ece6c33c52f\") " pod="kube-flannel/kube-flannel-ds-84hfh"
May 15 23:53:39.033744 kubelet[2554]: I0515 23:53:39.033595 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/57063f47-a793-4f54-8781-3ece6c33c52f-cni-plugin\") pod \"kube-flannel-ds-84hfh\" (UID: \"57063f47-a793-4f54-8781-3ece6c33c52f\") " pod="kube-flannel/kube-flannel-ds-84hfh"
May 15 23:53:39.033744 kubelet[2554]: I0515 23:53:39.033613 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/57063f47-a793-4f54-8781-3ece6c33c52f-flannel-cfg\") pod \"kube-flannel-ds-84hfh\" (UID: \"57063f47-a793-4f54-8781-3ece6c33c52f\") " pod="kube-flannel/kube-flannel-ds-84hfh"
May 15 23:53:39.033744 kubelet[2554]: I0515 23:53:39.033677 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8cdd908f-f26c-482e-983b-e9732c91da12-kube-proxy\") pod \"kube-proxy-k7klj\" (UID: \"8cdd908f-f26c-482e-983b-e9732c91da12\") " pod="kube-system/kube-proxy-k7klj"
May 15 23:53:39.033877 kubelet[2554]: I0515 23:53:39.033707 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cdd908f-f26c-482e-983b-e9732c91da12-lib-modules\") pod \"kube-proxy-k7klj\" (UID: \"8cdd908f-f26c-482e-983b-e9732c91da12\") " pod="kube-system/kube-proxy-k7klj"
May 15 23:53:39.033877 kubelet[2554]: I0515 23:53:39.033731 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/57063f47-a793-4f54-8781-3ece6c33c52f-run\") pod \"kube-flannel-ds-84hfh\" (UID: \"57063f47-a793-4f54-8781-3ece6c33c52f\") " pod="kube-flannel/kube-flannel-ds-84hfh"
May 15 23:53:39.033877 kubelet[2554]: I0515 23:53:39.033757 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57063f47-a793-4f54-8781-3ece6c33c52f-xtables-lock\") pod \"kube-flannel-ds-84hfh\" (UID: \"57063f47-a793-4f54-8781-3ece6c33c52f\") " pod="kube-flannel/kube-flannel-ds-84hfh"
May 15 23:53:39.140364 kubelet[2554]: E0515 23:53:39.140299 2554 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 15 23:53:39.140364 kubelet[2554]: E0515 23:53:39.140333 2554 projected.go:194] Error preparing data for projected volume kube-api-access-6vgwh for pod kube-flannel/kube-flannel-ds-84hfh: configmap "kube-root-ca.crt" not found
May 15 23:53:39.140561 kubelet[2554]: E0515 23:53:39.140409 2554 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/57063f47-a793-4f54-8781-3ece6c33c52f-kube-api-access-6vgwh podName:57063f47-a793-4f54-8781-3ece6c33c52f nodeName:}" failed. No retries permitted until 2025-05-15 23:53:39.640385959 +0000 UTC m=+4.825851682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6vgwh" (UniqueName: "kubernetes.io/projected/57063f47-a793-4f54-8781-3ece6c33c52f-kube-api-access-6vgwh") pod "kube-flannel-ds-84hfh" (UID: "57063f47-a793-4f54-8781-3ece6c33c52f") : configmap "kube-root-ca.crt" not found
May 15 23:53:39.140902 kubelet[2554]: E0515 23:53:39.140867 2554 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 15 23:53:39.141006 kubelet[2554]: E0515 23:53:39.140976 2554 projected.go:194] Error preparing data for projected volume kube-api-access-z89kj for pod kube-system/kube-proxy-k7klj: configmap "kube-root-ca.crt" not found
May 15 23:53:39.141079 kubelet[2554]: E0515 23:53:39.141016 2554 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8cdd908f-f26c-482e-983b-e9732c91da12-kube-api-access-z89kj podName:8cdd908f-f26c-482e-983b-e9732c91da12 nodeName:}" failed. No retries permitted until 2025-05-15 23:53:39.641004574 +0000 UTC m=+4.826470297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z89kj" (UniqueName: "kubernetes.io/projected/8cdd908f-f26c-482e-983b-e9732c91da12-kube-api-access-z89kj") pod "kube-proxy-k7klj" (UID: "8cdd908f-f26c-482e-983b-e9732c91da12") : configmap "kube-root-ca.crt" not found
May 15 23:53:39.886720 kubelet[2554]: E0515 23:53:39.886649 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:39.887584 containerd[1492]: time="2025-05-15T23:53:39.887533263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7klj,Uid:8cdd908f-f26c-482e-983b-e9732c91da12,Namespace:kube-system,Attempt:0,}"
May 15 23:53:39.896643 kubelet[2554]: E0515 23:53:39.896571 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:39.897369 containerd[1492]: time="2025-05-15T23:53:39.897310499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-84hfh,Uid:57063f47-a793-4f54-8781-3ece6c33c52f,Namespace:kube-flannel,Attempt:0,}"
May 15 23:53:39.946581 kubelet[2554]: E0515 23:53:39.946533 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:39.991155 kubelet[2554]: E0515 23:53:39.991115 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:40.947907 kubelet[2554]: E0515 23:53:40.947845 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:41.050144 containerd[1492]: time="2025-05-15T23:53:41.050009048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:53:41.050144 containerd[1492]: time="2025-05-15T23:53:41.050091984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:53:41.050144 containerd[1492]: time="2025-05-15T23:53:41.050108385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:53:41.050647 containerd[1492]: time="2025-05-15T23:53:41.050225616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:53:41.075610 systemd[1]: Started cri-containerd-ca43e446b1291f79ea6e0b91629917f09048b95d735cc2d0c290a0529c05df59.scope - libcontainer container ca43e446b1291f79ea6e0b91629917f09048b95d735cc2d0c290a0529c05df59.
May 15 23:53:41.095385 containerd[1492]: time="2025-05-15T23:53:41.094305153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:53:41.095385 containerd[1492]: time="2025-05-15T23:53:41.094408979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:53:41.095385 containerd[1492]: time="2025-05-15T23:53:41.094425470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:53:41.095385 containerd[1492]: time="2025-05-15T23:53:41.094518514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:53:41.115803 containerd[1492]: time="2025-05-15T23:53:41.115637314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7klj,Uid:8cdd908f-f26c-482e-983b-e9732c91da12,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca43e446b1291f79ea6e0b91629917f09048b95d735cc2d0c290a0529c05df59\""
May 15 23:53:41.116546 kubelet[2554]: E0515 23:53:41.116517 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:41.121541 systemd[1]: Started cri-containerd-1779722798d834b24dad227243f482cdecfc73e06f596354f4ee9db87404b57e.scope - libcontainer container 1779722798d834b24dad227243f482cdecfc73e06f596354f4ee9db87404b57e.
May 15 23:53:41.121821 containerd[1492]: time="2025-05-15T23:53:41.121788655Z" level=info msg="CreateContainer within sandbox \"ca43e446b1291f79ea6e0b91629917f09048b95d735cc2d0c290a0529c05df59\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 23:53:41.165064 containerd[1492]: time="2025-05-15T23:53:41.165002532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-84hfh,Uid:57063f47-a793-4f54-8781-3ece6c33c52f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"1779722798d834b24dad227243f482cdecfc73e06f596354f4ee9db87404b57e\""
May 15 23:53:41.166245 kubelet[2554]: E0515 23:53:41.165985 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:41.167217 containerd[1492]: time="2025-05-15T23:53:41.167177406Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
May 15 23:53:41.210104 containerd[1492]: time="2025-05-15T23:53:41.209948671Z" level=info msg="CreateContainer within sandbox \"ca43e446b1291f79ea6e0b91629917f09048b95d735cc2d0c290a0529c05df59\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"be7f3a0e12b096ab1b053df89016559dfe6d099ab7749f7c08703b5e8bc017a8\""
May 15 23:53:41.210697 containerd[1492]: time="2025-05-15T23:53:41.210669037Z" level=info msg="StartContainer for \"be7f3a0e12b096ab1b053df89016559dfe6d099ab7749f7c08703b5e8bc017a8\""
May 15 23:53:41.246646 systemd[1]: Started cri-containerd-be7f3a0e12b096ab1b053df89016559dfe6d099ab7749f7c08703b5e8bc017a8.scope - libcontainer container be7f3a0e12b096ab1b053df89016559dfe6d099ab7749f7c08703b5e8bc017a8.
May 15 23:53:41.396025 containerd[1492]: time="2025-05-15T23:53:41.395963134Z" level=info msg="StartContainer for \"be7f3a0e12b096ab1b053df89016559dfe6d099ab7749f7c08703b5e8bc017a8\" returns successfully"
May 15 23:53:41.951338 kubelet[2554]: E0515 23:53:41.951298 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:41.952101 kubelet[2554]: E0515 23:53:41.952075 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:53:42.939139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1300359863.mount: Deactivated successfully.
May 15 23:53:42.986016 containerd[1492]: time="2025-05-15T23:53:42.985915844Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:53:42.986684 containerd[1492]: time="2025-05-15T23:53:42.986608328Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
May 15 23:53:42.988023 containerd[1492]: time="2025-05-15T23:53:42.987971784Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:53:42.991844 containerd[1492]: time="2025-05-15T23:53:42.991768440Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:53:42.992948 containerd[1492]: time="2025-05-15T23:53:42.992901703Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.825682878s"
May 15 23:53:42.992948 containerd[1492]: time="2025-05-15T23:53:42.992943772Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
May 15 23:53:42.995914 containerd[1492]: time="2025-05-15T23:53:42.995863648Z" level=info msg="CreateContainer within sandbox \"1779722798d834b24dad227243f482cdecfc73e06f596354f4ee9db87404b57e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
May 15 23:53:43.015579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1837221926.mount: Deactivated successfully.
May 15 23:53:43.017185 containerd[1492]: time="2025-05-15T23:53:43.017125763Z" level=info msg="CreateContainer within sandbox \"1779722798d834b24dad227243f482cdecfc73e06f596354f4ee9db87404b57e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"598c437f27e880e019d249728aeccd99ef77d8ae1b46e2c27dc75782b8892058\""
May 15 23:53:43.017814 containerd[1492]: time="2025-05-15T23:53:43.017787317Z" level=info msg="StartContainer for \"598c437f27e880e019d249728aeccd99ef77d8ae1b46e2c27dc75782b8892058\""
May 15 23:53:43.059582 systemd[1]: Started cri-containerd-598c437f27e880e019d249728aeccd99ef77d8ae1b46e2c27dc75782b8892058.scope - libcontainer container 598c437f27e880e019d249728aeccd99ef77d8ae1b46e2c27dc75782b8892058.
May 15 23:53:43.092636 systemd[1]: cri-containerd-598c437f27e880e019d249728aeccd99ef77d8ae1b46e2c27dc75782b8892058.scope: Deactivated successfully.
May 15 23:53:43.099737 containerd[1492]: time="2025-05-15T23:53:43.099675186Z" level=info msg="StartContainer for \"598c437f27e880e019d249728aeccd99ef77d8ae1b46e2c27dc75782b8892058\" returns successfully"
May 15 23:53:43.118876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-598c437f27e880e019d249728aeccd99ef77d8ae1b46e2c27dc75782b8892058-rootfs.mount: Deactivated successfully.
May 15 23:53:43.164248 containerd[1492]: time="2025-05-15T23:53:43.164163301Z" level=info msg="shim disconnected" id=598c437f27e880e019d249728aeccd99ef77d8ae1b46e2c27dc75782b8892058 namespace=k8s.io May 15 23:53:43.164248 containerd[1492]: time="2025-05-15T23:53:43.164245365Z" level=warning msg="cleaning up after shim disconnected" id=598c437f27e880e019d249728aeccd99ef77d8ae1b46e2c27dc75782b8892058 namespace=k8s.io May 15 23:53:43.164248 containerd[1492]: time="2025-05-15T23:53:43.164260253Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:53:43.689682 kubelet[2554]: E0515 23:53:43.689632 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:43.710721 kubelet[2554]: I0515 23:53:43.710633 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k7klj" podStartSLOduration=5.710605168 podStartE2EDuration="5.710605168s" podCreationTimestamp="2025-05-15 23:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:53:41.961884054 +0000 UTC m=+7.147349787" watchObservedRunningTime="2025-05-15 23:53:43.710605168 +0000 UTC m=+8.896070901" May 15 23:53:43.957745 kubelet[2554]: E0515 23:53:43.957586 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:43.957745 kubelet[2554]: E0515 23:53:43.957658 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:43.958802 containerd[1492]: time="2025-05-15T23:53:43.958749912Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 15 23:53:47.211989 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3249123913.mount: Deactivated successfully. May 15 23:53:50.338014 containerd[1492]: time="2025-05-15T23:53:50.337932593Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:50.411126 containerd[1492]: time="2025-05-15T23:53:50.411034615Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" May 15 23:53:50.457081 containerd[1492]: time="2025-05-15T23:53:50.457001415Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:50.503967 containerd[1492]: time="2025-05-15T23:53:50.503900667Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:53:50.505276 containerd[1492]: time="2025-05-15T23:53:50.505228483Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 6.546428164s" May 15 23:53:50.505276 containerd[1492]: time="2025-05-15T23:53:50.505271153Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" May 15 23:53:50.508082 containerd[1492]: time="2025-05-15T23:53:50.508050006Z" level=info msg="CreateContainer within sandbox \"1779722798d834b24dad227243f482cdecfc73e06f596354f4ee9db87404b57e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 23:53:51.273310 
containerd[1492]: time="2025-05-15T23:53:51.273226286Z" level=info msg="CreateContainer within sandbox \"1779722798d834b24dad227243f482cdecfc73e06f596354f4ee9db87404b57e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a2c1f5b28196c6798d761edaf43dea002d02f2985d59773207fc729bed3df314\"" May 15 23:53:51.274055 containerd[1492]: time="2025-05-15T23:53:51.273996524Z" level=info msg="StartContainer for \"a2c1f5b28196c6798d761edaf43dea002d02f2985d59773207fc729bed3df314\"" May 15 23:53:51.310582 systemd[1]: Started cri-containerd-a2c1f5b28196c6798d761edaf43dea002d02f2985d59773207fc729bed3df314.scope - libcontainer container a2c1f5b28196c6798d761edaf43dea002d02f2985d59773207fc729bed3df314. May 15 23:53:51.339165 systemd[1]: cri-containerd-a2c1f5b28196c6798d761edaf43dea002d02f2985d59773207fc729bed3df314.scope: Deactivated successfully. May 15 23:53:51.368513 kubelet[2554]: I0515 23:53:51.368260 2554 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 15 23:53:51.810297 systemd[1]: Created slice kubepods-burstable-podd3433948_69b1_481e_b56d_b37b56ae9852.slice - libcontainer container kubepods-burstable-podd3433948_69b1_481e_b56d_b37b56ae9852.slice. May 15 23:53:51.858043 containerd[1492]: time="2025-05-15T23:53:51.857956484Z" level=info msg="StartContainer for \"a2c1f5b28196c6798d761edaf43dea002d02f2985d59773207fc729bed3df314\" returns successfully" May 15 23:53:51.877811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2c1f5b28196c6798d761edaf43dea002d02f2985d59773207fc729bed3df314-rootfs.mount: Deactivated successfully. May 15 23:53:51.884006 systemd[1]: Created slice kubepods-burstable-pod32969168_e75f_44d4_ac89_b3699f7a99b0.slice - libcontainer container kubepods-burstable-pod32969168_e75f_44d4_ac89_b3699f7a99b0.slice. 
May 15 23:53:51.916411 kubelet[2554]: I0515 23:53:51.916340 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3433948-69b1-481e-b56d-b37b56ae9852-config-volume\") pod \"coredns-668d6bf9bc-b8c7f\" (UID: \"d3433948-69b1-481e-b56d-b37b56ae9852\") " pod="kube-system/coredns-668d6bf9bc-b8c7f" May 15 23:53:51.916411 kubelet[2554]: I0515 23:53:51.916395 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbfmw\" (UniqueName: \"kubernetes.io/projected/d3433948-69b1-481e-b56d-b37b56ae9852-kube-api-access-kbfmw\") pod \"coredns-668d6bf9bc-b8c7f\" (UID: \"d3433948-69b1-481e-b56d-b37b56ae9852\") " pod="kube-system/coredns-668d6bf9bc-b8c7f" May 15 23:53:51.916411 kubelet[2554]: I0515 23:53:51.916423 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32969168-e75f-44d4-ac89-b3699f7a99b0-config-volume\") pod \"coredns-668d6bf9bc-nr62d\" (UID: \"32969168-e75f-44d4-ac89-b3699f7a99b0\") " pod="kube-system/coredns-668d6bf9bc-nr62d" May 15 23:53:51.916758 kubelet[2554]: I0515 23:53:51.916441 2554 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wck\" (UniqueName: \"kubernetes.io/projected/32969168-e75f-44d4-ac89-b3699f7a99b0-kube-api-access-p7wck\") pod \"coredns-668d6bf9bc-nr62d\" (UID: \"32969168-e75f-44d4-ac89-b3699f7a99b0\") " pod="kube-system/coredns-668d6bf9bc-nr62d" May 15 23:53:51.974882 kubelet[2554]: E0515 23:53:51.974829 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:52.114317 kubelet[2554]: E0515 23:53:52.114169 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:52.114918 containerd[1492]: time="2025-05-15T23:53:52.114859553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b8c7f,Uid:d3433948-69b1-481e-b56d-b37b56ae9852,Namespace:kube-system,Attempt:0,}" May 15 23:53:52.186832 kubelet[2554]: E0515 23:53:52.186754 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:52.187430 containerd[1492]: time="2025-05-15T23:53:52.187367455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nr62d,Uid:32969168-e75f-44d4-ac89-b3699f7a99b0,Namespace:kube-system,Attempt:0,}" May 15 23:53:52.793794 containerd[1492]: time="2025-05-15T23:53:52.793690144Z" level=info msg="shim disconnected" id=a2c1f5b28196c6798d761edaf43dea002d02f2985d59773207fc729bed3df314 namespace=k8s.io May 15 23:53:52.793794 containerd[1492]: time="2025-05-15T23:53:52.793774472Z" level=warning msg="cleaning up after shim disconnected" id=a2c1f5b28196c6798d761edaf43dea002d02f2985d59773207fc729bed3df314 namespace=k8s.io May 15 23:53:52.793794 containerd[1492]: time="2025-05-15T23:53:52.793788338Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:53:53.000537 kubelet[2554]: E0515 23:53:53.000330 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:53.002558 containerd[1492]: time="2025-05-15T23:53:53.002413814Z" level=info msg="CreateContainer within sandbox \"1779722798d834b24dad227243f482cdecfc73e06f596354f4ee9db87404b57e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 15 23:53:53.703396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1393765590.mount: Deactivated successfully. 
May 15 23:53:53.877341 containerd[1492]: time="2025-05-15T23:53:53.877257965Z" level=info msg="CreateContainer within sandbox \"1779722798d834b24dad227243f482cdecfc73e06f596354f4ee9db87404b57e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"4072c3fa59575b2ab874cff4c20478e7a87b7ee45ea20205e52f125de8989f4d\"" May 15 23:53:53.878204 containerd[1492]: time="2025-05-15T23:53:53.878007813Z" level=info msg="StartContainer for \"4072c3fa59575b2ab874cff4c20478e7a87b7ee45ea20205e52f125de8989f4d\"" May 15 23:53:53.881173 containerd[1492]: time="2025-05-15T23:53:53.881082481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nr62d,Uid:32969168-e75f-44d4-ac89-b3699f7a99b0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e99f7e79b890acd60578dc8e4f6bc82d22db22934c718fad4700e82876fa9b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 23:53:53.881585 kubelet[2554]: E0515 23:53:53.881515 2554 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e99f7e79b890acd60578dc8e4f6bc82d22db22934c718fad4700e82876fa9b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 23:53:53.881681 kubelet[2554]: E0515 23:53:53.881656 2554 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e99f7e79b890acd60578dc8e4f6bc82d22db22934c718fad4700e82876fa9b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-nr62d" May 15 23:53:53.881759 kubelet[2554]: E0515 23:53:53.881716 2554 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"8e99f7e79b890acd60578dc8e4f6bc82d22db22934c718fad4700e82876fa9b4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-nr62d" May 15 23:53:53.882072 kubelet[2554]: E0515 23:53:53.881860 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nr62d_kube-system(32969168-e75f-44d4-ac89-b3699f7a99b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nr62d_kube-system(32969168-e75f-44d4-ac89-b3699f7a99b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e99f7e79b890acd60578dc8e4f6bc82d22db22934c718fad4700e82876fa9b4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-nr62d" podUID="32969168-e75f-44d4-ac89-b3699f7a99b0" May 15 23:53:53.901454 containerd[1492]: time="2025-05-15T23:53:53.901241328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b8c7f,Uid:d3433948-69b1-481e-b56d-b37b56ae9852,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72c5553e53e4c1fe03b79ecb167284811f51ec8656a9c40cb16ca8597c75decb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 23:53:53.902388 kubelet[2554]: E0515 23:53:53.901856 2554 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c5553e53e4c1fe03b79ecb167284811f51ec8656a9c40cb16ca8597c75decb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 15 23:53:53.902388 kubelet[2554]: E0515 23:53:53.901952 2554 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c5553e53e4c1fe03b79ecb167284811f51ec8656a9c40cb16ca8597c75decb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-b8c7f" May 15 23:53:53.902388 kubelet[2554]: E0515 23:53:53.901983 2554 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c5553e53e4c1fe03b79ecb167284811f51ec8656a9c40cb16ca8597c75decb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-b8c7f" May 15 23:53:53.902388 kubelet[2554]: E0515 23:53:53.902046 2554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-b8c7f_kube-system(d3433948-69b1-481e-b56d-b37b56ae9852)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-b8c7f_kube-system(d3433948-69b1-481e-b56d-b37b56ae9852)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72c5553e53e4c1fe03b79ecb167284811f51ec8656a9c40cb16ca8597c75decb\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-b8c7f" podUID="d3433948-69b1-481e-b56d-b37b56ae9852" May 15 23:53:53.919592 systemd[1]: Started cri-containerd-4072c3fa59575b2ab874cff4c20478e7a87b7ee45ea20205e52f125de8989f4d.scope - libcontainer container 4072c3fa59575b2ab874cff4c20478e7a87b7ee45ea20205e52f125de8989f4d. 
May 15 23:53:53.954771 containerd[1492]: time="2025-05-15T23:53:53.953607860Z" level=info msg="StartContainer for \"4072c3fa59575b2ab874cff4c20478e7a87b7ee45ea20205e52f125de8989f4d\" returns successfully" May 15 23:53:54.004837 kubelet[2554]: E0515 23:53:54.004780 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:54.020264 kubelet[2554]: I0515 23:53:54.019775 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-84hfh" podStartSLOduration=6.679849106 podStartE2EDuration="16.019745793s" podCreationTimestamp="2025-05-15 23:53:38 +0000 UTC" firstStartedPulling="2025-05-15 23:53:41.166617132 +0000 UTC m=+6.352082855" lastFinishedPulling="2025-05-15 23:53:50.506513818 +0000 UTC m=+15.691979542" observedRunningTime="2025-05-15 23:53:54.019542612 +0000 UTC m=+19.205008345" watchObservedRunningTime="2025-05-15 23:53:54.019745793 +0000 UTC m=+19.205211527" May 15 23:53:54.701288 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72c5553e53e4c1fe03b79ecb167284811f51ec8656a9c40cb16ca8597c75decb-shm.mount: Deactivated successfully. May 15 23:53:54.701446 systemd[1]: run-netns-cni\x2df77862d7\x2d9727\x2d366f\x2d980e\x2ddf9146704199.mount: Deactivated successfully. May 15 23:53:54.701548 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e99f7e79b890acd60578dc8e4f6bc82d22db22934c718fad4700e82876fa9b4-shm.mount: Deactivated successfully. 
May 15 23:53:55.010417 kubelet[2554]: E0515 23:53:55.010244 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:53:55.183569 systemd-networkd[1405]: flannel.1: Link UP May 15 23:53:55.183581 systemd-networkd[1405]: flannel.1: Gained carrier May 15 23:53:56.534238 systemd-networkd[1405]: flannel.1: Gained IPv6LL May 15 23:54:00.201478 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:37652.service - OpenSSH per-connection server daemon (10.0.0.1:37652). May 15 23:54:00.242653 sshd[3202]: Accepted publickey for core from 10.0.0.1 port 37652 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk May 15 23:54:00.244679 sshd-session[3202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:54:00.249765 systemd-logind[1476]: New session 6 of user core. May 15 23:54:00.257525 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 23:54:00.386221 sshd[3210]: Connection closed by 10.0.0.1 port 37652 May 15 23:54:00.386714 sshd-session[3202]: pam_unix(sshd:session): session closed for user core May 15 23:54:00.391555 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:37652.service: Deactivated successfully. May 15 23:54:00.393981 systemd[1]: session-6.scope: Deactivated successfully. May 15 23:54:00.394751 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit. May 15 23:54:00.395758 systemd-logind[1476]: Removed session 6. May 15 23:54:05.399108 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:37668.service - OpenSSH per-connection server daemon (10.0.0.1:37668). 
May 15 23:54:05.439479 sshd[3259]: Accepted publickey for core from 10.0.0.1 port 37668 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk May 15 23:54:05.441230 sshd-session[3259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:54:05.446313 systemd-logind[1476]: New session 7 of user core. May 15 23:54:05.455576 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 23:54:05.577374 sshd[3261]: Connection closed by 10.0.0.1 port 37668 May 15 23:54:05.577807 sshd-session[3259]: pam_unix(sshd:session): session closed for user core May 15 23:54:05.582172 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:37668.service: Deactivated successfully. May 15 23:54:05.585184 systemd[1]: session-7.scope: Deactivated successfully. May 15 23:54:05.585996 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. May 15 23:54:05.587202 systemd-logind[1476]: Removed session 7. May 15 23:54:05.920640 kubelet[2554]: E0515 23:54:05.920544 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:54:05.921376 containerd[1492]: time="2025-05-15T23:54:05.921102263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nr62d,Uid:32969168-e75f-44d4-ac89-b3699f7a99b0,Namespace:kube-system,Attempt:0,}" May 15 23:54:05.964334 systemd-networkd[1405]: cni0: Link UP May 15 23:54:05.964373 systemd-networkd[1405]: cni0: Gained carrier May 15 23:54:05.969220 systemd-networkd[1405]: cni0: Lost carrier May 15 23:54:05.976496 systemd-networkd[1405]: vetha0ad588a: Link UP May 15 23:54:05.979006 kernel: cni0: port 1(vetha0ad588a) entered blocking state May 15 23:54:05.979092 kernel: cni0: port 1(vetha0ad588a) entered disabled state May 15 23:54:05.979952 kernel: vetha0ad588a: entered allmulticast mode May 15 23:54:05.980797 kernel: vetha0ad588a: entered promiscuous mode May 15 
23:54:05.982274 kernel: cni0: port 1(vetha0ad588a) entered blocking state May 15 23:54:05.982308 kernel: cni0: port 1(vetha0ad588a) entered forwarding state May 15 23:54:05.984601 kernel: cni0: port 1(vetha0ad588a) entered disabled state May 15 23:54:05.995086 kernel: cni0: port 1(vetha0ad588a) entered blocking state May 15 23:54:05.995218 kernel: cni0: port 1(vetha0ad588a) entered forwarding state May 15 23:54:05.994974 systemd-networkd[1405]: vetha0ad588a: Gained carrier May 15 23:54:05.995328 systemd-networkd[1405]: cni0: Gained carrier May 15 23:54:05.999514 containerd[1492]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00008a8d8), "name":"cbr0", "type":"bridge"} May 15 23:54:05.999514 containerd[1492]: delegateAdd: netconf sent to delegate plugin: May 15 23:54:06.025908 containerd[1492]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-15T23:54:06.025770312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:54:06.025908 containerd[1492]: time="2025-05-15T23:54:06.025864440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:54:06.025908 containerd[1492]: time="2025-05-15T23:54:06.025879718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:54:06.026140 containerd[1492]: time="2025-05-15T23:54:06.025998120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:54:06.053646 systemd[1]: Started cri-containerd-3bb4ec4c502124aa69b49bae6fe584963495ab0b4002966aa6d9724c11669725.scope - libcontainer container 3bb4ec4c502124aa69b49bae6fe584963495ab0b4002966aa6d9724c11669725. May 15 23:54:06.070394 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 23:54:06.100122 containerd[1492]: time="2025-05-15T23:54:06.100071784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nr62d,Uid:32969168-e75f-44d4-ac89-b3699f7a99b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bb4ec4c502124aa69b49bae6fe584963495ab0b4002966aa6d9724c11669725\"" May 15 23:54:06.101879 kubelet[2554]: E0515 23:54:06.101284 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:54:06.103645 containerd[1492]: time="2025-05-15T23:54:06.103607521Z" level=info msg="CreateContainer within sandbox \"3bb4ec4c502124aa69b49bae6fe584963495ab0b4002966aa6d9724c11669725\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:54:06.122066 containerd[1492]: time="2025-05-15T23:54:06.121997080Z" level=info msg="CreateContainer within sandbox \"3bb4ec4c502124aa69b49bae6fe584963495ab0b4002966aa6d9724c11669725\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34f1303b6babf559fad91fcfb95e8f07d5a67a75cb1d3e31dd968bcd5b8ea4b8\"" May 15 23:54:06.123979 containerd[1492]: time="2025-05-15T23:54:06.122738211Z" level=info msg="StartContainer for \"34f1303b6babf559fad91fcfb95e8f07d5a67a75cb1d3e31dd968bcd5b8ea4b8\"" May 15 23:54:06.153873 systemd[1]: 
Started cri-containerd-34f1303b6babf559fad91fcfb95e8f07d5a67a75cb1d3e31dd968bcd5b8ea4b8.scope - libcontainer container 34f1303b6babf559fad91fcfb95e8f07d5a67a75cb1d3e31dd968bcd5b8ea4b8. May 15 23:54:06.193381 containerd[1492]: time="2025-05-15T23:54:06.193173605Z" level=info msg="StartContainer for \"34f1303b6babf559fad91fcfb95e8f07d5a67a75cb1d3e31dd968bcd5b8ea4b8\" returns successfully" May 15 23:54:07.039075 kubelet[2554]: E0515 23:54:07.039021 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:54:07.051676 kubelet[2554]: I0515 23:54:07.051588 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nr62d" podStartSLOduration=28.051563613 podStartE2EDuration="28.051563613s" podCreationTimestamp="2025-05-15 23:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:54:07.051223375 +0000 UTC m=+32.236689098" watchObservedRunningTime="2025-05-15 23:54:07.051563613 +0000 UTC m=+32.237029337" May 15 23:54:07.669589 systemd-networkd[1405]: cni0: Gained IPv6LL May 15 23:54:07.733565 systemd-networkd[1405]: vetha0ad588a: Gained IPv6LL May 15 23:54:08.041404 kubelet[2554]: E0515 23:54:08.041223 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:54:08.921747 kubelet[2554]: E0515 23:54:08.921603 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:54:08.922195 containerd[1492]: time="2025-05-15T23:54:08.922133211Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-b8c7f,Uid:d3433948-69b1-481e-b56d-b37b56ae9852,Namespace:kube-system,Attempt:0,}" May 15 23:54:08.952399 systemd-networkd[1405]: vethfede8df6: Link UP May 15 23:54:08.954445 kernel: cni0: port 2(vethfede8df6) entered blocking state May 15 23:54:08.954502 kernel: cni0: port 2(vethfede8df6) entered disabled state May 15 23:54:08.955392 kernel: vethfede8df6: entered allmulticast mode May 15 23:54:08.955449 kernel: vethfede8df6: entered promiscuous mode May 15 23:54:08.964053 kernel: cni0: port 2(vethfede8df6) entered blocking state May 15 23:54:08.964152 kernel: cni0: port 2(vethfede8df6) entered forwarding state May 15 23:54:08.964318 systemd-networkd[1405]: vethfede8df6: Gained carrier May 15 23:54:08.970946 containerd[1492]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011c8e8), "name":"cbr0", "type":"bridge"} May 15 23:54:08.970946 containerd[1492]: delegateAdd: netconf sent to delegate plugin: May 15 23:54:08.998015 containerd[1492]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-15T23:54:08.997762862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:54:08.998015 containerd[1492]: time="2025-05-15T23:54:08.997847922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:54:08.998015 containerd[1492]: time="2025-05-15T23:54:08.997860696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:54:08.998015 containerd[1492]: time="2025-05-15T23:54:08.997956385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:54:09.030315 systemd[1]: Started cri-containerd-a9b6348de019431b6fae9e9fd95fc418a67f154b9c69c521acf5f4849c622f2c.scope - libcontainer container a9b6348de019431b6fae9e9fd95fc418a67f154b9c69c521acf5f4849c622f2c.
May 15 23:54:09.044689 kubelet[2554]: E0515 23:54:09.044630 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:54:09.052719 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 23:54:09.081208 containerd[1492]: time="2025-05-15T23:54:09.081156360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b8c7f,Uid:d3433948-69b1-481e-b56d-b37b56ae9852,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9b6348de019431b6fae9e9fd95fc418a67f154b9c69c521acf5f4849c622f2c\""
May 15 23:54:09.082196 kubelet[2554]: E0515 23:54:09.082154 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:54:09.083973 containerd[1492]: time="2025-05-15T23:54:09.083932681Z" level=info msg="CreateContainer within sandbox \"a9b6348de019431b6fae9e9fd95fc418a67f154b9c69c521acf5f4849c622f2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 23:54:09.170873 containerd[1492]: time="2025-05-15T23:54:09.170804116Z" level=info msg="CreateContainer within sandbox \"a9b6348de019431b6fae9e9fd95fc418a67f154b9c69c521acf5f4849c622f2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ba2adcc80396d8e24cf67843e51322b40f196768a9810a3794db9af980ceed3\""
May 15 23:54:09.171594 containerd[1492]: time="2025-05-15T23:54:09.171458493Z" level=info msg="StartContainer for \"0ba2adcc80396d8e24cf67843e51322b40f196768a9810a3794db9af980ceed3\""
May 15 23:54:09.204660 systemd[1]: Started cri-containerd-0ba2adcc80396d8e24cf67843e51322b40f196768a9810a3794db9af980ceed3.scope - libcontainer container 0ba2adcc80396d8e24cf67843e51322b40f196768a9810a3794db9af980ceed3.
May 15 23:54:09.372330 containerd[1492]: time="2025-05-15T23:54:09.372265530Z" level=info msg="StartContainer for \"0ba2adcc80396d8e24cf67843e51322b40f196768a9810a3794db9af980ceed3\" returns successfully"
May 15 23:54:10.047196 kubelet[2554]: E0515 23:54:10.047160 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:54:10.083574 kubelet[2554]: I0515 23:54:10.083496 2554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-b8c7f" podStartSLOduration=31.083471113 podStartE2EDuration="31.083471113s" podCreationTimestamp="2025-05-15 23:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:54:10.083019676 +0000 UTC m=+35.268485399" watchObservedRunningTime="2025-05-15 23:54:10.083471113 +0000 UTC m=+35.268936836"
May 15 23:54:10.592894 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:37554.service - OpenSSH per-connection server daemon (10.0.0.1:37554).
May 15 23:54:10.636509 sshd[3545]: Accepted publickey for core from 10.0.0.1 port 37554 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:10.639025 sshd-session[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:10.644838 systemd-logind[1476]: New session 8 of user core.
May 15 23:54:10.656706 systemd[1]: Started session-8.scope - Session 8 of User core.
May 15 23:54:10.773570 sshd[3547]: Connection closed by 10.0.0.1 port 37554
May 15 23:54:10.773930 sshd-session[3545]: pam_unix(sshd:session): session closed for user core
May 15 23:54:10.778492 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:37554.service: Deactivated successfully.
May 15 23:54:10.780894 systemd[1]: session-8.scope: Deactivated successfully.
May 15 23:54:10.781767 systemd-logind[1476]: Session 8 logged out. Waiting for processes to exit.
May 15 23:54:10.783036 systemd-logind[1476]: Removed session 8.
May 15 23:54:10.869562 systemd-networkd[1405]: vethfede8df6: Gained IPv6LL
May 15 23:54:11.049088 kubelet[2554]: E0515 23:54:11.049047 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:54:12.051335 kubelet[2554]: E0515 23:54:12.051277 2554 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:54:15.785374 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:37560.service - OpenSSH per-connection server daemon (10.0.0.1:37560).
May 15 23:54:15.823139 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 37560 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:15.825565 sshd-session[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:15.830653 systemd-logind[1476]: New session 9 of user core.
May 15 23:54:15.841545 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 23:54:15.959138 sshd[3587]: Connection closed by 10.0.0.1 port 37560
May 15 23:54:15.959719 sshd-session[3585]: pam_unix(sshd:session): session closed for user core
May 15 23:54:15.971057 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:37560.service: Deactivated successfully.
May 15 23:54:15.973396 systemd[1]: session-9.scope: Deactivated successfully.
May 15 23:54:15.975485 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit.
May 15 23:54:15.976944 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:37562.service - OpenSSH per-connection server daemon (10.0.0.1:37562).
May 15 23:54:15.977889 systemd-logind[1476]: Removed session 9.
May 15 23:54:16.014887 sshd[3601]: Accepted publickey for core from 10.0.0.1 port 37562 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:16.016556 sshd-session[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:16.020844 systemd-logind[1476]: New session 10 of user core.
May 15 23:54:16.028520 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 23:54:16.185554 sshd[3603]: Connection closed by 10.0.0.1 port 37562
May 15 23:54:16.188190 sshd-session[3601]: pam_unix(sshd:session): session closed for user core
May 15 23:54:16.204401 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:37562.service: Deactivated successfully.
May 15 23:54:16.207798 systemd[1]: session-10.scope: Deactivated successfully.
May 15 23:54:16.211513 systemd-logind[1476]: Session 10 logged out. Waiting for processes to exit.
May 15 23:54:16.220996 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:37570.service - OpenSSH per-connection server daemon (10.0.0.1:37570).
May 15 23:54:16.222739 systemd-logind[1476]: Removed session 10.
May 15 23:54:16.260797 sshd[3614]: Accepted publickey for core from 10.0.0.1 port 37570 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:16.262683 sshd-session[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:16.267890 systemd-logind[1476]: New session 11 of user core.
May 15 23:54:16.278687 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 23:54:16.394159 sshd[3616]: Connection closed by 10.0.0.1 port 37570
May 15 23:54:16.394606 sshd-session[3614]: pam_unix(sshd:session): session closed for user core
May 15 23:54:16.398966 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:37570.service: Deactivated successfully.
May 15 23:54:16.400992 systemd[1]: session-11.scope: Deactivated successfully.
May 15 23:54:16.401661 systemd-logind[1476]: Session 11 logged out. Waiting for processes to exit.
May 15 23:54:16.402729 systemd-logind[1476]: Removed session 11.
May 15 23:54:21.408130 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:52320.service - OpenSSH per-connection server daemon (10.0.0.1:52320).
May 15 23:54:21.451889 sshd[3649]: Accepted publickey for core from 10.0.0.1 port 52320 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:21.454292 sshd-session[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:21.460717 systemd-logind[1476]: New session 12 of user core.
May 15 23:54:21.467696 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 23:54:21.589580 sshd[3651]: Connection closed by 10.0.0.1 port 52320
May 15 23:54:21.590042 sshd-session[3649]: pam_unix(sshd:session): session closed for user core
May 15 23:54:21.595235 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:52320.service: Deactivated successfully.
May 15 23:54:21.597692 systemd[1]: session-12.scope: Deactivated successfully.
May 15 23:54:21.598528 systemd-logind[1476]: Session 12 logged out. Waiting for processes to exit.
May 15 23:54:21.599883 systemd-logind[1476]: Removed session 12.
May 15 23:54:26.604457 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:52324.service - OpenSSH per-connection server daemon (10.0.0.1:52324).
May 15 23:54:26.653012 sshd[3685]: Accepted publickey for core from 10.0.0.1 port 52324 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:26.654883 sshd-session[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:26.659765 systemd-logind[1476]: New session 13 of user core.
May 15 23:54:26.669608 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 23:54:26.793265 sshd[3687]: Connection closed by 10.0.0.1 port 52324
May 15 23:54:26.793732 sshd-session[3685]: pam_unix(sshd:session): session closed for user core
May 15 23:54:26.801980 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:52324.service: Deactivated successfully.
May 15 23:54:26.804086 systemd[1]: session-13.scope: Deactivated successfully.
May 15 23:54:26.806085 systemd-logind[1476]: Session 13 logged out. Waiting for processes to exit.
May 15 23:54:26.814647 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:52330.service - OpenSSH per-connection server daemon (10.0.0.1:52330).
May 15 23:54:26.815749 systemd-logind[1476]: Removed session 13.
May 15 23:54:26.859384 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 52330 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:26.861329 sshd-session[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:26.866145 systemd-logind[1476]: New session 14 of user core.
May 15 23:54:26.873492 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 23:54:27.204901 sshd[3701]: Connection closed by 10.0.0.1 port 52330
May 15 23:54:27.205543 sshd-session[3699]: pam_unix(sshd:session): session closed for user core
May 15 23:54:27.217038 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:52330.service: Deactivated successfully.
May 15 23:54:27.221268 systemd[1]: session-14.scope: Deactivated successfully.
May 15 23:54:27.224207 systemd-logind[1476]: Session 14 logged out. Waiting for processes to exit.
May 15 23:54:27.234917 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:52332.service - OpenSSH per-connection server daemon (10.0.0.1:52332).
May 15 23:54:27.236297 systemd-logind[1476]: Removed session 14.
May 15 23:54:27.275926 sshd[3711]: Accepted publickey for core from 10.0.0.1 port 52332 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:27.278559 sshd-session[3711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:27.285475 systemd-logind[1476]: New session 15 of user core.
May 15 23:54:27.295667 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 23:54:28.344506 sshd[3713]: Connection closed by 10.0.0.1 port 52332
May 15 23:54:28.346762 sshd-session[3711]: pam_unix(sshd:session): session closed for user core
May 15 23:54:28.356903 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:52332.service: Deactivated successfully.
May 15 23:54:28.359643 systemd[1]: session-15.scope: Deactivated successfully.
May 15 23:54:28.362758 systemd-logind[1476]: Session 15 logged out. Waiting for processes to exit.
May 15 23:54:28.381282 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:60674.service - OpenSSH per-connection server daemon (10.0.0.1:60674).
May 15 23:54:28.382183 systemd-logind[1476]: Removed session 15.
May 15 23:54:28.418272 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 60674 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:28.420323 sshd-session[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:28.426221 systemd-logind[1476]: New session 16 of user core.
May 15 23:54:28.437610 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 23:54:28.655990 sshd[3734]: Connection closed by 10.0.0.1 port 60674
May 15 23:54:28.654537 sshd-session[3732]: pam_unix(sshd:session): session closed for user core
May 15 23:54:28.666620 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:60674.service: Deactivated successfully.
May 15 23:54:28.669075 systemd[1]: session-16.scope: Deactivated successfully.
May 15 23:54:28.670693 systemd-logind[1476]: Session 16 logged out. Waiting for processes to exit.
May 15 23:54:28.679691 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:60684.service - OpenSSH per-connection server daemon (10.0.0.1:60684).
May 15 23:54:28.680750 systemd-logind[1476]: Removed session 16.
May 15 23:54:28.713467 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 60684 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:28.715109 sshd-session[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:28.719822 systemd-logind[1476]: New session 17 of user core.
May 15 23:54:28.737647 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 23:54:28.857041 sshd[3746]: Connection closed by 10.0.0.1 port 60684
May 15 23:54:28.857624 sshd-session[3744]: pam_unix(sshd:session): session closed for user core
May 15 23:54:28.862892 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:60684.service: Deactivated successfully.
May 15 23:54:28.865800 systemd[1]: session-17.scope: Deactivated successfully.
May 15 23:54:28.866574 systemd-logind[1476]: Session 17 logged out. Waiting for processes to exit.
May 15 23:54:28.867604 systemd-logind[1476]: Removed session 17.
May 15 23:54:33.870331 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:60694.service - OpenSSH per-connection server daemon (10.0.0.1:60694).
May 15 23:54:33.912568 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 60694 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:33.914655 sshd-session[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:33.919744 systemd-logind[1476]: New session 18 of user core.
May 15 23:54:33.928620 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 23:54:34.042555 sshd[3782]: Connection closed by 10.0.0.1 port 60694
May 15 23:54:34.042969 sshd-session[3780]: pam_unix(sshd:session): session closed for user core
May 15 23:54:34.047195 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:60694.service: Deactivated successfully.
May 15 23:54:34.049469 systemd[1]: session-18.scope: Deactivated successfully.
May 15 23:54:34.050286 systemd-logind[1476]: Session 18 logged out. Waiting for processes to exit.
May 15 23:54:34.051593 systemd-logind[1476]: Removed session 18.
May 15 23:54:39.061998 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:43150.service - OpenSSH per-connection server daemon (10.0.0.1:43150).
May 15 23:54:39.111338 sshd[3821]: Accepted publickey for core from 10.0.0.1 port 43150 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:39.113483 sshd-session[3821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:39.118748 systemd-logind[1476]: New session 19 of user core.
May 15 23:54:39.128563 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 23:54:39.246278 sshd[3823]: Connection closed by 10.0.0.1 port 43150
May 15 23:54:39.246747 sshd-session[3821]: pam_unix(sshd:session): session closed for user core
May 15 23:54:39.250005 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:43150.service: Deactivated successfully.
May 15 23:54:39.252700 systemd[1]: session-19.scope: Deactivated successfully.
May 15 23:54:39.254490 systemd-logind[1476]: Session 19 logged out. Waiting for processes to exit.
May 15 23:54:39.256001 systemd-logind[1476]: Removed session 19.
May 15 23:54:44.260309 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:43156.service - OpenSSH per-connection server daemon (10.0.0.1:43156).
May 15 23:54:44.301845 sshd[3858]: Accepted publickey for core from 10.0.0.1 port 43156 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:44.303887 sshd-session[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:44.307975 systemd-logind[1476]: New session 20 of user core.
May 15 23:54:44.316592 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 23:54:44.460554 sshd[3860]: Connection closed by 10.0.0.1 port 43156
May 15 23:54:44.461040 sshd-session[3858]: pam_unix(sshd:session): session closed for user core
May 15 23:54:44.465831 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:43156.service: Deactivated successfully.
May 15 23:54:44.468418 systemd[1]: session-20.scope: Deactivated successfully.
May 15 23:54:44.469164 systemd-logind[1476]: Session 20 logged out. Waiting for processes to exit.
May 15 23:54:44.470313 systemd-logind[1476]: Removed session 20.
May 15 23:54:49.473288 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:54176.service - OpenSSH per-connection server daemon (10.0.0.1:54176).
May 15 23:54:49.515997 sshd[3893]: Accepted publickey for core from 10.0.0.1 port 54176 ssh2: RSA SHA256:FeZvce+9VBoccDWFJvRImGX99S3MjULj8flEj2AChUk
May 15 23:54:49.518319 sshd-session[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:54:49.526038 systemd-logind[1476]: New session 21 of user core.
May 15 23:54:49.535778 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 23:54:49.684444 sshd[3895]: Connection closed by 10.0.0.1 port 54176
May 15 23:54:49.685187 sshd-session[3893]: pam_unix(sshd:session): session closed for user core
May 15 23:54:49.688875 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:54176.service: Deactivated successfully.
May 15 23:54:49.692024 systemd[1]: session-21.scope: Deactivated successfully.
May 15 23:54:49.695080 systemd-logind[1476]: Session 21 logged out. Waiting for processes to exit.
May 15 23:54:49.696498 systemd-logind[1476]: Removed session 21.