May 12 23:50:08.921107 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 12 23:50:08.921132 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon May 12 22:21:23 -00 2025
May 12 23:50:08.921142 kernel: KASLR enabled
May 12 23:50:08.921147 kernel: efi: EFI v2.7 by EDK II
May 12 23:50:08.921153 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 12 23:50:08.921159 kernel: random: crng init done
May 12 23:50:08.921166 kernel: secureboot: Secure boot disabled
May 12 23:50:08.921171 kernel: ACPI: Early table checksum verification disabled
May 12 23:50:08.921177 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 12 23:50:08.921186 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 12 23:50:08.921192 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:50:08.921197 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:50:08.921203 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:50:08.921209 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:50:08.921217 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:50:08.921224 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:50:08.921230 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:50:08.921237 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:50:08.921243 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:50:08.921249 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 12 23:50:08.921255 kernel: NUMA: Failed to initialise from firmware
May 12 23:50:08.921261 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 12 23:50:08.921267 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 12 23:50:08.921273 kernel: Zone ranges:
May 12 23:50:08.921280 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 12 23:50:08.921287 kernel: DMA32 empty
May 12 23:50:08.921293 kernel: Normal empty
May 12 23:50:08.921299 kernel: Movable zone start for each node
May 12 23:50:08.921305 kernel: Early memory node ranges
May 12 23:50:08.921311 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 12 23:50:08.921317 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 12 23:50:08.921324 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 12 23:50:08.921330 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 12 23:50:08.921336 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 12 23:50:08.921342 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 12 23:50:08.921348 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 12 23:50:08.921354 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 12 23:50:08.921362 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 12 23:50:08.921368 kernel: psci: probing for conduit method from ACPI.
May 12 23:50:08.921374 kernel: psci: PSCIv1.1 detected in firmware.
May 12 23:50:08.921383 kernel: psci: Using standard PSCI v0.2 function IDs
May 12 23:50:08.921389 kernel: psci: Trusted OS migration not required
May 12 23:50:08.921396 kernel: psci: SMC Calling Convention v1.1
May 12 23:50:08.921404 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 12 23:50:08.921410 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 12 23:50:08.921417 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 12 23:50:08.921423 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 12 23:50:08.921430 kernel: Detected PIPT I-cache on CPU0
May 12 23:50:08.921436 kernel: CPU features: detected: GIC system register CPU interface
May 12 23:50:08.921443 kernel: CPU features: detected: Hardware dirty bit management
May 12 23:50:08.921449 kernel: CPU features: detected: Spectre-v4
May 12 23:50:08.921464 kernel: CPU features: detected: Spectre-BHB
May 12 23:50:08.921488 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 12 23:50:08.921498 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 12 23:50:08.921505 kernel: CPU features: detected: ARM erratum 1418040
May 12 23:50:08.921511 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 12 23:50:08.921518 kernel: alternatives: applying boot alternatives
May 12 23:50:08.921525 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e3fb02dca379a9c7f05d94ae800dbbcafb80c81ea68c8486d0613b136c5c38d4
May 12 23:50:08.921532 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 12 23:50:08.921539 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 12 23:50:08.921545 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 12 23:50:08.921552 kernel: Fallback order for Node 0: 0
May 12 23:50:08.921559 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 12 23:50:08.921565 kernel: Policy zone: DMA
May 12 23:50:08.921573 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 12 23:50:08.921579 kernel: software IO TLB: area num 4.
May 12 23:50:08.921586 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 12 23:50:08.921596 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
May 12 23:50:08.921604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 12 23:50:08.921610 kernel: rcu: Preemptible hierarchical RCU implementation.
May 12 23:50:08.921618 kernel: rcu: RCU event tracing is enabled.
May 12 23:50:08.921625 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 12 23:50:08.921632 kernel: Trampoline variant of Tasks RCU enabled.
May 12 23:50:08.921639 kernel: Tracing variant of Tasks RCU enabled.
May 12 23:50:08.921646 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 12 23:50:08.921652 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 12 23:50:08.921661 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 12 23:50:08.921667 kernel: GICv3: 256 SPIs implemented
May 12 23:50:08.921673 kernel: GICv3: 0 Extended SPIs implemented
May 12 23:50:08.921680 kernel: Root IRQ handler: gic_handle_irq
May 12 23:50:08.921686 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 12 23:50:08.921693 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 12 23:50:08.921699 kernel: ITS [mem 0x08080000-0x0809ffff]
May 12 23:50:08.921706 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 12 23:50:08.921713 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 12 23:50:08.921719 kernel: GICv3: using LPI property table @0x00000000400f0000
May 12 23:50:08.921729 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 12 23:50:08.921736 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 12 23:50:08.921743 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 23:50:08.921750 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 12 23:50:08.921757 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 12 23:50:08.921764 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 12 23:50:08.921770 kernel: arm-pv: using stolen time PV
May 12 23:50:08.921777 kernel: Console: colour dummy device 80x25
May 12 23:50:08.921784 kernel: ACPI: Core revision 20230628
May 12 23:50:08.921791 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 12 23:50:08.921798 kernel: pid_max: default: 32768 minimum: 301
May 12 23:50:08.921807 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 12 23:50:08.921819 kernel: landlock: Up and running.
May 12 23:50:08.921826 kernel: SELinux: Initializing.
May 12 23:50:08.921832 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 23:50:08.921839 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 23:50:08.921846 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 12 23:50:08.921853 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 23:50:08.921860 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 23:50:08.921867 kernel: rcu: Hierarchical SRCU implementation.
May 12 23:50:08.921876 kernel: rcu: Max phase no-delay instances is 400.
May 12 23:50:08.921883 kernel: Platform MSI: ITS@0x8080000 domain created
May 12 23:50:08.921889 kernel: PCI/MSI: ITS@0x8080000 domain created
May 12 23:50:08.921896 kernel: Remapping and enabling EFI services.
May 12 23:50:08.921903 kernel: smp: Bringing up secondary CPUs ...
May 12 23:50:08.921910 kernel: Detected PIPT I-cache on CPU1
May 12 23:50:08.921917 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 12 23:50:08.921924 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 12 23:50:08.921931 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 23:50:08.921938 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 12 23:50:08.921946 kernel: Detected PIPT I-cache on CPU2
May 12 23:50:08.921954 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 12 23:50:08.921965 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 12 23:50:08.921974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 23:50:08.921981 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 12 23:50:08.921988 kernel: Detected PIPT I-cache on CPU3
May 12 23:50:08.921995 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 12 23:50:08.922002 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 12 23:50:08.922009 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 23:50:08.922016 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 12 23:50:08.922026 kernel: smp: Brought up 1 node, 4 CPUs
May 12 23:50:08.922033 kernel: SMP: Total of 4 processors activated.
May 12 23:50:08.922041 kernel: CPU features: detected: 32-bit EL0 Support
May 12 23:50:08.922048 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 12 23:50:08.922055 kernel: CPU features: detected: Common not Private translations
May 12 23:50:08.922062 kernel: CPU features: detected: CRC32 instructions
May 12 23:50:08.922069 kernel: CPU features: detected: Enhanced Virtualization Traps
May 12 23:50:08.922089 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 12 23:50:08.922096 kernel: CPU features: detected: LSE atomic instructions
May 12 23:50:08.922103 kernel: CPU features: detected: Privileged Access Never
May 12 23:50:08.922110 kernel: CPU features: detected: RAS Extension Support
May 12 23:50:08.922117 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 12 23:50:08.922124 kernel: CPU: All CPU(s) started at EL1
May 12 23:50:08.922132 kernel: alternatives: applying system-wide alternatives
May 12 23:50:08.922139 kernel: devtmpfs: initialized
May 12 23:50:08.922146 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 12 23:50:08.922155 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 12 23:50:08.922162 kernel: pinctrl core: initialized pinctrl subsystem
May 12 23:50:08.922169 kernel: SMBIOS 3.0.0 present.
May 12 23:50:08.922176 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 12 23:50:08.922184 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 12 23:50:08.922191 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 12 23:50:08.922198 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 12 23:50:08.922205 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 12 23:50:08.922212 kernel: audit: initializing netlink subsys (disabled)
May 12 23:50:08.922223 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
May 12 23:50:08.922230 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 12 23:50:08.922237 kernel: cpuidle: using governor menu
May 12 23:50:08.922244 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 12 23:50:08.922252 kernel: ASID allocator initialised with 32768 entries
May 12 23:50:08.922259 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 12 23:50:08.922266 kernel: Serial: AMBA PL011 UART driver
May 12 23:50:08.922273 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 12 23:50:08.922280 kernel: Modules: 0 pages in range for non-PLT usage
May 12 23:50:08.922289 kernel: Modules: 508944 pages in range for PLT usage
May 12 23:50:08.922296 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 12 23:50:08.922303 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 12 23:50:08.922311 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 12 23:50:08.922318 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 12 23:50:08.922325 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 12 23:50:08.922332 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 12 23:50:08.922339 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 12 23:50:08.922346 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 12 23:50:08.922354 kernel: ACPI: Added _OSI(Module Device)
May 12 23:50:08.922361 kernel: ACPI: Added _OSI(Processor Device)
May 12 23:50:08.922369 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 12 23:50:08.922376 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 12 23:50:08.922383 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 12 23:50:08.922390 kernel: ACPI: Interpreter enabled
May 12 23:50:08.922397 kernel: ACPI: Using GIC for interrupt routing
May 12 23:50:08.922404 kernel: ACPI: MCFG table detected, 1 entries
May 12 23:50:08.922411 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 12 23:50:08.922419 kernel: printk: console [ttyAMA0] enabled
May 12 23:50:08.922426 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 12 23:50:08.922613 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 12 23:50:08.922689 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 12 23:50:08.922752 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 12 23:50:08.922819 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 12 23:50:08.922884 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 12 23:50:08.922897 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 12 23:50:08.922904 kernel: PCI host bridge to bus 0000:00
May 12 23:50:08.922989 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 12 23:50:08.923048 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 12 23:50:08.923105 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 12 23:50:08.923160 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 12 23:50:08.923236 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 12 23:50:08.923313 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 12 23:50:08.923378 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 12 23:50:08.923442 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 12 23:50:08.923529 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 12 23:50:08.923606 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 12 23:50:08.923678 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 12 23:50:08.923747 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 12 23:50:08.923816 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 12 23:50:08.923881 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 12 23:50:08.923938 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 12 23:50:08.923947 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 12 23:50:08.923955 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 12 23:50:08.923962 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 12 23:50:08.923969 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 12 23:50:08.923977 kernel: iommu: Default domain type: Translated
May 12 23:50:08.923986 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 12 23:50:08.923993 kernel: efivars: Registered efivars operations
May 12 23:50:08.924001 kernel: vgaarb: loaded
May 12 23:50:08.924008 kernel: clocksource: Switched to clocksource arch_sys_counter
May 12 23:50:08.924015 kernel: VFS: Disk quotas dquot_6.6.0
May 12 23:50:08.924022 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 12 23:50:08.924030 kernel: pnp: PnP ACPI init
May 12 23:50:08.924100 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 12 23:50:08.924112 kernel: pnp: PnP ACPI: found 1 devices
May 12 23:50:08.924119 kernel: NET: Registered PF_INET protocol family
May 12 23:50:08.924127 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 12 23:50:08.924134 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 12 23:50:08.924142 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 12 23:50:08.924149 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 12 23:50:08.924156 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 12 23:50:08.924164 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 12 23:50:08.924171 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 23:50:08.924179 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 23:50:08.924187 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 12 23:50:08.924194 kernel: PCI: CLS 0 bytes, default 64
May 12 23:50:08.924201 kernel: kvm [1]: HYP mode not available
May 12 23:50:08.924208 kernel: Initialise system trusted keyrings
May 12 23:50:08.924215 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 12 23:50:08.924222 kernel: Key type asymmetric registered
May 12 23:50:08.924229 kernel: Asymmetric key parser 'x509' registered
May 12 23:50:08.924236 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 12 23:50:08.924245 kernel: io scheduler mq-deadline registered
May 12 23:50:08.924252 kernel: io scheduler kyber registered
May 12 23:50:08.924259 kernel: io scheduler bfq registered
May 12 23:50:08.924267 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 12 23:50:08.924274 kernel: ACPI: button: Power Button [PWRB]
May 12 23:50:08.924281 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 12 23:50:08.924346 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 12 23:50:08.924356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 12 23:50:08.924367 kernel: thunder_xcv, ver 1.0
May 12 23:50:08.924376 kernel: thunder_bgx, ver 1.0
May 12 23:50:08.924383 kernel: nicpf, ver 1.0
May 12 23:50:08.924391 kernel: nicvf, ver 1.0
May 12 23:50:08.924497 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 12 23:50:08.924568 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-12T23:50:08 UTC (1747093808)
May 12 23:50:08.924578 kernel: hid: raw HID events driver (C) Jiri Kosina
May 12 23:50:08.924585 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 12 23:50:08.924598 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 12 23:50:08.924609 kernel: watchdog: Hard watchdog permanently disabled
May 12 23:50:08.924618 kernel: NET: Registered PF_INET6 protocol family
May 12 23:50:08.924626 kernel: Segment Routing with IPv6
May 12 23:50:08.924633 kernel: In-situ OAM (IOAM) with IPv6
May 12 23:50:08.924640 kernel: NET: Registered PF_PACKET protocol family
May 12 23:50:08.924647 kernel: Key type dns_resolver registered
May 12 23:50:08.924654 kernel: registered taskstats version 1
May 12 23:50:08.924662 kernel: Loading compiled-in X.509 certificates
May 12 23:50:08.924669 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: f172f0fb4eac06c214e4b9ce0f39d6c4075ccc9a'
May 12 23:50:08.924678 kernel: Key type .fscrypt registered
May 12 23:50:08.924685 kernel: Key type fscrypt-provisioning registered
May 12 23:50:08.924692 kernel: ima: No TPM chip found, activating TPM-bypass!
May 12 23:50:08.924699 kernel: ima: Allocated hash algorithm: sha1
May 12 23:50:08.924707 kernel: ima: No architecture policies found
May 12 23:50:08.924714 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 12 23:50:08.924721 kernel: clk: Disabling unused clocks
May 12 23:50:08.924728 kernel: Freeing unused kernel memory: 39744K
May 12 23:50:08.924735 kernel: Run /init as init process
May 12 23:50:08.924743 kernel: with arguments:
May 12 23:50:08.924750 kernel: /init
May 12 23:50:08.924757 kernel: with environment:
May 12 23:50:08.924764 kernel: HOME=/
May 12 23:50:08.924771 kernel: TERM=linux
May 12 23:50:08.924778 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 12 23:50:08.924787 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 12 23:50:08.924796 systemd[1]: Detected virtualization kvm.
May 12 23:50:08.924805 systemd[1]: Detected architecture arm64.
May 12 23:50:08.924818 systemd[1]: Running in initrd.
May 12 23:50:08.924827 systemd[1]: No hostname configured, using default hostname.
May 12 23:50:08.924834 systemd[1]: Hostname set to .
May 12 23:50:08.924842 systemd[1]: Initializing machine ID from VM UUID.
May 12 23:50:08.924850 systemd[1]: Queued start job for default target initrd.target.
May 12 23:50:08.924858 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 23:50:08.924865 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 23:50:08.924876 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 12 23:50:08.924884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 23:50:08.924892 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 12 23:50:08.924900 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 12 23:50:08.924909 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 12 23:50:08.924917 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 12 23:50:08.924927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 23:50:08.924934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 12 23:50:08.924942 systemd[1]: Reached target paths.target - Path Units.
May 12 23:50:08.924950 systemd[1]: Reached target slices.target - Slice Units.
May 12 23:50:08.924957 systemd[1]: Reached target swap.target - Swaps.
May 12 23:50:08.924965 systemd[1]: Reached target timers.target - Timer Units.
May 12 23:50:08.924975 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 12 23:50:08.924983 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 23:50:08.924991 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 12 23:50:08.925000 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 12 23:50:08.925008 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 12 23:50:08.925016 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 12 23:50:08.925024 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 23:50:08.925031 systemd[1]: Reached target sockets.target - Socket Units.
May 12 23:50:08.925039 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 12 23:50:08.925047 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 12 23:50:08.925054 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 12 23:50:08.925062 systemd[1]: Starting systemd-fsck-usr.service...
May 12 23:50:08.925071 systemd[1]: Starting systemd-journald.service - Journal Service...
May 12 23:50:08.925079 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 12 23:50:08.925087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 23:50:08.925094 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 12 23:50:08.925102 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 23:50:08.925110 systemd[1]: Finished systemd-fsck-usr.service.
May 12 23:50:08.925120 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 12 23:50:08.925128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 23:50:08.925154 systemd-journald[239]: Collecting audit messages is disabled.
May 12 23:50:08.925175 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 12 23:50:08.925184 systemd-journald[239]: Journal started
May 12 23:50:08.925203 systemd-journald[239]: Runtime Journal (/run/log/journal/86d08b4d70034b969e9baafb3c748601) is 5.9M, max 47.3M, 41.4M free.
May 12 23:50:08.917031 systemd-modules-load[240]: Inserted module 'overlay'
May 12 23:50:08.927525 systemd[1]: Started systemd-journald.service - Journal Service.
May 12 23:50:08.927880 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 12 23:50:08.930096 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 12 23:50:08.931480 kernel: Bridge firewalling registered
May 12 23:50:08.931413 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 12 23:50:08.932685 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 12 23:50:08.935330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 12 23:50:08.936745 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 12 23:50:08.941061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 12 23:50:08.949500 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 23:50:08.950732 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 23:50:08.953633 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 23:50:08.954788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 12 23:50:08.966627 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 12 23:50:08.968667 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 12 23:50:08.977138 dracut-cmdline[276]: dracut-dracut-053
May 12 23:50:08.979624 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e3fb02dca379a9c7f05d94ae800dbbcafb80c81ea68c8486d0613b136c5c38d4
May 12 23:50:08.999065 systemd-resolved[278]: Positive Trust Anchors:
May 12 23:50:08.999143 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 12 23:50:08.999176 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 12 23:50:09.003831 systemd-resolved[278]: Defaulting to hostname 'linux'.
May 12 23:50:09.004849 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 12 23:50:09.006774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 12 23:50:09.056487 kernel: SCSI subsystem initialized
May 12 23:50:09.061480 kernel: Loading iSCSI transport class v2.0-870.
May 12 23:50:09.068492 kernel: iscsi: registered transport (tcp)
May 12 23:50:09.081476 kernel: iscsi: registered transport (qla4xxx)
May 12 23:50:09.081494 kernel: QLogic iSCSI HBA Driver
May 12 23:50:09.124360 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 12 23:50:09.133605 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 12 23:50:09.152064 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 12 23:50:09.152109 kernel: device-mapper: uevent: version 1.0.3
May 12 23:50:09.152120 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 12 23:50:09.200487 kernel: raid6: neonx8 gen() 15600 MB/s
May 12 23:50:09.217471 kernel: raid6: neonx4 gen() 15518 MB/s
May 12 23:50:09.234475 kernel: raid6: neonx2 gen() 13157 MB/s
May 12 23:50:09.251470 kernel: raid6: neonx1 gen() 10415 MB/s
May 12 23:50:09.268469 kernel: raid6: int64x8 gen() 6947 MB/s
May 12 23:50:09.285468 kernel: raid6: int64x4 gen() 7300 MB/s
May 12 23:50:09.302471 kernel: raid6: int64x2 gen() 6124 MB/s
May 12 23:50:09.319483 kernel: raid6: int64x1 gen() 5043 MB/s
May 12 23:50:09.319513 kernel: raid6: using algorithm neonx8 gen() 15600 MB/s
May 12 23:50:09.336472 kernel: raid6: .... xor() 11868 MB/s, rmw enabled
May 12 23:50:09.336487 kernel: raid6: using neon recovery algorithm
May 12 23:50:09.341475 kernel: xor: measuring software checksum speed
May 12 23:50:09.341488 kernel: 8regs : 19816 MB/sec
May 12 23:50:09.342966 kernel: 32regs : 17677 MB/sec
May 12 23:50:09.342982 kernel: arm64_neon : 26238 MB/sec
May 12 23:50:09.342997 kernel: xor: using function: arm64_neon (26238 MB/sec)
May 12 23:50:09.394485 kernel: Btrfs loaded, zoned=no, fsverity=no
May 12 23:50:09.405329 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 12 23:50:09.418651 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 23:50:09.429571 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 12 23:50:09.432641 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 23:50:09.445645 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 12 23:50:09.456504 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
May 12 23:50:09.483010 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 23:50:09.492604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 12 23:50:09.533189 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 23:50:09.542639 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 12 23:50:09.556633 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 12 23:50:09.558011 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 23:50:09.559071 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 23:50:09.560908 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 12 23:50:09.568630 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 12 23:50:09.573966 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 12 23:50:09.582925 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 12 23:50:09.578514 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 12 23:50:09.588544 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 12 23:50:09.588583 kernel: GPT:9289727 != 19775487
May 12 23:50:09.588594 kernel: GPT:Alternate GPT header not at the end of the disk.
May 12 23:50:09.589831 kernel: GPT:9289727 != 19775487
May 12 23:50:09.589855 kernel: GPT: Use GNU Parted to correct GPT errors.
May 12 23:50:09.589865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 23:50:09.593907 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 12 23:50:09.594028 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 23:50:09.596749 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 12 23:50:09.597710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 12 23:50:09.597864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 12 23:50:09.599683 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 12 23:50:09.607735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 23:50:09.616007 kernel: BTRFS: device fsid 8bc7e2dd-1c9f-4f38-9a4f-4a4a9806cb3a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (504)
May 12 23:50:09.617240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 23:50:09.620723 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (522)
May 12 23:50:09.622454 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 12 23:50:09.633168 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 12 23:50:09.636815 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 12 23:50:09.637771 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 12 23:50:09.642841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 12 23:50:09.657669 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 12 23:50:09.659358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 12 23:50:09.664161 disk-uuid[549]: Primary Header is updated.
May 12 23:50:09.664161 disk-uuid[549]: Secondary Entries is updated.
May 12 23:50:09.664161 disk-uuid[549]: Secondary Header is updated.
May 12 23:50:09.667486 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 23:50:09.683183 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 23:50:10.680510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 23:50:10.680970 disk-uuid[551]: The operation has completed successfully.
May 12 23:50:10.712580 systemd[1]: disk-uuid.service: Deactivated successfully.
May 12 23:50:10.712685 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 12 23:50:10.724755 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 12 23:50:10.730330 sh[571]: Success
May 12 23:50:10.757736 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 12 23:50:10.810162 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 12 23:50:10.812292 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 12 23:50:10.813115 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 12 23:50:10.824147 kernel: BTRFS info (device dm-0): first mount of filesystem 8bc7e2dd-1c9f-4f38-9a4f-4a4a9806cb3a
May 12 23:50:10.824185 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 12 23:50:10.824196 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 12 23:50:10.825076 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 12 23:50:10.826470 kernel: BTRFS info (device dm-0): using free space tree
May 12 23:50:10.829748 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 12 23:50:10.831009 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 12 23:50:10.831781 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 12 23:50:10.834075 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 12 23:50:10.844829 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 12 23:50:10.844881 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 23:50:10.844893 kernel: BTRFS info (device vda6): using free space tree
May 12 23:50:10.847480 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 23:50:10.854198 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 12 23:50:10.855608 kernel: BTRFS info (device vda6): last unmount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 12 23:50:10.865369 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 12 23:50:10.875660 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 12 23:50:10.938043 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 12 23:50:10.947658 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 12 23:50:10.973133 systemd-networkd[762]: lo: Link UP
May 12 23:50:10.973147 systemd-networkd[762]: lo: Gained carrier
May 12 23:50:10.973981 systemd-networkd[762]: Enumeration completed
May 12 23:50:10.974686 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 23:50:10.974689 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 12 23:50:10.975432 systemd-networkd[762]: eth0: Link UP
May 12 23:50:10.975435 systemd-networkd[762]: eth0: Gained carrier
May 12 23:50:10.975441 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 23:50:10.976887 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 12 23:50:10.977792 systemd[1]: Reached target network.target - Network.
May 12 23:50:10.986070 ignition[669]: Ignition 2.20.0
May 12 23:50:10.986079 ignition[669]: Stage: fetch-offline
May 12 23:50:10.986124 ignition[669]: no configs at "/usr/lib/ignition/base.d"
May 12 23:50:10.987516 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 12 23:50:10.986133 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:50:10.986295 ignition[669]: parsed url from cmdline: ""
May 12 23:50:10.986298 ignition[669]: no config URL provided
May 12 23:50:10.986303 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
May 12 23:50:10.986312 ignition[669]: no config at "/usr/lib/ignition/user.ign"
May 12 23:50:10.986339 ignition[669]: op(1): [started] loading QEMU firmware config module
May 12 23:50:10.986343 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 12 23:50:11.001902 ignition[669]: op(1): [finished] loading QEMU firmware config module
May 12 23:50:11.023500 ignition[669]: parsing config with SHA512: 5688409fd294483dc5afff4b27ac412f99b01b6e182bffe719abc04ea65da3ae54a75c10355b2198c2cfe22613a75707f8ed00e01c9667a83f7832ad9b922d49
May 12 23:50:11.029659 unknown[669]: fetched base config from "system"
May 12 23:50:11.029675 unknown[669]: fetched user config from "qemu"
May 12 23:50:11.030374 ignition[669]: fetch-offline: fetch-offline passed
May 12 23:50:11.030507 ignition[669]: Ignition finished successfully
May 12 23:50:11.031998 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 23:50:11.033185 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 12 23:50:11.045626 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 12 23:50:11.056043 ignition[774]: Ignition 2.20.0
May 12 23:50:11.056054 ignition[774]: Stage: kargs
May 12 23:50:11.056224 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 12 23:50:11.056233 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:50:11.057176 ignition[774]: kargs: kargs passed
May 12 23:50:11.059888 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 12 23:50:11.057222 ignition[774]: Ignition finished successfully
May 12 23:50:11.070615 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 12 23:50:11.080259 ignition[782]: Ignition 2.20.0
May 12 23:50:11.080270 ignition[782]: Stage: disks
May 12 23:50:11.080432 ignition[782]: no configs at "/usr/lib/ignition/base.d"
May 12 23:50:11.080442 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:50:11.082864 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 12 23:50:11.081502 ignition[782]: disks: disks passed
May 12 23:50:11.084271 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 12 23:50:11.081547 ignition[782]: Ignition finished successfully
May 12 23:50:11.085489 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 12 23:50:11.086653 systemd[1]: Reached target local-fs.target - Local File Systems.
May 12 23:50:11.088354 systemd[1]: Reached target sysinit.target - System Initialization.
May 12 23:50:11.090007 systemd[1]: Reached target basic.target - Basic System.
May 12 23:50:11.098601 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 12 23:50:11.109569 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 12 23:50:11.113014 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 12 23:50:11.125591 systemd[1]: Mounting sysroot.mount - /sysroot...
May 12 23:50:11.166481 kernel: EXT4-fs (vda9): mounted filesystem 267e1a87-2243-4e28-a518-ba9876b017ec r/w with ordered data mode. Quota mode: none.
May 12 23:50:11.166569 systemd[1]: Mounted sysroot.mount - /sysroot.
May 12 23:50:11.167862 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 12 23:50:11.184560 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 23:50:11.186415 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 12 23:50:11.187714 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 12 23:50:11.187809 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 12 23:50:11.187892 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 23:50:11.194266 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
May 12 23:50:11.194173 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 12 23:50:11.196020 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 12 23:50:11.199576 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 12 23:50:11.199596 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 23:50:11.199605 kernel: BTRFS info (device vda6): using free space tree
May 12 23:50:11.201478 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 23:50:11.203238 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 23:50:11.251771 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
May 12 23:50:11.254906 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
May 12 23:50:11.257959 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
May 12 23:50:11.261048 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
May 12 23:50:11.334863 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 12 23:50:11.347565 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 12 23:50:11.348990 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 12 23:50:11.354490 kernel: BTRFS info (device vda6): last unmount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 12 23:50:11.371531 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 12 23:50:11.380737 ignition[917]: INFO : Ignition 2.20.0
May 12 23:50:11.380737 ignition[917]: INFO : Stage: mount
May 12 23:50:11.382021 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 23:50:11.382021 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:50:11.382021 ignition[917]: INFO : mount: mount passed
May 12 23:50:11.382021 ignition[917]: INFO : Ignition finished successfully
May 12 23:50:11.383718 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 12 23:50:11.395597 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 12 23:50:11.823355 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 12 23:50:11.833615 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 23:50:11.839483 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
May 12 23:50:11.839518 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 12 23:50:11.840937 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 23:50:11.840960 kernel: BTRFS info (device vda6): using free space tree
May 12 23:50:11.843475 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 23:50:11.844301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 23:50:11.860120 ignition[946]: INFO : Ignition 2.20.0
May 12 23:50:11.860120 ignition[946]: INFO : Stage: files
May 12 23:50:11.861372 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 23:50:11.861372 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:50:11.861372 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
May 12 23:50:11.863956 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 12 23:50:11.863956 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 12 23:50:11.866461 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 12 23:50:11.867444 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 12 23:50:11.867444 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 12 23:50:11.867003 unknown[946]: wrote ssh authorized keys file for user: core
May 12 23:50:11.870187 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 12 23:50:11.870187 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 12 23:50:11.870187 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 12 23:50:11.870187 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 12 23:50:11.909181 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 12 23:50:12.033756 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 12 23:50:12.033756 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 12 23:50:12.036859 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 12 23:50:12.361649 systemd-networkd[762]: eth0: Gained IPv6LL
May 12 23:50:12.383845 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 12 23:50:12.857155 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 12 23:50:12.857155 ignition[946]: INFO : files: op(c): [started] processing unit "containerd.service"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 12 23:50:12.859894 ignition[946]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
May 12 23:50:12.886846 ignition[946]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 12 23:50:12.890947 ignition[946]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 12 23:50:12.892190 ignition[946]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
May 12 23:50:12.892190 ignition[946]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
May 12 23:50:12.892190 ignition[946]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
May 12 23:50:12.892190 ignition[946]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 12 23:50:12.892190 ignition[946]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 12 23:50:12.892190 ignition[946]: INFO : files: files passed
May 12 23:50:12.892190 ignition[946]: INFO : Ignition finished successfully
May 12 23:50:12.894529 systemd[1]: Finished ignition-files.service - Ignition (files).
May 12 23:50:12.901630 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 12 23:50:12.904629 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 12 23:50:12.905996 systemd[1]: ignition-quench.service: Deactivated successfully.
May 12 23:50:12.906083 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 12 23:50:12.912011 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
May 12 23:50:12.917792 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 23:50:12.917792 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 12 23:50:12.920200 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 23:50:12.920154 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 23:50:12.921677 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 12 23:50:12.936679 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 12 23:50:12.958562 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 12 23:50:12.959417 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 12 23:50:12.961616 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 12 23:50:12.963317 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 12 23:50:12.965270 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 12 23:50:12.967227 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 12 23:50:12.984839 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 23:50:12.998679 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 12 23:50:13.007209 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 12 23:50:13.008208 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 23:50:13.010083 systemd[1]: Stopped target timers.target - Timer Units.
May 12 23:50:13.011746 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 12 23:50:13.011894 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 23:50:13.014320 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 12 23:50:13.016216 systemd[1]: Stopped target basic.target - Basic System.
May 12 23:50:13.017795 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 12 23:50:13.019337 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 23:50:13.021207 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 12 23:50:13.023058 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 12 23:50:13.024697 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 23:50:13.026414 systemd[1]: Stopped target sysinit.target - System Initialization.
May 12 23:50:13.028304 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 12 23:50:13.029960 systemd[1]: Stopped target swap.target - Swaps.
May 12 23:50:13.031356 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 12 23:50:13.031493 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 12 23:50:13.033654 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 12 23:50:13.035329 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 23:50:13.037020 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 12 23:50:13.040536 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 23:50:13.041481 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 12 23:50:13.041603 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 12 23:50:13.044448 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 12 23:50:13.044573 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 23:50:13.046503 systemd[1]: Stopped target paths.target - Path Units.
May 12 23:50:13.048049 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 12 23:50:13.048190 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 23:50:13.050146 systemd[1]: Stopped target slices.target - Slice Units.
May 12 23:50:13.051518 systemd[1]: Stopped target sockets.target - Socket Units.
May 12 23:50:13.053059 systemd[1]: iscsid.socket: Deactivated successfully.
May 12 23:50:13.053152 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 12 23:50:13.054997 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 12 23:50:13.055074 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 23:50:13.056386 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 12 23:50:13.056508 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 23:50:13.058041 systemd[1]: ignition-files.service: Deactivated successfully.
May 12 23:50:13.058145 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 12 23:50:13.070699 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 12 23:50:13.071481 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 12 23:50:13.071613 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 23:50:13.074245 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 12 23:50:13.074973 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 12 23:50:13.075086 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 23:50:13.077060 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 12 23:50:13.077166 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 23:50:13.082262 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 12 23:50:13.085382 ignition[1002]: INFO : Ignition 2.20.0
May 12 23:50:13.085382 ignition[1002]: INFO : Stage: umount
May 12 23:50:13.085382 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 23:50:13.085382 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:50:13.085382 ignition[1002]: INFO : umount: umount passed
May 12 23:50:13.085382 ignition[1002]: INFO : Ignition finished successfully
May 12 23:50:13.082349 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 12 23:50:13.087756 systemd[1]: ignition-mount.service: Deactivated successfully.
May 12 23:50:13.087933 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 12 23:50:13.089068 systemd[1]: Stopped target network.target - Network.
May 12 23:50:13.091329 systemd[1]: ignition-disks.service: Deactivated successfully.
May 12 23:50:13.091398 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 12 23:50:13.092907 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 12 23:50:13.092955 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 12 23:50:13.094311 systemd[1]: ignition-setup.service: Deactivated successfully.
May 12 23:50:13.094350 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 12 23:50:13.096262 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 12 23:50:13.096302 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 12 23:50:13.098122 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 12 23:50:13.099597 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 12 23:50:13.101700 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 12 23:50:13.105491 systemd-networkd[762]: eth0: DHCPv6 lease lost
May 12 23:50:13.106764 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 12 23:50:13.107557 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 12 23:50:13.109647 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 12 23:50:13.109767 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 12 23:50:13.111770 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 12 23:50:13.111834 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 12 23:50:13.121632 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 12 23:50:13.122345 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 12 23:50:13.122407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 12 23:50:13.124242 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 12 23:50:13.124287 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 12 23:50:13.125978 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 12 23:50:13.126024 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 12 23:50:13.127917 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 12 23:50:13.127956 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 23:50:13.129954 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 23:50:13.143091 systemd[1]: network-cleanup.service: Deactivated successfully.
May 12 23:50:13.143230 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 12 23:50:13.144826 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 12 23:50:13.144922 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 12 23:50:13.146586 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 12 23:50:13.146647 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 12 23:50:13.148494 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 12 23:50:13.148680 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 23:50:13.151024 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 12 23:50:13.151081 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 12 23:50:13.152835 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 12 23:50:13.152875 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 23:50:13.154677 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 12 23:50:13.154767 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 12 23:50:13.156890 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 12 23:50:13.156940 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 12 23:50:13.159173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 12 23:50:13.159263 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 23:50:13.170711 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 12 23:50:13.171859 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 12 23:50:13.171933 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 23:50:13.173715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 12 23:50:13.173764 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 12 23:50:13.179009 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 12 23:50:13.179865 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 12 23:50:13.180957 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 12 23:50:13.183057 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 12 23:50:13.197122 systemd[1]: Switching root.
May 12 23:50:13.223481 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
May 12 23:50:13.223534 systemd-journald[239]: Journal stopped
May 12 23:50:13.995975 kernel: SELinux: policy capability network_peer_controls=1
May 12 23:50:13.996028 kernel: SELinux: policy capability open_perms=1
May 12 23:50:13.996049 kernel: SELinux: policy capability extended_socket_class=1
May 12 23:50:13.996065 kernel: SELinux: policy capability always_check_network=0
May 12 23:50:13.996075 kernel: SELinux: policy capability cgroup_seclabel=1
May 12 23:50:13.996084 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 12 23:50:13.996094 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 12 23:50:13.996105 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 12 23:50:13.996115 kernel: audit: type=1403 audit(1747093813.442:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 12 23:50:13.996133 systemd[1]: Successfully loaded SELinux policy in 40.950ms.
May 12 23:50:13.996154 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.255ms.
May 12 23:50:13.996166 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 12 23:50:13.996178 systemd[1]: Detected virtualization kvm.
May 12 23:50:13.996189 systemd[1]: Detected architecture arm64.
May 12 23:50:13.996199 systemd[1]: Detected first boot.
May 12 23:50:13.996209 systemd[1]: Initializing machine ID from VM UUID.
May 12 23:50:13.996219 zram_generator::config[1067]: No configuration found.
May 12 23:50:13.996231 systemd[1]: Populated /etc with preset unit settings.
May 12 23:50:13.996244 systemd[1]: Queued start job for default target multi-user.target.
May 12 23:50:13.996255 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 12 23:50:13.996266 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 12 23:50:13.996277 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 12 23:50:13.996287 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 12 23:50:13.996297 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 12 23:50:13.996308 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 12 23:50:13.996318 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 12 23:50:13.996330 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 12 23:50:13.996341 systemd[1]: Created slice user.slice - User and Session Slice.
May 12 23:50:13.996352 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 23:50:13.996362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 23:50:13.996373 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 12 23:50:13.996383 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 12 23:50:13.996394 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 12 23:50:13.996404 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 23:50:13.996414 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 12 23:50:13.996426 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 23:50:13.996437 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 12 23:50:13.996447 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 23:50:13.996472 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 12 23:50:13.996485 systemd[1]: Reached target slices.target - Slice Units.
May 12 23:50:13.996495 systemd[1]: Reached target swap.target - Swaps.
May 12 23:50:13.996505 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 12 23:50:13.996516 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 12 23:50:13.996528 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 12 23:50:13.996538 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 12 23:50:13.996548 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 12 23:50:13.996559 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 12 23:50:13.996569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 23:50:13.996579 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 12 23:50:13.996589 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 12 23:50:13.996599 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 12 23:50:13.996610 systemd[1]: Mounting media.mount - External Media Directory...
May 12 23:50:13.996623 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 12 23:50:13.996635 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 12 23:50:13.996645 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 12 23:50:13.996655 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 12 23:50:13.996666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 23:50:13.996676 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 12 23:50:13.996686 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 12 23:50:13.996698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 23:50:13.996708 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 12 23:50:13.996720 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 23:50:13.996730 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 12 23:50:13.996740 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 23:50:13.996751 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 12 23:50:13.996761 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 12 23:50:13.996772 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 12 23:50:13.996782 systemd[1]: Starting systemd-journald.service - Journal Service...
May 12 23:50:13.996796 kernel: fuse: init (API version 7.39)
May 12 23:50:13.996807 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 12 23:50:13.996819 kernel: ACPI: bus type drm_connector registered
May 12 23:50:13.996829 kernel: loop: module loaded
May 12 23:50:13.996839 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 12 23:50:13.996849 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 12 23:50:13.996860 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 12 23:50:13.996870 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 12 23:50:13.996880 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 12 23:50:13.996890 systemd[1]: Mounted media.mount - External Media Directory.
May 12 23:50:13.996900 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 12 23:50:13.996930 systemd-journald[1147]: Collecting audit messages is disabled.
May 12 23:50:13.996953 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 12 23:50:13.996968 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 12 23:50:13.996979 systemd-journald[1147]: Journal started
May 12 23:50:13.996999 systemd-journald[1147]: Runtime Journal (/run/log/journal/86d08b4d70034b969e9baafb3c748601) is 5.9M, max 47.3M, 41.4M free.
May 12 23:50:13.999482 systemd[1]: Started systemd-journald.service - Journal Service.
May 12 23:50:14.000537 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 12 23:50:14.001708 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 23:50:14.002890 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 12 23:50:14.003047 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 12 23:50:14.004218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 23:50:14.004375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 23:50:14.005835 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 12 23:50:14.005990 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 12 23:50:14.007259 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 23:50:14.007421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 23:50:14.008587 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 12 23:50:14.008739 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 12 23:50:14.009766 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 23:50:14.009994 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 23:50:14.011140 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 12 23:50:14.012326 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 12 23:50:14.013763 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 12 23:50:14.025365 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 12 23:50:14.031551 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 12 23:50:14.033449 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 12 23:50:14.034252 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 12 23:50:14.039627 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 12 23:50:14.041960 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 12 23:50:14.043152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 12 23:50:14.044309 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 12 23:50:14.045513 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 12 23:50:14.050148 systemd-journald[1147]: Time spent on flushing to /var/log/journal/86d08b4d70034b969e9baafb3c748601 is 15.062ms for 843 entries.
May 12 23:50:14.050148 systemd-journald[1147]: System Journal (/var/log/journal/86d08b4d70034b969e9baafb3c748601) is 8.0M, max 195.6M, 187.6M free.
May 12 23:50:14.073153 systemd-journald[1147]: Received client request to flush runtime journal.
May 12 23:50:14.049632 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 12 23:50:14.052841 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 12 23:50:14.055557 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 23:50:14.057008 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 12 23:50:14.058305 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 12 23:50:14.062608 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 12 23:50:14.065300 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 12 23:50:14.068667 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 12 23:50:14.076308 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 12 23:50:14.082979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 12 23:50:14.084271 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 12 23:50:14.084285 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 12 23:50:14.086344 udevadm[1207]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 12 23:50:14.087857 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 12 23:50:14.095585 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 12 23:50:14.113169 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 12 23:50:14.119580 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 12 23:50:14.132208 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
May 12 23:50:14.132228 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
May 12 23:50:14.135717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 23:50:14.490198 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 12 23:50:14.499866 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 23:50:14.519224 systemd-udevd[1227]: Using default interface naming scheme 'v255'.
May 12 23:50:14.532303 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 23:50:14.543913 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 12 23:50:14.567974 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
May 12 23:50:14.581478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1233)
May 12 23:50:14.585799 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 12 23:50:14.622870 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 12 23:50:14.630111 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 12 23:50:14.655555 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 23:50:14.666437 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 12 23:50:14.669137 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 12 23:50:14.699300 systemd-networkd[1237]: lo: Link UP
May 12 23:50:14.699307 systemd-networkd[1237]: lo: Gained carrier
May 12 23:50:14.700145 systemd-networkd[1237]: Enumeration completed
May 12 23:50:14.700282 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 12 23:50:14.703816 lvm[1263]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 12 23:50:14.705612 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 23:50:14.705616 systemd-networkd[1237]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 12 23:50:14.706387 systemd-networkd[1237]: eth0: Link UP
May 12 23:50:14.706390 systemd-networkd[1237]: eth0: Gained carrier
May 12 23:50:14.706403 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 23:50:14.706625 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 12 23:50:14.709428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 23:50:14.729531 systemd-networkd[1237]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 12 23:50:14.738988 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 12 23:50:14.740138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 12 23:50:14.749600 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 12 23:50:14.754337 lvm[1273]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 12 23:50:14.794960 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 12 23:50:14.796113 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 12 23:50:14.797059 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 12 23:50:14.797087 systemd[1]: Reached target local-fs.target - Local File Systems.
May 12 23:50:14.797857 systemd[1]: Reached target machines.target - Containers.
May 12 23:50:14.799543 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 12 23:50:14.814653 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 12 23:50:14.816827 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 12 23:50:14.817646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 23:50:14.818567 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 12 23:50:14.820597 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 12 23:50:14.824663 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 12 23:50:14.826502 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 12 23:50:14.830693 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 12 23:50:14.838532 kernel: loop0: detected capacity change from 0 to 194096
May 12 23:50:14.841043 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 12 23:50:14.841773 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 12 23:50:14.851493 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 12 23:50:14.883577 kernel: loop1: detected capacity change from 0 to 113536
May 12 23:50:14.922486 kernel: loop2: detected capacity change from 0 to 116808
May 12 23:50:14.972486 kernel: loop3: detected capacity change from 0 to 194096
May 12 23:50:14.978490 kernel: loop4: detected capacity change from 0 to 113536
May 12 23:50:14.984506 kernel: loop5: detected capacity change from 0 to 116808
May 12 23:50:14.991420 (sd-merge)[1296]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 12 23:50:14.991841 (sd-merge)[1296]: Merged extensions into '/usr'.
May 12 23:50:14.995235 systemd[1]: Reloading requested from client PID 1281 ('systemd-sysext') (unit systemd-sysext.service)...
May 12 23:50:14.995253 systemd[1]: Reloading...
May 12 23:50:15.034587 zram_generator::config[1326]: No configuration found.
May 12 23:50:15.067047 ldconfig[1278]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 12 23:50:15.139447 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 23:50:15.183069 systemd[1]: Reloading finished in 187 ms.
May 12 23:50:15.198440 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 12 23:50:15.199634 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 12 23:50:15.213641 systemd[1]: Starting ensure-sysext.service...
May 12 23:50:15.215759 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 12 23:50:15.221089 systemd[1]: Reloading requested from client PID 1366 ('systemctl') (unit ensure-sysext.service)...
May 12 23:50:15.221106 systemd[1]: Reloading...
May 12 23:50:15.234824 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 12 23:50:15.235091 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 12 23:50:15.235736 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 12 23:50:15.235962 systemd-tmpfiles[1367]: ACLs are not supported, ignoring.
May 12 23:50:15.236029 systemd-tmpfiles[1367]: ACLs are not supported, ignoring.
May 12 23:50:15.238246 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot.
May 12 23:50:15.238259 systemd-tmpfiles[1367]: Skipping /boot
May 12 23:50:15.244916 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot.
May 12 23:50:15.244931 systemd-tmpfiles[1367]: Skipping /boot
May 12 23:50:15.263935 zram_generator::config[1393]: No configuration found.
May 12 23:50:15.353255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 23:50:15.396651 systemd[1]: Reloading finished in 175 ms.
May 12 23:50:15.413342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 23:50:15.432931 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 12 23:50:15.435205 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 12 23:50:15.437535 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 12 23:50:15.441710 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 12 23:50:15.445694 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 12 23:50:15.448343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 23:50:15.454707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 23:50:15.456696 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 23:50:15.464985 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 23:50:15.465989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 23:50:15.467412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 23:50:15.467574 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 23:50:15.470833 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 23:50:15.470972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 23:50:15.472259 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 12 23:50:15.474031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 23:50:15.474234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 23:50:15.482100 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 23:50:15.486696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 23:50:15.489728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 23:50:15.493720 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 23:50:15.494547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 23:50:15.499390 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 12 23:50:15.501274 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 12 23:50:15.502861 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 12 23:50:15.504237 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 23:50:15.504376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 23:50:15.505994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 23:50:15.506132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 23:50:15.507633 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 23:50:15.507820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 23:50:15.513450 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 12 23:50:15.516594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 23:50:15.521722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 23:50:15.524250 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 12 23:50:15.528697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 23:50:15.530768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 23:50:15.531740 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 23:50:15.532038 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 12 23:50:15.533219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 23:50:15.533689 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 23:50:15.534575 augenrules[1495]: No rules
May 12 23:50:15.534949 systemd-resolved[1441]: Positive Trust Anchors:
May 12 23:50:15.535023 systemd-resolved[1441]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 12 23:50:15.535054 systemd-resolved[1441]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 12 23:50:15.535651 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 12 23:50:15.535805 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 12 23:50:15.537716 systemd[1]: audit-rules.service: Deactivated successfully.
May 12 23:50:15.538162 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 12 23:50:15.539447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 23:50:15.539608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 23:50:15.541132 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 23:50:15.542430 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 23:50:15.542519 systemd-resolved[1441]: Defaulting to hostname 'linux'.
May 12 23:50:15.545919 systemd[1]: Finished ensure-sysext.service.
May 12 23:50:15.546811 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 12 23:50:15.550954 systemd[1]: Reached target network.target - Network.
May 12 23:50:15.551649 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 12 23:50:15.552484 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 12 23:50:15.552548 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 12 23:50:15.566643 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 12 23:50:15.608428 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 12 23:50:15.195856 systemd-resolved[1441]: Clock change detected. Flushing caches.
May 12 23:50:15.208337 systemd-journald[1147]: Time jumped backwards, rotating.
May 12 23:50:15.195891 systemd-timesyncd[1514]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 12 23:50:15.195937 systemd-timesyncd[1514]: Initial clock synchronization to Mon 2025-05-12 23:50:15.195798 UTC.
May 12 23:50:15.197566 systemd[1]: Reached target sysinit.target - System Initialization.
May 12 23:50:15.198435 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 12 23:50:15.199600 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 12 23:50:15.201443 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 12 23:50:15.202604 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 12 23:50:15.202627 systemd[1]: Reached target paths.target - Path Units.
May 12 23:50:15.203387 systemd[1]: Reached target time-set.target - System Time Set.
May 12 23:50:15.204473 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 12 23:50:15.205609 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 12 23:50:15.206666 systemd[1]: Reached target timers.target - Timer Units.
May 12 23:50:15.208308 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 12 23:50:15.211069 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 12 23:50:15.214156 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 12 23:50:15.219309 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 12 23:50:15.220402 systemd[1]: Reached target sockets.target - Socket Units.
May 12 23:50:15.221387 systemd[1]: Reached target basic.target - Basic System.
May 12 23:50:15.222462 systemd[1]: System is tainted: cgroupsv1
May 12 23:50:15.222523 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 12 23:50:15.222546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 12 23:50:15.223866 systemd[1]: Starting containerd.service - containerd container runtime...
May 12 23:50:15.226159 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 12 23:50:15.228265 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 12 23:50:15.231439 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 12 23:50:15.234257 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 12 23:50:15.235478 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 12 23:50:15.239053 jq[1521]: false
May 12 23:50:15.242435 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 12 23:50:15.247687 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 12 23:50:15.253143 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 12 23:50:15.254681 extend-filesystems[1523]: Found loop3
May 12 23:50:15.254681 extend-filesystems[1523]: Found loop4
May 12 23:50:15.254681 extend-filesystems[1523]: Found loop5
May 12 23:50:15.254681 extend-filesystems[1523]: Found vda
May 12 23:50:15.254681 extend-filesystems[1523]: Found vda1
May 12 23:50:15.254681 extend-filesystems[1523]: Found vda2
May 12 23:50:15.254681 extend-filesystems[1523]: Found vda3
May 12 23:50:15.254681 extend-filesystems[1523]: Found usr
May 12 23:50:15.254681 extend-filesystems[1523]: Found vda4
May 12 23:50:15.254681 extend-filesystems[1523]: Found vda6
May 12 23:50:15.254681 extend-filesystems[1523]: Found vda7
May 12 23:50:15.254681 extend-filesystems[1523]: Found vda9
May 12 23:50:15.254681 extend-filesystems[1523]: Checking size of /dev/vda9
May 12 23:50:15.257716 systemd[1]: Starting systemd-logind.service - User Login Management...
May 12 23:50:15.262928 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 12 23:50:15.267433 systemd[1]: Starting update-engine.service - Update Engine...
May 12 23:50:15.269124 extend-filesystems[1523]: Resized partition /dev/vda9
May 12 23:50:15.271742 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1235)
May 12 23:50:15.273313 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 12 23:50:15.273654 dbus-daemon[1520]: [system] SELinux support is enabled
May 12 23:50:15.274908 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 12 23:50:15.278875 extend-filesystems[1546]: resize2fs 1.47.1 (20-May-2024)
May 12 23:50:15.283562 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 12 23:50:15.283818 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 12 23:50:15.284071 systemd[1]: motdgen.service: Deactivated successfully.
May 12 23:50:15.284308 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 12 23:50:15.287279 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 12 23:50:15.287401 jq[1548]: true
May 12 23:50:15.287527 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 12 23:50:15.291256 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 12 23:50:15.311795 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 12 23:50:15.320770 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 12 23:50:15.328896 jq[1555]: true
May 12 23:50:15.329094 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 12 23:50:15.329118 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 12 23:50:15.334569 extend-filesystems[1546]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 12 23:50:15.334569 extend-filesystems[1546]: old_desc_blocks = 1, new_desc_blocks = 1
May 12 23:50:15.334569 extend-filesystems[1546]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 12 23:50:15.337822 tar[1551]: linux-arm64/helm
May 12 23:50:15.332427 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 12 23:50:15.343540 update_engine[1543]: I20250512 23:50:15.335537 1543 main.cc:92] Flatcar Update Engine starting
May 12 23:50:15.343540 update_engine[1543]: I20250512 23:50:15.339461 1543 update_check_scheduler.cc:74] Next update check in 8m20s
May 12 23:50:15.343759 extend-filesystems[1523]: Resized filesystem in /dev/vda9
May 12 23:50:15.332445 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 12 23:50:15.343765 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 12 23:50:15.344021 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 12 23:50:15.346587 systemd[1]: Started update-engine.service - Update Engine.
May 12 23:50:15.350907 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 12 23:50:15.352401 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 12 23:50:15.377516 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (Power Button)
May 12 23:50:15.384268 systemd-logind[1539]: New seat seat0.
May 12 23:50:15.394618 systemd[1]: Started systemd-logind.service - User Login Management.
May 12 23:50:15.413067 bash[1584]: Updated "/home/core/.ssh/authorized_keys"
May 12 23:50:15.417862 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 12 23:50:15.421019 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 12 23:50:15.456153 locksmithd[1571]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 12 23:50:15.468314 systemd-networkd[1237]: eth0: Gained IPv6LL
May 12 23:50:15.472657 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 12 23:50:15.474827 systemd[1]: Reached target network-online.target - Network is Online.
May 12 23:50:15.482709 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 12 23:50:15.486280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 23:50:15.489463 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 12 23:50:15.512040 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 12 23:50:15.512495 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 12 23:50:15.514467 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 12 23:50:15.550053 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 12 23:50:15.575261 containerd[1559]: time="2025-05-12T23:50:15.575119176Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 12 23:50:15.605204 containerd[1559]: time="2025-05-12T23:50:15.605132296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 12 23:50:15.607255 containerd[1559]: time="2025-05-12T23:50:15.606811016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 12 23:50:15.607255 containerd[1559]: time="2025-05-12T23:50:15.606843776Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 12 23:50:15.607255 containerd[1559]: time="2025-05-12T23:50:15.606864856Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 12 23:50:15.607255 containerd[1559]: time="2025-05-12T23:50:15.607021456Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 12 23:50:15.607255 containerd[1559]: time="2025-05-12T23:50:15.607044296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 12 23:50:15.607255 containerd[1559]: time="2025-05-12T23:50:15.607101856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 12 23:50:15.607255 containerd[1559]: time="2025-05-12T23:50:15.607117736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 12 23:50:15.607586 containerd[1559]: time="2025-05-12T23:50:15.607558096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 12 23:50:15.607699 containerd[1559]: time="2025-05-12T23:50:15.607671576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 12 23:50:15.607781 containerd[1559]: time="2025-05-12T23:50:15.607765536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 12 23:50:15.607827 containerd[1559]: time="2025-05-12T23:50:15.607816696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 12 23:50:15.607957 containerd[1559]: time="2025-05-12T23:50:15.607940816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 12 23:50:15.608254 containerd[1559]: time="2025-05-12T23:50:15.608232176Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 12 23:50:15.608458 containerd[1559]: time="2025-05-12T23:50:15.608438136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 12 23:50:15.608528 containerd[1559]: time="2025-05-12T23:50:15.608512656Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 12 23:50:15.608660 containerd[1559]: time="2025-05-12T23:50:15.608643816Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 12 23:50:15.608750 containerd[1559]: time="2025-05-12T23:50:15.608736576Z" level=info msg="metadata content store policy set" policy=shared
May 12 23:50:15.611935 containerd[1559]: time="2025-05-12T23:50:15.611910976Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 12 23:50:15.612054 containerd[1559]: time="2025-05-12T23:50:15.612038416Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 12 23:50:15.612107 containerd[1559]: time="2025-05-12T23:50:15.612096576Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 12 23:50:15.612159 containerd[1559]: time="2025-05-12T23:50:15.612148496Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612281736Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612418376Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612743576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612843896Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612860696Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612878296Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612890456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612901936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612913416Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612925736Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612939056Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612951616Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612963896Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 12 23:50:15.613213 containerd[1559]: time="2025-05-12T23:50:15.612974616Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.612995456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613009656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613024856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613039616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613050656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613063456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613078536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613090816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613107096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613121096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613132456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613146656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613473 containerd[1559]: time="2025-05-12T23:50:15.613158576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613753 containerd[1559]: time="2025-05-12T23:50:15.613730936Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 12 23:50:15.613821 containerd[1559]: time="2025-05-12T23:50:15.613808016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613881 containerd[1559]: time="2025-05-12T23:50:15.613868216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 12 23:50:15.613929 containerd[1559]: time="2025-05-12T23:50:15.613918456Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 12 23:50:15.614143 containerd[1559]: time="2025-05-12T23:50:15.614130696Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 12 23:50:15.614231 containerd[1559]: time="2025-05-12T23:50:15.614213896Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 12 23:50:15.614280 containerd[1559]: time="2025-05-12T23:50:15.614268856Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 12 23:50:15.614386 containerd[1559]: time="2025-05-12T23:50:15.614371976Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 12 23:50:15.614435 containerd[1559]: time="2025-05-12T23:50:15.614423576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 12 23:50:15.614490 containerd[1559]: time="2025-05-12T23:50:15.614478976Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 12 23:50:15.614548 containerd[1559]: time="2025-05-12T23:50:15.614536216Z" level=info msg="NRI interface is disabled by configuration."
May 12 23:50:15.614612 containerd[1559]: time="2025-05-12T23:50:15.614598976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 12 23:50:15.615012 containerd[1559]: time="2025-05-12T23:50:15.614962096Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 12 23:50:15.615171 containerd[1559]: time="2025-05-12T23:50:15.615155136Z" level=info msg="Connect containerd service"
May 12 23:50:15.615281 containerd[1559]: time="2025-05-12T23:50:15.615267216Z" level=info msg="using legacy CRI server"
May 12 23:50:15.615326 containerd[1559]: time="2025-05-12T23:50:15.615314536Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 12 23:50:15.615624 containerd[1559]: time="2025-05-12T23:50:15.615602176Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 12 23:50:15.616294 containerd[1559]: time="2025-05-12T23:50:15.616223336Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 12 23:50:15.616876 containerd[1559]: time="2025-05-12T23:50:15.616855016Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 12 23:50:15.617022 containerd[1559]: time="2025-05-12T23:50:15.616997576Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 12 23:50:15.617118 containerd[1559]: time="2025-05-12T23:50:15.617004096Z" level=info msg="Start subscribing containerd event"
May 12 23:50:15.617148 containerd[1559]: time="2025-05-12T23:50:15.617132896Z" level=info msg="Start recovering state"
May 12 23:50:15.617224 containerd[1559]: time="2025-05-12T23:50:15.617212416Z" level=info msg="Start event monitor"
May 12 23:50:15.617317 containerd[1559]: time="2025-05-12T23:50:15.617303536Z" level=info msg="Start snapshots syncer"
May 12 23:50:15.617343 containerd[1559]: time="2025-05-12T23:50:15.617318736Z" level=info msg="Start cni network conf syncer for default"
May 12 23:50:15.617343 containerd[1559]: time="2025-05-12T23:50:15.617326576Z" level=info msg="Start streaming server"
May 12 23:50:15.619324 systemd[1]: Started containerd.service - containerd container runtime.
May 12 23:50:15.620171 containerd[1559]: time="2025-05-12T23:50:15.619362696Z" level=info msg="containerd successfully booted in 0.047194s"
May 12 23:50:15.745386 tar[1551]: linux-arm64/LICENSE
May 12 23:50:15.745590 tar[1551]: linux-arm64/README.md
May 12 23:50:15.758295 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 12 23:50:16.045074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 23:50:16.050076 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 12 23:50:16.103289 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 12 23:50:16.122701 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 12 23:50:16.137439 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 12 23:50:16.144540 systemd[1]: issuegen.service: Deactivated successfully.
May 12 23:50:16.144790 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 12 23:50:16.148562 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 12 23:50:16.162399 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 12 23:50:16.165088 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 12 23:50:16.167204 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 12 23:50:16.168311 systemd[1]: Reached target getty.target - Login Prompts.
May 12 23:50:16.169126 systemd[1]: Reached target multi-user.target - Multi-User System.
May 12 23:50:16.170387 systemd[1]: Startup finished in 5.292s (kernel) + 3.185s (userspace) = 8.478s.
May 12 23:50:16.594708 kubelet[1637]: E0512 23:50:16.594667 1637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 12 23:50:16.597453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 12 23:50:16.597691 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 12 23:50:21.203216 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 12 23:50:21.217384 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:43456.service - OpenSSH per-connection server daemon (10.0.0.1:43456).
May 12 23:50:21.275503 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 43456 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:50:21.277135 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:50:21.285392 systemd-logind[1539]: New session 1 of user core.
May 12 23:50:21.286362 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 12 23:50:21.298381 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 12 23:50:21.307327 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 12 23:50:21.309589 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 12 23:50:21.316010 (systemd)[1678]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 12 23:50:21.389744 systemd[1678]: Queued start job for default target default.target.
May 12 23:50:21.390387 systemd[1678]: Created slice app.slice - User Application Slice.
May 12 23:50:21.390522 systemd[1678]: Reached target paths.target - Paths.
May 12 23:50:21.390537 systemd[1678]: Reached target timers.target - Timers.
May 12 23:50:21.404267 systemd[1678]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 12 23:50:21.409571 systemd[1678]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 12 23:50:21.409623 systemd[1678]: Reached target sockets.target - Sockets.
May 12 23:50:21.409635 systemd[1678]: Reached target basic.target - Basic System.
May 12 23:50:21.409667 systemd[1678]: Reached target default.target - Main User Target.
May 12 23:50:21.409689 systemd[1678]: Startup finished in 88ms.
May 12 23:50:21.409981 systemd[1]: Started user@500.service - User Manager for UID 500.
May 12 23:50:21.411504 systemd[1]: Started session-1.scope - Session 1 of User core.
May 12 23:50:21.466417 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:43458.service - OpenSSH per-connection server daemon (10.0.0.1:43458).
May 12 23:50:21.502849 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 43458 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:50:21.503994 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:50:21.508190 systemd-logind[1539]: New session 2 of user core.
May 12 23:50:21.518406 systemd[1]: Started session-2.scope - Session 2 of User core.
May 12 23:50:21.569213 sshd[1693]: Connection closed by 10.0.0.1 port 43458
May 12 23:50:21.569384 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
May 12 23:50:21.589510 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:43470.service - OpenSSH per-connection server daemon (10.0.0.1:43470).
May 12 23:50:21.589881 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:43458.service: Deactivated successfully.
May 12 23:50:21.592264 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit.
May 12 23:50:21.592305 systemd[1]: session-2.scope: Deactivated successfully.
May 12 23:50:21.593569 systemd-logind[1539]: Removed session 2.
May 12 23:50:21.625155 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 43470 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:50:21.626264 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:50:21.630419 systemd-logind[1539]: New session 3 of user core.
May 12 23:50:21.642528 systemd[1]: Started session-3.scope - Session 3 of User core.
May 12 23:50:21.691246 sshd[1701]: Connection closed by 10.0.0.1 port 43470
May 12 23:50:21.691756 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
May 12 23:50:21.704451 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:43482.service - OpenSSH per-connection server daemon (10.0.0.1:43482).
May 12 23:50:21.704847 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:43470.service: Deactivated successfully.
May 12 23:50:21.706917 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit.
May 12 23:50:21.707337 systemd[1]: session-3.scope: Deactivated successfully.
May 12 23:50:21.709196 systemd-logind[1539]: Removed session 3.
May 12 23:50:21.743549 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 43482 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:50:21.744794 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:50:21.748685 systemd-logind[1539]: New session 4 of user core.
May 12 23:50:21.760453 systemd[1]: Started session-4.scope - Session 4 of User core.
May 12 23:50:21.816861 sshd[1709]: Connection closed by 10.0.0.1 port 43482
May 12 23:50:21.817200 sshd-session[1703]: pam_unix(sshd:session): session closed for user core
May 12 23:50:21.832407 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:43496.service - OpenSSH per-connection server daemon (10.0.0.1:43496).
May 12 23:50:21.832779 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:43482.service: Deactivated successfully.
May 12 23:50:21.835032 systemd[1]: session-4.scope: Deactivated successfully.
May 12 23:50:21.835659 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit.
May 12 23:50:21.836574 systemd-logind[1539]: Removed session 4.
May 12 23:50:21.868088 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 43496 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:50:21.869318 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:50:21.873243 systemd-logind[1539]: New session 5 of user core.
May 12 23:50:21.885459 systemd[1]: Started session-5.scope - Session 5 of User core.
May 12 23:50:21.944802 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 12 23:50:21.945059 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 12 23:50:22.279411 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 12 23:50:22.279598 (dockerd)[1739]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 12 23:50:22.548877 dockerd[1739]: time="2025-05-12T23:50:22.548745416Z" level=info msg="Starting up"
May 12 23:50:22.626054 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1155036916-merged.mount: Deactivated successfully.
May 12 23:50:22.792462 dockerd[1739]: time="2025-05-12T23:50:22.792162976Z" level=info msg="Loading containers: start."
May 12 23:50:22.929233 kernel: Initializing XFRM netlink socket
May 12 23:50:22.993829 systemd-networkd[1237]: docker0: Link UP
May 12 23:50:23.030530 dockerd[1739]: time="2025-05-12T23:50:23.030484656Z" level=info msg="Loading containers: done."
May 12 23:50:23.051684 dockerd[1739]: time="2025-05-12T23:50:23.051634976Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 12 23:50:23.051810 dockerd[1739]: time="2025-05-12T23:50:23.051721536Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
May 12 23:50:23.051835 dockerd[1739]: time="2025-05-12T23:50:23.051814496Z" level=info msg="Daemon has completed initialization"
May 12 23:50:23.079105 dockerd[1739]: time="2025-05-12T23:50:23.079047256Z" level=info msg="API listen on /run/docker.sock"
May 12 23:50:23.079237 systemd[1]: Started docker.service - Docker Application Container Engine.
May 12 23:50:23.724629 containerd[1559]: time="2025-05-12T23:50:23.724588016Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 12 23:50:24.438402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3609779804.mount: Deactivated successfully.
May 12 23:50:25.899161 containerd[1559]: time="2025-05-12T23:50:25.898996096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:25.899734 containerd[1559]: time="2025-05-12T23:50:25.899637016Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 12 23:50:25.900301 containerd[1559]: time="2025-05-12T23:50:25.900251896Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:25.903909 containerd[1559]: time="2025-05-12T23:50:25.903844936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:25.905061 containerd[1559]: time="2025-05-12T23:50:25.904884136Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.18025084s"
May 12 23:50:25.905061 containerd[1559]: time="2025-05-12T23:50:25.904924016Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 12 23:50:25.923590 containerd[1559]: time="2025-05-12T23:50:25.923548016Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 12 23:50:26.847947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 12 23:50:26.863352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 23:50:26.966348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 23:50:26.970199 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 12 23:50:27.009559 kubelet[2016]: E0512 23:50:27.009511 2016 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 12 23:50:27.012743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 12 23:50:27.012914 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 12 23:50:27.481743 containerd[1559]: time="2025-05-12T23:50:27.481695536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:27.482981 containerd[1559]: time="2025-05-12T23:50:27.482874216Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 12 23:50:27.485376 containerd[1559]: time="2025-05-12T23:50:27.485349536Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:27.488465 containerd[1559]: time="2025-05-12T23:50:27.488428896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:27.489196 containerd[1559]: time="2025-05-12T23:50:27.489166816Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.56557216s"
May 12 23:50:27.489338 containerd[1559]: time="2025-05-12T23:50:27.489255536Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 12 23:50:27.506651 containerd[1559]: time="2025-05-12T23:50:27.506619696Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 12 23:50:28.529751 containerd[1559]: time="2025-05-12T23:50:28.529706496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:28.530427 containerd[1559]: time="2025-05-12T23:50:28.530386136Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 12 23:50:28.531330 containerd[1559]: time="2025-05-12T23:50:28.531293816Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:28.534464 containerd[1559]: time="2025-05-12T23:50:28.534429416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:28.536111 containerd[1559]: time="2025-05-12T23:50:28.535927136Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.02927196s"
May 12 23:50:28.536111 containerd[1559]: time="2025-05-12T23:50:28.535955976Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 12 23:50:28.553298 containerd[1559]: time="2025-05-12T23:50:28.553260536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 12 23:50:29.568139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399403307.mount: Deactivated successfully.
May 12 23:50:29.894473 containerd[1559]: time="2025-05-12T23:50:29.894358376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:29.895488 containerd[1559]: time="2025-05-12T23:50:29.895413976Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 12 23:50:29.896330 containerd[1559]: time="2025-05-12T23:50:29.896269536Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:29.898551 containerd[1559]: time="2025-05-12T23:50:29.898504096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:29.899195 containerd[1559]: time="2025-05-12T23:50:29.899119416Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.34582s"
May 12 23:50:29.899195 containerd[1559]: time="2025-05-12T23:50:29.899150776Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 12 23:50:29.916607 containerd[1559]: time="2025-05-12T23:50:29.916561736Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 12 23:50:30.426679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698868703.mount: Deactivated successfully.
May 12 23:50:31.053066 containerd[1559]: time="2025-05-12T23:50:31.053009016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:31.053654 containerd[1559]: time="2025-05-12T23:50:31.053608216Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 12 23:50:31.054372 containerd[1559]: time="2025-05-12T23:50:31.054325336Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:31.057517 containerd[1559]: time="2025-05-12T23:50:31.057482016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:31.059788 containerd[1559]: time="2025-05-12T23:50:31.059743256Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.14314164s"
May 12 23:50:31.059788 containerd[1559]: time="2025-05-12T23:50:31.059778496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 12 23:50:31.077754 containerd[1559]: time="2025-05-12T23:50:31.077716496Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 12 23:50:31.537475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070965482.mount: Deactivated successfully.
May 12 23:50:31.541399 containerd[1559]: time="2025-05-12T23:50:31.541360096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:31.542539 containerd[1559]: time="2025-05-12T23:50:31.542502736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 12 23:50:31.543692 containerd[1559]: time="2025-05-12T23:50:31.543667736Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:31.547014 containerd[1559]: time="2025-05-12T23:50:31.546972696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:31.547693 containerd[1559]: time="2025-05-12T23:50:31.547514856Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 469.75952ms"
May 12 23:50:31.547693 containerd[1559]: time="2025-05-12T23:50:31.547542576Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 12 23:50:31.564872 containerd[1559]: time="2025-05-12T23:50:31.564831096Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 12 23:50:32.061818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525131579.mount: Deactivated successfully.
May 12 23:50:33.610031 containerd[1559]: time="2025-05-12T23:50:33.609974536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:33.611029 containerd[1559]: time="2025-05-12T23:50:33.610724096Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 12 23:50:33.611686 containerd[1559]: time="2025-05-12T23:50:33.611653016Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:33.615136 containerd[1559]: time="2025-05-12T23:50:33.615098776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:50:33.616579 containerd[1559]: time="2025-05-12T23:50:33.616543176Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.05167896s"
May 12 23:50:33.616579 containerd[1559]: time="2025-05-12T23:50:33.616579576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 12 23:50:37.233758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 12 23:50:37.244408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 23:50:37.436734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 23:50:37.440937 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 12 23:50:37.482956 kubelet[2253]: E0512 23:50:37.482888 2253 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 12 23:50:37.486690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 12 23:50:37.486980 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 12 23:50:38.502692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 23:50:38.516429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 23:50:38.537910 systemd[1]: Reloading requested from client PID 2271 ('systemctl') (unit session-5.scope)...
May 12 23:50:38.537924 systemd[1]: Reloading...
May 12 23:50:38.610495 zram_generator::config[2314]: No configuration found.
May 12 23:50:38.731720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 23:50:38.784327 systemd[1]: Reloading finished in 246 ms.
May 12 23:50:38.828814 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 12 23:50:38.828878 systemd[1]: kubelet.service: Failed with result 'signal'.
May 12 23:50:38.829132 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 23:50:38.831875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 23:50:38.934871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 23:50:38.940095 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 12 23:50:38.981954 kubelet[2368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 23:50:38.981954 kubelet[2368]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 12 23:50:38.981954 kubelet[2368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 23:50:38.982341 kubelet[2368]: I0512 23:50:38.982202 2368 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 12 23:50:40.402702 kubelet[2368]: I0512 23:50:40.402647 2368 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 12 23:50:40.402702 kubelet[2368]: I0512 23:50:40.402683 2368 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 12 23:50:40.403091 kubelet[2368]: I0512 23:50:40.402882 2368 server.go:927] "Client rotation is on, will bootstrap in background"
May 12 23:50:40.441352 kubelet[2368]: E0512 23:50:40.441323 2368 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.59:6443: connect: connection refused
May 12 23:50:40.441352 kubelet[2368]: I0512 23:50:40.441539 2368 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 12 23:50:40.452794 kubelet[2368]: I0512 23:50:40.452746 2368 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 12 23:50:40.454385 kubelet[2368]: I0512 23:50:40.454319 2368 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 12 23:50:40.454567 kubelet[2368]: I0512 23:50:40.454379 2368 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 12 23:50:40.454712 kubelet[2368]: I0512 23:50:40.454690 2368 topology_manager.go:138] "Creating topology manager with none policy"
May 12 23:50:40.454712 kubelet[2368]: I0512 23:50:40.454705 2368 container_manager_linux.go:301] "Creating device plugin manager"
May 12 23:50:40.454977 kubelet[2368]: I0512 23:50:40.454956 2368 state_mem.go:36] "Initialized new in-memory state store"
May 12 23:50:40.456071 kubelet[2368]: I0512 23:50:40.456050 2368 kubelet.go:400] "Attempting to sync node with API server"
May 12 23:50:40.456104 kubelet[2368]: I0512 23:50:40.456083 2368 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 12 23:50:40.457206 kubelet[2368]: I0512 23:50:40.456351 2368 kubelet.go:312] "Adding apiserver pod source"
May 12 23:50:40.457206 kubelet[2368]: I0512 23:50:40.456435 2368 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 12 23:50:40.457531 kubelet[2368]: W0512 23:50:40.457353 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 12 23:50:40.457531 kubelet[2368]: E0512 23:50:40.457406 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 12 23:50:40.457531 kubelet[2368]: W0512 23:50:40.457447 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 12 23:50:40.457531 kubelet[2368]: E0512 23:50:40.457500 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 12 23:50:40.457906 kubelet[2368]: I0512 23:50:40.457885 2368 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 12 23:50:40.458312 kubelet[2368]: I0512 23:50:40.458301 2368 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 12 23:50:40.458426 kubelet[2368]: W0512 23:50:40.458406 2368 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 12 23:50:40.459509 kubelet[2368]: I0512 23:50:40.459476 2368 server.go:1264] "Started kubelet"
May 12 23:50:40.460213 kubelet[2368]: I0512 23:50:40.459977 2368 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 12 23:50:40.466973 kubelet[2368]: I0512 23:50:40.466004 2368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 12 23:50:40.466973 kubelet[2368]: I0512 23:50:40.466397 2368 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 12 23:50:40.466973 kubelet[2368]: I0512 23:50:40.466654 2368 server.go:455] "Adding debug handlers to kubelet server"
May 12 23:50:40.466973 kubelet[2368]: I0512 23:50:40.466825 2368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 12 23:50:40.467451 kubelet[2368]: E0512 23:50:40.467264 2368 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eec9765dbb610 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-12 23:50:40.459445776 +0000 UTC m=+1.515887281,LastTimestamp:2025-05-12 23:50:40.459445776 +0000 UTC m=+1.515887281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 12 23:50:40.467708 kubelet[2368]: I0512 23:50:40.467687 2368 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 12 23:50:40.467789 kubelet[2368]: I0512 23:50:40.467774 2368 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 12 23:50:40.469216 kubelet[2368]: I0512 23:50:40.469191 2368 reconciler.go:26] "Reconciler: start to sync state"
May 12 23:50:40.469623 kubelet[2368]: W0512 23:50:40.469569 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 12 23:50:40.469623 kubelet[2368]: E0512 23:50:40.469625 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 12 23:50:40.469745 kubelet[2368]: E0512 23:50:40.469698 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms"
May 12 23:50:40.470345 kubelet[2368]: I0512 23:50:40.470312 2368 factory.go:221] Registration of the systemd container factory successfully
May 12 23:50:40.470432 kubelet[2368]: I0512 23:50:40.470403 2368 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 12 23:50:40.472281 kubelet[2368]: E0512 23:50:40.472255 2368 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 12 23:50:40.472406 kubelet[2368]: I0512 23:50:40.472384 2368 factory.go:221] Registration of the containerd container factory successfully
May 12 23:50:40.484311 kubelet[2368]: I0512 23:50:40.484261 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 12 23:50:40.485300 kubelet[2368]: I0512 23:50:40.485274 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 12 23:50:40.485487 kubelet[2368]: I0512 23:50:40.485470 2368 status_manager.go:217] "Starting to sync pod status with apiserver"
May 12 23:50:40.485513 kubelet[2368]: I0512 23:50:40.485491 2368 kubelet.go:2337] "Starting kubelet main sync loop"
May 12 23:50:40.485570 kubelet[2368]: E0512 23:50:40.485534 2368 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 12 23:50:40.490345 kubelet[2368]: W0512 23:50:40.489704 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 12 23:50:40.490345 kubelet[2368]: E0512 23:50:40.489762 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 12 23:50:40.492385 kubelet[2368]: I0512 23:50:40.492361 2368 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 12 23:50:40.492385 kubelet[2368]: I0512 23:50:40.492379 2368 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 12 23:50:40.492494 kubelet[2368]: I0512 23:50:40.492398 2368 state_mem.go:36] "Initialized new in-memory state store"
May 12 23:50:40.556049 kubelet[2368]: I0512 23:50:40.556010 2368 policy_none.go:49] "None policy: Start"
May 12 23:50:40.556774 kubelet[2368]: I0512 23:50:40.556751 2368 memory_manager.go:170] "Starting memorymanager" policy="None"
May 12 23:50:40.556834 kubelet[2368]: I0512 23:50:40.556785 2368 state_mem.go:35] "Initializing new in-memory state store"
May 12 23:50:40.561753 kubelet[2368]: I0512 23:50:40.561720 2368 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 12 23:50:40.561953 kubelet[2368]: I0512 23:50:40.561905 2368 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 12 23:50:40.562020 kubelet[2368]: I0512 23:50:40.562007 2368 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 12 23:50:40.564696 kubelet[2368]: E0512 23:50:40.564640 2368 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 12 23:50:40.568658 kubelet[2368]: I0512 23:50:40.568633 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 12 23:50:40.569052 kubelet[2368]: E0512 23:50:40.569019 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
May 12 23:50:40.586422 kubelet[2368]: I0512 23:50:40.586372 2368 topology_manager.go:215] "Topology Admit Handler" podUID="9208a29a2e0171ee2b727dd42860679b" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 12 23:50:40.587521 kubelet[2368]: I0512 23:50:40.587490 2368 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 12 23:50:40.589522 kubelet[2368]: I0512 23:50:40.588752 2368 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 12 23:50:40.670274 kubelet[2368]: E0512 23:50:40.670150 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms"
May 12 23:50:40.670361 kubelet[2368]: I0512 23:50:40.670248 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9208a29a2e0171ee2b727dd42860679b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9208a29a2e0171ee2b727dd42860679b\") " pod="kube-system/kube-apiserver-localhost"
May 12 23:50:40.670361 kubelet[2368]: I0512 23:50:40.670333 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 12 23:50:40.670401 kubelet[2368]: I0512 23:50:40.670359 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 12 23:50:40.670401 kubelet[2368]: I0512 23:50:40.670377 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName:
\"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 12 23:50:40.670401 kubelet[2368]: I0512 23:50:40.670392 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9208a29a2e0171ee2b727dd42860679b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9208a29a2e0171ee2b727dd42860679b\") " pod="kube-system/kube-apiserver-localhost" May 12 23:50:40.670486 kubelet[2368]: I0512 23:50:40.670408 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9208a29a2e0171ee2b727dd42860679b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9208a29a2e0171ee2b727dd42860679b\") " pod="kube-system/kube-apiserver-localhost" May 12 23:50:40.670486 kubelet[2368]: I0512 23:50:40.670431 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:50:40.670486 kubelet[2368]: I0512 23:50:40.670446 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:50:40.670486 kubelet[2368]: I0512 23:50:40.670462 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:50:40.771294 kubelet[2368]: I0512 23:50:40.771259 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:50:40.771629 kubelet[2368]: E0512 23:50:40.771592 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 12 23:50:40.891332 kubelet[2368]: E0512 23:50:40.891298 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:40.892043 containerd[1559]: time="2025-05-12T23:50:40.891996776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9208a29a2e0171ee2b727dd42860679b,Namespace:kube-system,Attempt:0,}" May 12 23:50:40.894145 kubelet[2368]: E0512 23:50:40.894124 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:40.894296 kubelet[2368]: E0512 23:50:40.894267 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:40.894521 containerd[1559]: time="2025-05-12T23:50:40.894477216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 12 23:50:40.894615 containerd[1559]: time="2025-05-12T23:50:40.894585696Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 12 23:50:41.071267 kubelet[2368]: E0512 23:50:41.071225 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" May 12 23:50:41.173608 kubelet[2368]: I0512 23:50:41.173582 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:50:41.173914 kubelet[2368]: E0512 23:50:41.173887 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 12 23:50:41.370982 kubelet[2368]: W0512 23:50:41.370850 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 12 23:50:41.370982 kubelet[2368]: E0512 23:50:41.370917 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 12 23:50:41.431587 kubelet[2368]: W0512 23:50:41.431506 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 12 23:50:41.431587 kubelet[2368]: E0512 23:50:41.431567 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 12 23:50:41.457644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769795644.mount: Deactivated successfully. May 12 23:50:41.462376 containerd[1559]: time="2025-05-12T23:50:41.462319496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:50:41.464246 containerd[1559]: time="2025-05-12T23:50:41.464188296Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 12 23:50:41.468348 containerd[1559]: time="2025-05-12T23:50:41.468304936Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:50:41.470244 containerd[1559]: time="2025-05-12T23:50:41.470203056Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:50:41.471131 containerd[1559]: time="2025-05-12T23:50:41.471080936Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:50:41.474817 containerd[1559]: time="2025-05-12T23:50:41.474666936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 12 23:50:41.482039 containerd[1559]: time="2025-05-12T23:50:41.481977656Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 12 23:50:41.482787 containerd[1559]: time="2025-05-12T23:50:41.482718856Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:50:41.484265 containerd[1559]: time="2025-05-12T23:50:41.483953776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.87188ms" May 12 23:50:41.488455 containerd[1559]: time="2025-05-12T23:50:41.488396296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 593.85336ms" May 12 23:50:41.491861 containerd[1559]: time="2025-05-12T23:50:41.491814496Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 597.16344ms" May 12 23:50:41.570924 kubelet[2368]: W0512 23:50:41.568272 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 12 23:50:41.570924 kubelet[2368]: E0512 23:50:41.568338 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 12 23:50:41.643872 containerd[1559]: time="2025-05-12T23:50:41.641289856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:50:41.643872 containerd[1559]: time="2025-05-12T23:50:41.641668136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:50:41.643872 containerd[1559]: time="2025-05-12T23:50:41.641700376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:50:41.643872 containerd[1559]: time="2025-05-12T23:50:41.642036336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:50:41.645791 containerd[1559]: time="2025-05-12T23:50:41.645653736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:50:41.645887 containerd[1559]: time="2025-05-12T23:50:41.645813696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:50:41.645887 containerd[1559]: time="2025-05-12T23:50:41.645831096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:50:41.646441 containerd[1559]: time="2025-05-12T23:50:41.646367176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:50:41.646441 containerd[1559]: time="2025-05-12T23:50:41.646422296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:50:41.646441 containerd[1559]: time="2025-05-12T23:50:41.646439656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:50:41.646614 containerd[1559]: time="2025-05-12T23:50:41.646559816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:50:41.647893 containerd[1559]: time="2025-05-12T23:50:41.645949736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:50:41.708081 containerd[1559]: time="2025-05-12T23:50:41.707875456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d9dd00827454fd7176fcd1965656518709295b883996c8c352ab0bfeebb2079\"" May 12 23:50:41.708355 containerd[1559]: time="2025-05-12T23:50:41.708319336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9208a29a2e0171ee2b727dd42860679b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a74200c6bb687353dcbe5d32e863635fa89e219628f47b1a8e0c4f3f33dd5f24\"" May 12 23:50:41.708814 containerd[1559]: time="2025-05-12T23:50:41.708782616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa573a18ec61562111c22388f8e0c6d5acca135e0e137c4f8beb621ad8c00624\"" May 12 23:50:41.709953 kubelet[2368]: E0512 23:50:41.709389 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:41.709953 kubelet[2368]: E0512 23:50:41.709395 2368 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:41.710380 kubelet[2368]: E0512 23:50:41.710221 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:41.712547 containerd[1559]: time="2025-05-12T23:50:41.712515536Z" level=info msg="CreateContainer within sandbox \"fa573a18ec61562111c22388f8e0c6d5acca135e0e137c4f8beb621ad8c00624\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 12 23:50:41.712837 containerd[1559]: time="2025-05-12T23:50:41.712514896Z" level=info msg="CreateContainer within sandbox \"4d9dd00827454fd7176fcd1965656518709295b883996c8c352ab0bfeebb2079\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 12 23:50:41.712997 containerd[1559]: time="2025-05-12T23:50:41.712908176Z" level=info msg="CreateContainer within sandbox \"a74200c6bb687353dcbe5d32e863635fa89e219628f47b1a8e0c4f3f33dd5f24\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 12 23:50:41.734423 containerd[1559]: time="2025-05-12T23:50:41.734368096Z" level=info msg="CreateContainer within sandbox \"a74200c6bb687353dcbe5d32e863635fa89e219628f47b1a8e0c4f3f33dd5f24\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e0c76f7bcefa32c5ce3b6e3f8c03b9f632232d7eab24cc87b5f663f6cc2afa2e\"" May 12 23:50:41.735120 containerd[1559]: time="2025-05-12T23:50:41.735089736Z" level=info msg="StartContainer for \"e0c76f7bcefa32c5ce3b6e3f8c03b9f632232d7eab24cc87b5f663f6cc2afa2e\"" May 12 23:50:41.735874 containerd[1559]: time="2025-05-12T23:50:41.735822416Z" level=info msg="CreateContainer within sandbox \"4d9dd00827454fd7176fcd1965656518709295b883996c8c352ab0bfeebb2079\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"3ff03ccf77b9366877e314bd01ce8e4144be697d558dc5aa305125d20c8095b7\"" May 12 23:50:41.736357 containerd[1559]: time="2025-05-12T23:50:41.736329536Z" level=info msg="StartContainer for \"3ff03ccf77b9366877e314bd01ce8e4144be697d558dc5aa305125d20c8095b7\"" May 12 23:50:41.739847 containerd[1559]: time="2025-05-12T23:50:41.739725336Z" level=info msg="CreateContainer within sandbox \"fa573a18ec61562111c22388f8e0c6d5acca135e0e137c4f8beb621ad8c00624\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b01d21e8175c8de540d158a726e42471fb6016cbe0b4cfa43423340662fb188f\"" May 12 23:50:41.741393 containerd[1559]: time="2025-05-12T23:50:41.741361336Z" level=info msg="StartContainer for \"b01d21e8175c8de540d158a726e42471fb6016cbe0b4cfa43423340662fb188f\"" May 12 23:50:41.801815 containerd[1559]: time="2025-05-12T23:50:41.801743616Z" level=info msg="StartContainer for \"3ff03ccf77b9366877e314bd01ce8e4144be697d558dc5aa305125d20c8095b7\" returns successfully" May 12 23:50:41.802343 containerd[1559]: time="2025-05-12T23:50:41.801751456Z" level=info msg="StartContainer for \"e0c76f7bcefa32c5ce3b6e3f8c03b9f632232d7eab24cc87b5f663f6cc2afa2e\" returns successfully" May 12 23:50:41.819396 containerd[1559]: time="2025-05-12T23:50:41.819328336Z" level=info msg="StartContainer for \"b01d21e8175c8de540d158a726e42471fb6016cbe0b4cfa43423340662fb188f\" returns successfully" May 12 23:50:41.871821 kubelet[2368]: E0512 23:50:41.871768 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="1.6s" May 12 23:50:41.949913 kubelet[2368]: W0512 23:50:41.949759 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: 
connect: connection refused May 12 23:50:41.951566 kubelet[2368]: E0512 23:50:41.951515 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 12 23:50:41.977119 kubelet[2368]: I0512 23:50:41.976864 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:50:41.977334 kubelet[2368]: E0512 23:50:41.977303 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 12 23:50:42.516203 kubelet[2368]: E0512 23:50:42.512075 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:42.525326 kubelet[2368]: E0512 23:50:42.525292 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:42.532469 kubelet[2368]: E0512 23:50:42.532438 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:43.478323 kubelet[2368]: E0512 23:50:43.478246 2368 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 12 23:50:43.541264 kubelet[2368]: E0512 23:50:43.538272 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:43.579295 kubelet[2368]: I0512 23:50:43.579230 2368 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:50:43.587709 kubelet[2368]: I0512 23:50:43.587621 2368 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 12 23:50:43.594932 kubelet[2368]: E0512 23:50:43.594900 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 23:50:43.695065 kubelet[2368]: E0512 23:50:43.695019 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 23:50:43.795548 kubelet[2368]: E0512 23:50:43.795504 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 23:50:43.896245 kubelet[2368]: E0512 23:50:43.896199 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 23:50:43.997288 kubelet[2368]: E0512 23:50:43.997247 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 23:50:44.098698 kubelet[2368]: E0512 23:50:44.098563 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 23:50:44.187001 kubelet[2368]: E0512 23:50:44.186845 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:44.199529 kubelet[2368]: E0512 23:50:44.199488 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 23:50:44.462975 kubelet[2368]: I0512 23:50:44.461885 2368 apiserver.go:52] "Watching apiserver" May 12 23:50:44.468757 kubelet[2368]: I0512 23:50:44.468728 2368 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 23:50:44.645486 kubelet[2368]: E0512 23:50:44.643945 2368 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:45.074955 kubelet[2368]: E0512 23:50:45.074920 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:45.414456 systemd[1]: Reloading requested from client PID 2654 ('systemctl') (unit session-5.scope)... May 12 23:50:45.414472 systemd[1]: Reloading... May 12 23:50:45.480259 zram_generator::config[2696]: No configuration found. May 12 23:50:45.541958 kubelet[2368]: E0512 23:50:45.541905 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:45.542081 kubelet[2368]: E0512 23:50:45.542062 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:45.694964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:50:45.754706 systemd[1]: Reloading finished in 339 ms. May 12 23:50:45.782275 kubelet[2368]: I0512 23:50:45.782144 2368 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 23:50:45.782286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:50:45.794347 systemd[1]: kubelet.service: Deactivated successfully. May 12 23:50:45.794751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:50:45.802672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 12 23:50:45.900399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:50:45.905693 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 23:50:45.958258 kubelet[2745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 23:50:45.958258 kubelet[2745]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 12 23:50:45.958258 kubelet[2745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 23:50:45.958906 kubelet[2745]: I0512 23:50:45.958529 2745 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 23:50:45.964236 kubelet[2745]: I0512 23:50:45.963627 2745 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 12 23:50:45.964236 kubelet[2745]: I0512 23:50:45.963658 2745 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 23:50:45.964236 kubelet[2745]: I0512 23:50:45.963875 2745 server.go:927] "Client rotation is on, will bootstrap in background" May 12 23:50:45.965286 kubelet[2745]: I0512 23:50:45.965260 2745 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 12 23:50:45.966490 kubelet[2745]: I0512 23:50:45.966449 2745 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 23:50:45.974017 kubelet[2745]: I0512 23:50:45.973362 2745 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 12 23:50:45.974017 kubelet[2745]: I0512 23:50:45.973871 2745 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 23:50:45.974572 kubelet[2745]: I0512 23:50:45.973900 2745 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManage
rReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 12 23:50:45.974715 kubelet[2745]: I0512 23:50:45.974698 2745 topology_manager.go:138] "Creating topology manager with none policy" May 12 23:50:45.974766 kubelet[2745]: I0512 23:50:45.974758 2745 container_manager_linux.go:301] "Creating device plugin manager" May 12 23:50:45.974859 kubelet[2745]: I0512 23:50:45.974848 2745 state_mem.go:36] "Initialized new in-memory state store" May 12 23:50:45.975029 kubelet[2745]: I0512 23:50:45.975011 2745 kubelet.go:400] "Attempting to sync node with API server" May 12 23:50:45.975096 kubelet[2745]: I0512 23:50:45.975088 2745 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 12 23:50:45.975166 kubelet[2745]: I0512 23:50:45.975158 2745 kubelet.go:312] "Adding apiserver pod source" May 12 23:50:45.975251 kubelet[2745]: I0512 23:50:45.975238 2745 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 23:50:45.978573 kubelet[2745]: I0512 23:50:45.978550 2745 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 12 23:50:45.978883 kubelet[2745]: I0512 23:50:45.978732 2745 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 23:50:45.979693 kubelet[2745]: I0512 23:50:45.979101 2745 server.go:1264] "Started kubelet" May 12 23:50:45.979693 kubelet[2745]: I0512 23:50:45.979298 2745 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 12 23:50:45.979693 kubelet[2745]: I0512 23:50:45.979304 2745 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 23:50:45.979693 kubelet[2745]: I0512 23:50:45.979573 2745 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 
23:50:45.982985 kubelet[2745]: I0512 23:50:45.980258 2745 server.go:455] "Adding debug handlers to kubelet server" May 12 23:50:45.982985 kubelet[2745]: I0512 23:50:45.980980 2745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 23:50:45.982985 kubelet[2745]: I0512 23:50:45.981510 2745 volume_manager.go:291] "Starting Kubelet Volume Manager" May 12 23:50:45.982985 kubelet[2745]: I0512 23:50:45.981615 2745 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 12 23:50:45.982985 kubelet[2745]: I0512 23:50:45.981744 2745 reconciler.go:26] "Reconciler: start to sync state" May 12 23:50:45.982985 kubelet[2745]: I0512 23:50:45.982823 2745 factory.go:221] Registration of the systemd container factory successfully May 12 23:50:45.982985 kubelet[2745]: I0512 23:50:45.982924 2745 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 23:50:45.983422 kubelet[2745]: E0512 23:50:45.983206 2745 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 12 23:50:45.998213 kubelet[2745]: I0512 23:50:45.998021 2745 factory.go:221] Registration of the containerd container factory successfully May 12 23:50:46.003304 kubelet[2745]: I0512 23:50:46.003262 2745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 23:50:46.004274 kubelet[2745]: I0512 23:50:46.004248 2745 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 12 23:50:46.004330 kubelet[2745]: I0512 23:50:46.004284 2745 status_manager.go:217] "Starting to sync pod status with apiserver" May 12 23:50:46.004330 kubelet[2745]: I0512 23:50:46.004305 2745 kubelet.go:2337] "Starting kubelet main sync loop" May 12 23:50:46.004405 kubelet[2745]: E0512 23:50:46.004345 2745 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 23:50:46.043893 kubelet[2745]: I0512 23:50:46.043865 2745 cpu_manager.go:214] "Starting CPU manager" policy="none" May 12 23:50:46.044321 kubelet[2745]: I0512 23:50:46.044005 2745 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 12 23:50:46.044321 kubelet[2745]: I0512 23:50:46.044029 2745 state_mem.go:36] "Initialized new in-memory state store" May 12 23:50:46.044321 kubelet[2745]: I0512 23:50:46.044210 2745 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 12 23:50:46.044321 kubelet[2745]: I0512 23:50:46.044225 2745 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 12 23:50:46.044321 kubelet[2745]: I0512 23:50:46.044244 2745 policy_none.go:49] "None policy: Start" May 12 23:50:46.044912 kubelet[2745]: I0512 23:50:46.044853 2745 memory_manager.go:170] "Starting memorymanager" policy="None" May 12 23:50:46.044912 kubelet[2745]: I0512 23:50:46.044879 2745 state_mem.go:35] "Initializing new in-memory state store" May 12 23:50:46.045102 kubelet[2745]: I0512 23:50:46.045086 2745 state_mem.go:75] "Updated machine memory state" May 12 23:50:46.046197 kubelet[2745]: I0512 23:50:46.046165 2745 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 23:50:46.047106 kubelet[2745]: I0512 23:50:46.046337 2745 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 23:50:46.047106 kubelet[2745]: I0512 23:50:46.046456 2745 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 23:50:46.085302 kubelet[2745]: I0512 23:50:46.085264 2745 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 23:50:46.092382 kubelet[2745]: I0512 23:50:46.092348 2745 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 12 23:50:46.092491 kubelet[2745]: I0512 23:50:46.092454 2745 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 12 23:50:46.104725 kubelet[2745]: I0512 23:50:46.104677 2745 topology_manager.go:215] "Topology Admit Handler" podUID="9208a29a2e0171ee2b727dd42860679b" podNamespace="kube-system" podName="kube-apiserver-localhost" May 12 23:50:46.104827 kubelet[2745]: I0512 23:50:46.104790 2745 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 12 23:50:46.104890 kubelet[2745]: I0512 23:50:46.104833 2745 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 12 23:50:46.111995 kubelet[2745]: E0512 23:50:46.111859 2745 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 12 23:50:46.112631 kubelet[2745]: E0512 23:50:46.112607 2745 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 12 23:50:46.182986 kubelet[2745]: I0512 23:50:46.182850 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9208a29a2e0171ee2b727dd42860679b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9208a29a2e0171ee2b727dd42860679b\") " pod="kube-system/kube-apiserver-localhost" May 12 
23:50:46.182986 kubelet[2745]: I0512 23:50:46.182891 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:50:46.182986 kubelet[2745]: I0512 23:50:46.182918 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:50:46.182986 kubelet[2745]: I0512 23:50:46.182956 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 12 23:50:46.182986 kubelet[2745]: I0512 23:50:46.182975 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9208a29a2e0171ee2b727dd42860679b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9208a29a2e0171ee2b727dd42860679b\") " pod="kube-system/kube-apiserver-localhost" May 12 23:50:46.183214 kubelet[2745]: I0512 23:50:46.183012 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9208a29a2e0171ee2b727dd42860679b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9208a29a2e0171ee2b727dd42860679b\") " pod="kube-system/kube-apiserver-localhost" May 12 23:50:46.183214 kubelet[2745]: I0512 
23:50:46.183050 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:50:46.183214 kubelet[2745]: I0512 23:50:46.183070 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:50:46.183214 kubelet[2745]: I0512 23:50:46.183086 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:50:46.413489 kubelet[2745]: E0512 23:50:46.413296 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:46.413489 kubelet[2745]: E0512 23:50:46.413398 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:46.413928 kubelet[2745]: E0512 23:50:46.413863 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:46.976021 kubelet[2745]: I0512 23:50:46.975776 2745 apiserver.go:52] "Watching apiserver" 
May 12 23:50:46.982560 kubelet[2745]: I0512 23:50:46.982522 2745 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 23:50:47.023909 kubelet[2745]: E0512 23:50:47.022494 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:47.023909 kubelet[2745]: E0512 23:50:47.023576 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:47.047889 kubelet[2745]: E0512 23:50:47.047309 2745 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 12 23:50:47.047889 kubelet[2745]: I0512 23:50:47.047632 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.047617056 podStartE2EDuration="3.047617056s" podCreationTimestamp="2025-05-12 23:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:50:47.047526376 +0000 UTC m=+1.137984401" watchObservedRunningTime="2025-05-12 23:50:47.047617056 +0000 UTC m=+1.138075081" May 12 23:50:47.048042 kubelet[2745]: E0512 23:50:47.047937 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:47.067841 kubelet[2745]: I0512 23:50:47.067772 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.067751616 podStartE2EDuration="2.067751616s" podCreationTimestamp="2025-05-12 23:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:50:47.058422696 +0000 UTC m=+1.148880721" watchObservedRunningTime="2025-05-12 23:50:47.067751616 +0000 UTC m=+1.158209641" May 12 23:50:47.076441 kubelet[2745]: I0512 23:50:47.076370 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.076166056 podStartE2EDuration="1.076166056s" podCreationTimestamp="2025-05-12 23:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:50:47.068367376 +0000 UTC m=+1.158825441" watchObservedRunningTime="2025-05-12 23:50:47.076166056 +0000 UTC m=+1.166624081" May 12 23:50:47.579356 sudo[1718]: pam_unix(sudo:session): session closed for user root May 12 23:50:47.581160 sshd[1717]: Connection closed by 10.0.0.1 port 43496 May 12 23:50:47.581049 sshd-session[1711]: pam_unix(sshd:session): session closed for user core May 12 23:50:47.584769 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:43496.service: Deactivated successfully. May 12 23:50:47.586652 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. May 12 23:50:47.586822 systemd[1]: session-5.scope: Deactivated successfully. May 12 23:50:47.587806 systemd-logind[1539]: Removed session 5. 
May 12 23:50:48.023719 kubelet[2745]: E0512 23:50:48.023397 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:48.863741 kubelet[2745]: E0512 23:50:48.863698 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:49.024755 kubelet[2745]: E0512 23:50:49.024724 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:51.585874 kubelet[2745]: E0512 23:50:51.585781 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:52.029204 kubelet[2745]: E0512 23:50:52.028337 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:54.281273 kubelet[2745]: E0512 23:50:54.281206 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:55.032839 kubelet[2745]: E0512 23:50:55.032775 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:56.034656 kubelet[2745]: E0512 23:50:56.034559 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:50:58.875477 kubelet[2745]: E0512 23:50:58.875406 2745 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:51:00.245482 kubelet[2745]: I0512 23:51:00.245339 2745 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 12 23:51:00.245842 containerd[1559]: time="2025-05-12T23:51:00.245807177Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 12 23:51:00.246293 kubelet[2745]: I0512 23:51:00.246047 2745 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 12 23:51:00.696770 update_engine[1543]: I20250512 23:51:00.696210 1543 update_attempter.cc:509] Updating boot flags... May 12 23:51:00.728249 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2818) May 12 23:51:00.757287 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2820) May 12 23:51:00.806214 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2820) May 12 23:51:01.087942 kubelet[2745]: I0512 23:51:01.087845 2745 topology_manager.go:215] "Topology Admit Handler" podUID="9b50bb12-9248-4864-83fe-aa6ca1951c67" podNamespace="kube-system" podName="kube-proxy-g4s2x" May 12 23:51:01.091889 kubelet[2745]: I0512 23:51:01.091405 2745 topology_manager.go:215] "Topology Admit Handler" podUID="309aa4cd-e8c8-42ce-81ec-0bacca1fe59f" podNamespace="kube-flannel" podName="kube-flannel-ds-sllzn" May 12 23:51:01.285081 kubelet[2745]: I0512 23:51:01.285033 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b50bb12-9248-4864-83fe-aa6ca1951c67-lib-modules\") pod \"kube-proxy-g4s2x\" (UID: \"9b50bb12-9248-4864-83fe-aa6ca1951c67\") " pod="kube-system/kube-proxy-g4s2x" May 12 23:51:01.285081 
kubelet[2745]: I0512 23:51:01.285076 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr692\" (UniqueName: \"kubernetes.io/projected/9b50bb12-9248-4864-83fe-aa6ca1951c67-kube-api-access-jr692\") pod \"kube-proxy-g4s2x\" (UID: \"9b50bb12-9248-4864-83fe-aa6ca1951c67\") " pod="kube-system/kube-proxy-g4s2x" May 12 23:51:01.285639 kubelet[2745]: I0512 23:51:01.285098 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b50bb12-9248-4864-83fe-aa6ca1951c67-kube-proxy\") pod \"kube-proxy-g4s2x\" (UID: \"9b50bb12-9248-4864-83fe-aa6ca1951c67\") " pod="kube-system/kube-proxy-g4s2x" May 12 23:51:01.285639 kubelet[2745]: I0512 23:51:01.285117 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/309aa4cd-e8c8-42ce-81ec-0bacca1fe59f-run\") pod \"kube-flannel-ds-sllzn\" (UID: \"309aa4cd-e8c8-42ce-81ec-0bacca1fe59f\") " pod="kube-flannel/kube-flannel-ds-sllzn" May 12 23:51:01.285639 kubelet[2745]: I0512 23:51:01.285135 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvf6z\" (UniqueName: \"kubernetes.io/projected/309aa4cd-e8c8-42ce-81ec-0bacca1fe59f-kube-api-access-hvf6z\") pod \"kube-flannel-ds-sllzn\" (UID: \"309aa4cd-e8c8-42ce-81ec-0bacca1fe59f\") " pod="kube-flannel/kube-flannel-ds-sllzn" May 12 23:51:01.285639 kubelet[2745]: I0512 23:51:01.285152 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/309aa4cd-e8c8-42ce-81ec-0bacca1fe59f-cni\") pod \"kube-flannel-ds-sllzn\" (UID: \"309aa4cd-e8c8-42ce-81ec-0bacca1fe59f\") " pod="kube-flannel/kube-flannel-ds-sllzn" May 12 23:51:01.285639 kubelet[2745]: I0512 23:51:01.285171 2745 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b50bb12-9248-4864-83fe-aa6ca1951c67-xtables-lock\") pod \"kube-proxy-g4s2x\" (UID: \"9b50bb12-9248-4864-83fe-aa6ca1951c67\") " pod="kube-system/kube-proxy-g4s2x" May 12 23:51:01.285737 kubelet[2745]: I0512 23:51:01.285203 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/309aa4cd-e8c8-42ce-81ec-0bacca1fe59f-cni-plugin\") pod \"kube-flannel-ds-sllzn\" (UID: \"309aa4cd-e8c8-42ce-81ec-0bacca1fe59f\") " pod="kube-flannel/kube-flannel-ds-sllzn" May 12 23:51:01.285737 kubelet[2745]: I0512 23:51:01.285218 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/309aa4cd-e8c8-42ce-81ec-0bacca1fe59f-xtables-lock\") pod \"kube-flannel-ds-sllzn\" (UID: \"309aa4cd-e8c8-42ce-81ec-0bacca1fe59f\") " pod="kube-flannel/kube-flannel-ds-sllzn" May 12 23:51:01.285737 kubelet[2745]: I0512 23:51:01.285234 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/309aa4cd-e8c8-42ce-81ec-0bacca1fe59f-flannel-cfg\") pod \"kube-flannel-ds-sllzn\" (UID: \"309aa4cd-e8c8-42ce-81ec-0bacca1fe59f\") " pod="kube-flannel/kube-flannel-ds-sllzn" May 12 23:51:01.697760 kubelet[2745]: E0512 23:51:01.697652 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:51:01.698583 containerd[1559]: time="2025-05-12T23:51:01.698194287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g4s2x,Uid:9b50bb12-9248-4864-83fe-aa6ca1951c67,Namespace:kube-system,Attempt:0,}" May 12 23:51:01.699406 kubelet[2745]: E0512 23:51:01.699386 2745 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:51:01.699763 containerd[1559]: time="2025-05-12T23:51:01.699729622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sllzn,Uid:309aa4cd-e8c8-42ce-81ec-0bacca1fe59f,Namespace:kube-flannel,Attempt:0,}" May 12 23:51:01.726554 containerd[1559]: time="2025-05-12T23:51:01.726452882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:51:01.726786 containerd[1559]: time="2025-05-12T23:51:01.726730685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:51:01.726818 containerd[1559]: time="2025-05-12T23:51:01.726801086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:51:01.726839 containerd[1559]: time="2025-05-12T23:51:01.726817006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:51:01.727140 containerd[1559]: time="2025-05-12T23:51:01.726969567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:51:01.727140 containerd[1559]: time="2025-05-12T23:51:01.726990848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:51:01.727140 containerd[1559]: time="2025-05-12T23:51:01.727082609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:51:01.727760 containerd[1559]: time="2025-05-12T23:51:01.727719615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:51:01.764935 containerd[1559]: time="2025-05-12T23:51:01.764881098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g4s2x,Uid:9b50bb12-9248-4864-83fe-aa6ca1951c67,Namespace:kube-system,Attempt:0,} returns sandbox id \"400be8e8bda3aaa9e759a0d01b725efa4fb6bc92bc23046ed6ca1f85ad11aec7\"" May 12 23:51:01.765819 kubelet[2745]: E0512 23:51:01.765742 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:51:01.770819 containerd[1559]: time="2025-05-12T23:51:01.770699034Z" level=info msg="CreateContainer within sandbox \"400be8e8bda3aaa9e759a0d01b725efa4fb6bc92bc23046ed6ca1f85ad11aec7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 12 23:51:01.775186 containerd[1559]: time="2025-05-12T23:51:01.775103797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sllzn,Uid:309aa4cd-e8c8-42ce-81ec-0bacca1fe59f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"0386400b8a2338918ce147875aae91c11cd69e67d00861c5b5b311c08c0b4e21\"" May 12 23:51:01.775814 kubelet[2745]: E0512 23:51:01.775793 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:51:01.784516 containerd[1559]: time="2025-05-12T23:51:01.784483049Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 12 23:51:01.789636 containerd[1559]: time="2025-05-12T23:51:01.789590819Z" level=info msg="CreateContainer within sandbox \"400be8e8bda3aaa9e759a0d01b725efa4fb6bc92bc23046ed6ca1f85ad11aec7\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2bf30ecae0f4e9a5ea07ec7993ff7784be01c312a5cbd56e3f47fbc265bfa2c7\"" May 12 23:51:01.790730 containerd[1559]: time="2025-05-12T23:51:01.790457427Z" level=info msg="StartContainer for \"2bf30ecae0f4e9a5ea07ec7993ff7784be01c312a5cbd56e3f47fbc265bfa2c7\"" May 12 23:51:01.840706 containerd[1559]: time="2025-05-12T23:51:01.840653357Z" level=info msg="StartContainer for \"2bf30ecae0f4e9a5ea07ec7993ff7784be01c312a5cbd56e3f47fbc265bfa2c7\" returns successfully" May 12 23:51:02.046153 kubelet[2745]: E0512 23:51:02.046122 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:51:02.056407 kubelet[2745]: I0512 23:51:02.056314 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g4s2x" podStartSLOduration=1.056279668 podStartE2EDuration="1.056279668s" podCreationTimestamp="2025-05-12 23:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:51:02.056043186 +0000 UTC m=+16.146501211" watchObservedRunningTime="2025-05-12 23:51:02.056279668 +0000 UTC m=+16.146737693" May 12 23:51:03.015487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121027867.mount: Deactivated successfully. 
May 12 23:51:03.042196 containerd[1559]: time="2025-05-12T23:51:03.042130626Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:51:03.043041 containerd[1559]: time="2025-05-12T23:51:03.042765072Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" May 12 23:51:03.043758 containerd[1559]: time="2025-05-12T23:51:03.043713080Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:51:03.046709 containerd[1559]: time="2025-05-12T23:51:03.046677665Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:51:03.047971 containerd[1559]: time="2025-05-12T23:51:03.047942996Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.263261425s" May 12 23:51:03.048028 containerd[1559]: time="2025-05-12T23:51:03.047973317Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 12 23:51:03.050010 containerd[1559]: time="2025-05-12T23:51:03.049980454Z" level=info msg="CreateContainer within sandbox \"0386400b8a2338918ce147875aae91c11cd69e67d00861c5b5b311c08c0b4e21\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 12 23:51:03.059781 containerd[1559]: 
time="2025-05-12T23:51:03.059718617Z" level=info msg="CreateContainer within sandbox \"0386400b8a2338918ce147875aae91c11cd69e67d00861c5b5b311c08c0b4e21\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"4f55cee38b2a50234a8cf2db6f17a34098b8d166bc81788f2f66053a24d34da0\"" May 12 23:51:03.059794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907751273.mount: Deactivated successfully. May 12 23:51:03.060681 containerd[1559]: time="2025-05-12T23:51:03.060594225Z" level=info msg="StartContainer for \"4f55cee38b2a50234a8cf2db6f17a34098b8d166bc81788f2f66053a24d34da0\"" May 12 23:51:03.117563 containerd[1559]: time="2025-05-12T23:51:03.117468313Z" level=info msg="StartContainer for \"4f55cee38b2a50234a8cf2db6f17a34098b8d166bc81788f2f66053a24d34da0\" returns successfully" May 12 23:51:03.155887 containerd[1559]: time="2025-05-12T23:51:03.155794602Z" level=info msg="shim disconnected" id=4f55cee38b2a50234a8cf2db6f17a34098b8d166bc81788f2f66053a24d34da0 namespace=k8s.io May 12 23:51:03.155887 containerd[1559]: time="2025-05-12T23:51:03.155864882Z" level=warning msg="cleaning up after shim disconnected" id=4f55cee38b2a50234a8cf2db6f17a34098b8d166bc81788f2f66053a24d34da0 namespace=k8s.io May 12 23:51:03.156416 containerd[1559]: time="2025-05-12T23:51:03.156236245Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:51:04.050980 kubelet[2745]: E0512 23:51:04.050950 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:51:04.052357 containerd[1559]: time="2025-05-12T23:51:04.051906423Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 12 23:51:05.260442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982852279.mount: Deactivated successfully. 
May 12 23:51:06.009425 containerd[1559]: time="2025-05-12T23:51:06.009384456Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:51:06.010493 containerd[1559]: time="2025-05-12T23:51:06.009973901Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
May 12 23:51:06.011645 containerd[1559]: time="2025-05-12T23:51:06.011558432Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:51:06.016206 containerd[1559]: time="2025-05-12T23:51:06.015291018Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 23:51:06.017356 containerd[1559]: time="2025-05-12T23:51:06.016461306Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.964505963s"
May 12 23:51:06.017356 containerd[1559]: time="2025-05-12T23:51:06.016492827Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
May 12 23:51:06.021795 containerd[1559]: time="2025-05-12T23:51:06.021683063Z" level=info msg="CreateContainer within sandbox \"0386400b8a2338918ce147875aae91c11cd69e67d00861c5b5b311c08c0b4e21\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 12 23:51:06.034558 containerd[1559]: time="2025-05-12T23:51:06.034493634Z" level=info msg="CreateContainer within sandbox \"0386400b8a2338918ce147875aae91c11cd69e67d00861c5b5b311c08c0b4e21\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0d7dcfbd1a01ccb49b9bd2aa1af2dfa1f296945f01461d284eb8b11c41195dc5\""
May 12 23:51:06.035110 containerd[1559]: time="2025-05-12T23:51:06.035084078Z" level=info msg="StartContainer for \"0d7dcfbd1a01ccb49b9bd2aa1af2dfa1f296945f01461d284eb8b11c41195dc5\""
May 12 23:51:06.121515 containerd[1559]: time="2025-05-12T23:51:06.121442049Z" level=info msg="StartContainer for \"0d7dcfbd1a01ccb49b9bd2aa1af2dfa1f296945f01461d284eb8b11c41195dc5\" returns successfully"
May 12 23:51:06.147350 containerd[1559]: time="2025-05-12T23:51:06.147260391Z" level=info msg="shim disconnected" id=0d7dcfbd1a01ccb49b9bd2aa1af2dfa1f296945f01461d284eb8b11c41195dc5 namespace=k8s.io
May 12 23:51:06.147350 containerd[1559]: time="2025-05-12T23:51:06.147313072Z" level=warning msg="cleaning up after shim disconnected" id=0d7dcfbd1a01ccb49b9bd2aa1af2dfa1f296945f01461d284eb8b11c41195dc5 namespace=k8s.io
May 12 23:51:06.147350 containerd[1559]: time="2025-05-12T23:51:06.147321432Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 12 23:51:06.156020 kubelet[2745]: I0512 23:51:06.155952 2745 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 12 23:51:06.158100 containerd[1559]: time="2025-05-12T23:51:06.158061748Z" level=warning msg="cleanup warnings time=\"2025-05-12T23:51:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 12 23:51:06.177067 kubelet[2745]: I0512 23:51:06.177023 2745 topology_manager.go:215] "Topology Admit Handler" podUID="8b8c5d78-88e0-4a99-a524-3b8105f894f9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qhh8h"
May 12 23:51:06.180085 kubelet[2745]: I0512 23:51:06.177208 2745 topology_manager.go:215] "Topology Admit Handler" podUID="fece58fb-a797-4f1c-8141-c4ef824790ba" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5fwrf"
May 12 23:51:06.180408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d7dcfbd1a01ccb49b9bd2aa1af2dfa1f296945f01461d284eb8b11c41195dc5-rootfs.mount: Deactivated successfully.
May 12 23:51:06.318526 kubelet[2745]: I0512 23:51:06.318416 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b8c5d78-88e0-4a99-a524-3b8105f894f9-config-volume\") pod \"coredns-7db6d8ff4d-qhh8h\" (UID: \"8b8c5d78-88e0-4a99-a524-3b8105f894f9\") " pod="kube-system/coredns-7db6d8ff4d-qhh8h"
May 12 23:51:06.318526 kubelet[2745]: I0512 23:51:06.318459 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnc4t\" (UniqueName: \"kubernetes.io/projected/8b8c5d78-88e0-4a99-a524-3b8105f894f9-kube-api-access-qnc4t\") pod \"coredns-7db6d8ff4d-qhh8h\" (UID: \"8b8c5d78-88e0-4a99-a524-3b8105f894f9\") " pod="kube-system/coredns-7db6d8ff4d-qhh8h"
May 12 23:51:06.318526 kubelet[2745]: I0512 23:51:06.318488 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fece58fb-a797-4f1c-8141-c4ef824790ba-config-volume\") pod \"coredns-7db6d8ff4d-5fwrf\" (UID: \"fece58fb-a797-4f1c-8141-c4ef824790ba\") " pod="kube-system/coredns-7db6d8ff4d-5fwrf"
May 12 23:51:06.318526 kubelet[2745]: I0512 23:51:06.318508 2745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds4zl\" (UniqueName: \"kubernetes.io/projected/fece58fb-a797-4f1c-8141-c4ef824790ba-kube-api-access-ds4zl\") pod \"coredns-7db6d8ff4d-5fwrf\" (UID: \"fece58fb-a797-4f1c-8141-c4ef824790ba\") " pod="kube-system/coredns-7db6d8ff4d-5fwrf"
May 12 23:51:06.481003 kubelet[2745]: E0512 23:51:06.480970 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:06.481674 containerd[1559]: time="2025-05-12T23:51:06.481634875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhh8h,Uid:8b8c5d78-88e0-4a99-a524-3b8105f894f9,Namespace:kube-system,Attempt:0,}"
May 12 23:51:06.485217 kubelet[2745]: E0512 23:51:06.484386 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:06.485329 containerd[1559]: time="2025-05-12T23:51:06.484821258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5fwrf,Uid:fece58fb-a797-4f1c-8141-c4ef824790ba,Namespace:kube-system,Attempt:0,}"
May 12 23:51:06.550578 containerd[1559]: time="2025-05-12T23:51:06.550371641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhh8h,Uid:8b8c5d78-88e0-4a99-a524-3b8105f894f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff17b600249ad30623c6254b4938c4442a8b09171a9ced293ca3ae47d5356215\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 12 23:51:06.550943 kubelet[2745]: E0512 23:51:06.550900 2745 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff17b600249ad30623c6254b4938c4442a8b09171a9ced293ca3ae47d5356215\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 12 23:51:06.551020 kubelet[2745]: E0512 23:51:06.550973 2745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff17b600249ad30623c6254b4938c4442a8b09171a9ced293ca3ae47d5356215\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-qhh8h"
May 12 23:51:06.551020 kubelet[2745]: E0512 23:51:06.550992 2745 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff17b600249ad30623c6254b4938c4442a8b09171a9ced293ca3ae47d5356215\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-qhh8h"
May 12 23:51:06.551069 kubelet[2745]: E0512 23:51:06.551034 2745 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qhh8h_kube-system(8b8c5d78-88e0-4a99-a524-3b8105f894f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qhh8h_kube-system(8b8c5d78-88e0-4a99-a524-3b8105f894f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff17b600249ad30623c6254b4938c4442a8b09171a9ced293ca3ae47d5356215\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-qhh8h" podUID="8b8c5d78-88e0-4a99-a524-3b8105f894f9"
May 12 23:51:06.551613 containerd[1559]: time="2025-05-12T23:51:06.551478889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5fwrf,Uid:fece58fb-a797-4f1c-8141-c4ef824790ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"54135dbda9d88e406844942849f04d2dbcdd037862cbd3e46e53bbb440e7a96b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 12 23:51:06.551730 kubelet[2745]: E0512 23:51:06.551685 2745 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54135dbda9d88e406844942849f04d2dbcdd037862cbd3e46e53bbb440e7a96b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 12 23:51:06.551730 kubelet[2745]: E0512 23:51:06.551723 2745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54135dbda9d88e406844942849f04d2dbcdd037862cbd3e46e53bbb440e7a96b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-5fwrf"
May 12 23:51:06.551784 kubelet[2745]: E0512 23:51:06.551739 2745 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54135dbda9d88e406844942849f04d2dbcdd037862cbd3e46e53bbb440e7a96b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-5fwrf"
May 12 23:51:06.551784 kubelet[2745]: E0512 23:51:06.551766 2745 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5fwrf_kube-system(fece58fb-a797-4f1c-8141-c4ef824790ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5fwrf_kube-system(fece58fb-a797-4f1c-8141-c4ef824790ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54135dbda9d88e406844942849f04d2dbcdd037862cbd3e46e53bbb440e7a96b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-5fwrf" podUID="fece58fb-a797-4f1c-8141-c4ef824790ba"
May 12 23:51:07.064694 kubelet[2745]: E0512 23:51:07.064651 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:07.068037 containerd[1559]: time="2025-05-12T23:51:07.067869150Z" level=info msg="CreateContainer within sandbox \"0386400b8a2338918ce147875aae91c11cd69e67d00861c5b5b311c08c0b4e21\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
May 12 23:51:07.086157 containerd[1559]: time="2025-05-12T23:51:07.086065270Z" level=info msg="CreateContainer within sandbox \"0386400b8a2338918ce147875aae91c11cd69e67d00861c5b5b311c08c0b4e21\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"3534fa5e7eb77d88236e425f2a7974f0b5d159ea8cee446b686811b95bf38e7f\""
May 12 23:51:07.086589 containerd[1559]: time="2025-05-12T23:51:07.086544673Z" level=info msg="StartContainer for \"3534fa5e7eb77d88236e425f2a7974f0b5d159ea8cee446b686811b95bf38e7f\""
May 12 23:51:07.136581 containerd[1559]: time="2025-05-12T23:51:07.136525325Z" level=info msg="StartContainer for \"3534fa5e7eb77d88236e425f2a7974f0b5d159ea8cee446b686811b95bf38e7f\" returns successfully"
May 12 23:51:07.180801 systemd[1]: run-netns-cni\x2d7e6f8572\x2d34f3\x2db874\x2d1708\x2d040658ad25a0.mount: Deactivated successfully.
May 12 23:51:07.180953 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff17b600249ad30623c6254b4938c4442a8b09171a9ced293ca3ae47d5356215-shm.mount: Deactivated successfully.
May 12 23:51:08.069336 kubelet[2745]: E0512 23:51:08.069300 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:08.233562 systemd-networkd[1237]: flannel.1: Link UP
May 12 23:51:08.233571 systemd-networkd[1237]: flannel.1: Gained carrier
May 12 23:51:09.070327 kubelet[2745]: E0512 23:51:09.070292 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:09.548335 systemd-networkd[1237]: flannel.1: Gained IPv6LL
May 12 23:51:11.150482 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:38780.service - OpenSSH per-connection server daemon (10.0.0.1:38780).
May 12 23:51:11.188068 sshd[3397]: Accepted publickey for core from 10.0.0.1 port 38780 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:11.189600 sshd-session[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:11.193407 systemd-logind[1539]: New session 6 of user core.
May 12 23:51:11.204435 systemd[1]: Started session-6.scope - Session 6 of User core.
May 12 23:51:11.322829 sshd[3400]: Connection closed by 10.0.0.1 port 38780
May 12 23:51:11.323436 sshd-session[3397]: pam_unix(sshd:session): session closed for user core
May 12 23:51:11.327227 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:38780.service: Deactivated successfully.
May 12 23:51:11.329379 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit.
May 12 23:51:11.329442 systemd[1]: session-6.scope: Deactivated successfully.
May 12 23:51:11.330973 systemd-logind[1539]: Removed session 6.
May 12 23:51:16.338489 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:52856.service - OpenSSH per-connection server daemon (10.0.0.1:52856).
May 12 23:51:16.375812 sshd[3434]: Accepted publickey for core from 10.0.0.1 port 52856 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:16.377095 sshd-session[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:16.384193 systemd-logind[1539]: New session 7 of user core.
May 12 23:51:16.396480 systemd[1]: Started session-7.scope - Session 7 of User core.
May 12 23:51:16.515575 sshd[3437]: Connection closed by 10.0.0.1 port 52856
May 12 23:51:16.516693 sshd-session[3434]: pam_unix(sshd:session): session closed for user core
May 12 23:51:16.522909 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit.
May 12 23:51:16.524667 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:52856.service: Deactivated successfully.
May 12 23:51:16.527067 systemd[1]: session-7.scope: Deactivated successfully.
May 12 23:51:16.528437 systemd-logind[1539]: Removed session 7.
May 12 23:51:17.005608 kubelet[2745]: E0512 23:51:17.005562 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:17.006022 containerd[1559]: time="2025-05-12T23:51:17.005984151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhh8h,Uid:8b8c5d78-88e0-4a99-a524-3b8105f894f9,Namespace:kube-system,Attempt:0,}"
May 12 23:51:17.049121 systemd-networkd[1237]: cni0: Link UP
May 12 23:51:17.049127 systemd-networkd[1237]: cni0: Gained carrier
May 12 23:51:17.053761 systemd-networkd[1237]: cni0: Lost carrier
May 12 23:51:17.054907 systemd-networkd[1237]: veth8435424a: Link UP
May 12 23:51:17.056232 kernel: cni0: port 1(veth8435424a) entered blocking state
May 12 23:51:17.056288 kernel: cni0: port 1(veth8435424a) entered disabled state
May 12 23:51:17.056304 kernel: veth8435424a: entered allmulticast mode
May 12 23:51:17.057397 kernel: veth8435424a: entered promiscuous mode
May 12 23:51:17.058479 kernel: cni0: port 1(veth8435424a) entered blocking state
May 12 23:51:17.058523 kernel: cni0: port 1(veth8435424a) entered forwarding state
May 12 23:51:17.059297 kernel: cni0: port 1(veth8435424a) entered disabled state
May 12 23:51:17.072112 systemd-networkd[1237]: veth8435424a: Gained carrier
May 12 23:51:17.072298 kernel: cni0: port 1(veth8435424a) entered blocking state
May 12 23:51:17.072327 kernel: cni0: port 1(veth8435424a) entered forwarding state
May 12 23:51:17.072380 systemd-networkd[1237]: cni0: Gained carrier
May 12 23:51:17.074301 containerd[1559]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"}
May 12 23:51:17.074301 containerd[1559]: delegateAdd: netconf sent to delegate plugin:
May 12 23:51:17.091796 containerd[1559]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-12T23:51:17.091702449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 12 23:51:17.091796 containerd[1559]: time="2025-05-12T23:51:17.091766049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 12 23:51:17.091796 containerd[1559]: time="2025-05-12T23:51:17.091778009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 12 23:51:17.092859 containerd[1559]: time="2025-05-12T23:51:17.091862210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 12 23:51:17.111853 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 12 23:51:17.127999 containerd[1559]: time="2025-05-12T23:51:17.127965015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhh8h,Uid:8b8c5d78-88e0-4a99-a524-3b8105f894f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"41ae0596443d6dcce79de8c1b89d69f73c7a9ebadf64a855e51f8e8750d817b3\""
May 12 23:51:17.128895 kubelet[2745]: E0512 23:51:17.128872 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:17.130872 containerd[1559]: time="2025-05-12T23:51:17.130836545Z" level=info msg="CreateContainer within sandbox \"41ae0596443d6dcce79de8c1b89d69f73c7a9ebadf64a855e51f8e8750d817b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 12 23:51:17.156711 containerd[1559]: time="2025-05-12T23:51:17.156592315Z" level=info msg="CreateContainer within sandbox \"41ae0596443d6dcce79de8c1b89d69f73c7a9ebadf64a855e51f8e8750d817b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df7ff327a967a45e5fece12f446c718ad49f6ceadbeaaa2c14fb0413f711982f\""
May 12 23:51:17.157311 containerd[1559]: time="2025-05-12T23:51:17.157258837Z" level=info msg="StartContainer for \"df7ff327a967a45e5fece12f446c718ad49f6ceadbeaaa2c14fb0413f711982f\""
May 12 23:51:17.202989 containerd[1559]: time="2025-05-12T23:51:17.202885836Z" level=info msg="StartContainer for \"df7ff327a967a45e5fece12f446c718ad49f6ceadbeaaa2c14fb0413f711982f\" returns successfully"
May 12 23:51:18.088828 kubelet[2745]: E0512 23:51:18.088531 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:18.101455 kubelet[2745]: I0512 23:51:18.100945 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-sllzn" podStartSLOduration=12.860617639 podStartE2EDuration="17.100930655s" podCreationTimestamp="2025-05-12 23:51:01 +0000 UTC" firstStartedPulling="2025-05-12 23:51:01.777086057 +0000 UTC m=+15.867544082" lastFinishedPulling="2025-05-12 23:51:06.017399113 +0000 UTC m=+20.107857098" observedRunningTime="2025-05-12 23:51:08.077780091 +0000 UTC m=+22.168238116" watchObservedRunningTime="2025-05-12 23:51:18.100930655 +0000 UTC m=+32.191388680"
May 12 23:51:18.101455 kubelet[2745]: I0512 23:51:18.101057 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qhh8h" podStartSLOduration=17.101052656 podStartE2EDuration="17.101052656s" podCreationTimestamp="2025-05-12 23:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:51:18.100557094 +0000 UTC m=+32.191015079" watchObservedRunningTime="2025-05-12 23:51:18.101052656 +0000 UTC m=+32.191510721"
May 12 23:51:18.828383 systemd-networkd[1237]: veth8435424a: Gained IPv6LL
May 12 23:51:18.892503 systemd-networkd[1237]: cni0: Gained IPv6LL
May 12 23:51:19.005067 kubelet[2745]: E0512 23:51:19.005003 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:19.005821 containerd[1559]: time="2025-05-12T23:51:19.005699283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5fwrf,Uid:fece58fb-a797-4f1c-8141-c4ef824790ba,Namespace:kube-system,Attempt:0,}"
May 12 23:51:19.032788 systemd-networkd[1237]: veth792f90d2: Link UP
May 12 23:51:19.034430 kernel: cni0: port 2(veth792f90d2) entered blocking state
May 12 23:51:19.034517 kernel: cni0: port 2(veth792f90d2) entered disabled state
May 12 23:51:19.034542 kernel: veth792f90d2: entered allmulticast mode
May 12 23:51:19.034569 kernel: veth792f90d2: entered promiscuous mode
May 12 23:51:19.047283 kernel: cni0: port 2(veth792f90d2) entered blocking state
May 12 23:51:19.047379 kernel: cni0: port 2(veth792f90d2) entered forwarding state
May 12 23:51:19.047955 systemd-networkd[1237]: veth792f90d2: Gained carrier
May 12 23:51:19.049619 containerd[1559]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"}
May 12 23:51:19.049619 containerd[1559]: delegateAdd: netconf sent to delegate plugin:
May 12 23:51:19.073443 containerd[1559]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-12T23:51:19.073352970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 12 23:51:19.073602 containerd[1559]: time="2025-05-12T23:51:19.073406970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 12 23:51:19.073602 containerd[1559]: time="2025-05-12T23:51:19.073484290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 12 23:51:19.074122 containerd[1559]: time="2025-05-12T23:51:19.073972332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 12 23:51:19.086883 systemd[1]: run-containerd-runc-k8s.io-c95b16d33247ec95006f207b5cfb1ca9db02aa6b79f8014d06f03843e44e2d47-runc.VmuNGp.mount: Deactivated successfully.
May 12 23:51:19.089749 kubelet[2745]: E0512 23:51:19.089723 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:19.098057 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 12 23:51:19.114678 containerd[1559]: time="2025-05-12T23:51:19.114637016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5fwrf,Uid:fece58fb-a797-4f1c-8141-c4ef824790ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c95b16d33247ec95006f207b5cfb1ca9db02aa6b79f8014d06f03843e44e2d47\""
May 12 23:51:19.115348 kubelet[2745]: E0512 23:51:19.115326 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:19.116996 containerd[1559]: time="2025-05-12T23:51:19.116966023Z" level=info msg="CreateContainer within sandbox \"c95b16d33247ec95006f207b5cfb1ca9db02aa6b79f8014d06f03843e44e2d47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 12 23:51:19.180346 containerd[1559]: time="2025-05-12T23:51:19.180293696Z" level=info msg="CreateContainer within sandbox \"c95b16d33247ec95006f207b5cfb1ca9db02aa6b79f8014d06f03843e44e2d47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0eac79b3d6d8749bc7b6c21c3b2b845236b339743bd5c54f6e70faeae7bbc1f\""
May 12 23:51:19.180794 containerd[1559]: time="2025-05-12T23:51:19.180759978Z" level=info msg="StartContainer for \"a0eac79b3d6d8749bc7b6c21c3b2b845236b339743bd5c54f6e70faeae7bbc1f\""
May 12 23:51:19.225789 containerd[1559]: time="2025-05-12T23:51:19.225704435Z" level=info msg="StartContainer for \"a0eac79b3d6d8749bc7b6c21c3b2b845236b339743bd5c54f6e70faeae7bbc1f\" returns successfully"
May 12 23:51:20.093105 kubelet[2745]: E0512 23:51:20.092741 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:20.104025 kubelet[2745]: I0512 23:51:20.103948 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5fwrf" podStartSLOduration=19.103910738 podStartE2EDuration="19.103910738s" podCreationTimestamp="2025-05-12 23:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:51:20.103661058 +0000 UTC m=+34.194119083" watchObservedRunningTime="2025-05-12 23:51:20.103910738 +0000 UTC m=+34.194368763"
May 12 23:51:20.173277 systemd-networkd[1237]: veth792f90d2: Gained IPv6LL
May 12 23:51:21.094557 kubelet[2745]: E0512 23:51:21.094528 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:21.530449 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:52862.service - OpenSSH per-connection server daemon (10.0.0.1:52862).
May 12 23:51:21.570098 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 52862 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:21.571590 sshd-session[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:21.575475 systemd-logind[1539]: New session 8 of user core.
May 12 23:51:21.584437 systemd[1]: Started session-8.scope - Session 8 of User core.
May 12 23:51:21.698257 sshd[3709]: Connection closed by 10.0.0.1 port 52862
May 12 23:51:21.698660 sshd-session[3706]: pam_unix(sshd:session): session closed for user core
May 12 23:51:21.706462 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:52872.service - OpenSSH per-connection server daemon (10.0.0.1:52872).
May 12 23:51:21.706886 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:52862.service: Deactivated successfully.
May 12 23:51:21.709232 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit.
May 12 23:51:21.709375 systemd[1]: session-8.scope: Deactivated successfully.
May 12 23:51:21.711481 systemd-logind[1539]: Removed session 8.
May 12 23:51:21.744786 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 52872 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:21.746304 sshd-session[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:21.751353 systemd-logind[1539]: New session 9 of user core.
May 12 23:51:21.763536 systemd[1]: Started session-9.scope - Session 9 of User core.
May 12 23:51:21.915017 sshd[3726]: Connection closed by 10.0.0.1 port 52872
May 12 23:51:21.920256 sshd-session[3720]: pam_unix(sshd:session): session closed for user core
May 12 23:51:21.938944 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:52872.service: Deactivated successfully.
May 12 23:51:21.941758 systemd[1]: session-9.scope: Deactivated successfully.
May 12 23:51:21.948263 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit.
May 12 23:51:21.956511 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:52888.service - OpenSSH per-connection server daemon (10.0.0.1:52888).
May 12 23:51:21.958757 systemd-logind[1539]: Removed session 9.
May 12 23:51:21.993905 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 52888 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:21.995167 sshd-session[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:21.999255 systemd-logind[1539]: New session 10 of user core.
May 12 23:51:22.011501 systemd[1]: Started session-10.scope - Session 10 of User core.
May 12 23:51:22.096770 kubelet[2745]: E0512 23:51:22.096693 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 23:51:22.124019 sshd[3739]: Connection closed by 10.0.0.1 port 52888
May 12 23:51:22.124395 sshd-session[3736]: pam_unix(sshd:session): session closed for user core
May 12 23:51:22.127761 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:52888.service: Deactivated successfully.
May 12 23:51:22.130532 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit.
May 12 23:51:22.130831 systemd[1]: session-10.scope: Deactivated successfully.
May 12 23:51:22.132104 systemd-logind[1539]: Removed session 10.
May 12 23:51:27.137405 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:41338.service - OpenSSH per-connection server daemon (10.0.0.1:41338).
May 12 23:51:27.177079 sshd[3772]: Accepted publickey for core from 10.0.0.1 port 41338 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:27.177031 sshd-session[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:27.183003 systemd-logind[1539]: New session 11 of user core.
May 12 23:51:27.198474 systemd[1]: Started session-11.scope - Session 11 of User core.
May 12 23:51:27.305915 sshd[3775]: Connection closed by 10.0.0.1 port 41338
May 12 23:51:27.306342 sshd-session[3772]: pam_unix(sshd:session): session closed for user core
May 12 23:51:27.320416 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:41342.service - OpenSSH per-connection server daemon (10.0.0.1:41342).
May 12 23:51:27.320899 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:41338.service: Deactivated successfully.
May 12 23:51:27.322371 systemd[1]: session-11.scope: Deactivated successfully.
May 12 23:51:27.324777 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit.
May 12 23:51:27.325879 systemd-logind[1539]: Removed session 11.
May 12 23:51:27.365169 sshd[3784]: Accepted publickey for core from 10.0.0.1 port 41342 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:27.366312 sshd-session[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:27.370716 systemd-logind[1539]: New session 12 of user core.
May 12 23:51:27.377531 systemd[1]: Started session-12.scope - Session 12 of User core.
May 12 23:51:27.588715 sshd[3790]: Connection closed by 10.0.0.1 port 41342
May 12 23:51:27.589047 sshd-session[3784]: pam_unix(sshd:session): session closed for user core
May 12 23:51:27.592818 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit.
May 12 23:51:27.593611 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:41342.service: Deactivated successfully.
May 12 23:51:27.597907 systemd[1]: session-12.scope: Deactivated successfully.
May 12 23:51:27.625479 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:41356.service - OpenSSH per-connection server daemon (10.0.0.1:41356).
May 12 23:51:27.628702 systemd-logind[1539]: Removed session 12.
May 12 23:51:27.663549 sshd[3800]: Accepted publickey for core from 10.0.0.1 port 41356 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:27.664697 sshd-session[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:27.668933 systemd-logind[1539]: New session 13 of user core.
May 12 23:51:27.678444 systemd[1]: Started session-13.scope - Session 13 of User core.
May 12 23:51:28.878638 sshd[3803]: Connection closed by 10.0.0.1 port 41356
May 12 23:51:28.880158 sshd-session[3800]: pam_unix(sshd:session): session closed for user core
May 12 23:51:28.892493 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:41362.service - OpenSSH per-connection server daemon (10.0.0.1:41362).
May 12 23:51:28.894720 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:41356.service: Deactivated successfully.
May 12 23:51:28.898886 systemd[1]: session-13.scope: Deactivated successfully.
May 12 23:51:28.900946 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit.
May 12 23:51:28.904641 systemd-logind[1539]: Removed session 13.
May 12 23:51:28.933776 sshd[3841]: Accepted publickey for core from 10.0.0.1 port 41362 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:28.935016 sshd-session[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:28.939029 systemd-logind[1539]: New session 14 of user core.
May 12 23:51:28.949495 systemd[1]: Started session-14.scope - Session 14 of User core.
May 12 23:51:29.151057 sshd[3847]: Connection closed by 10.0.0.1 port 41362
May 12 23:51:29.151361 sshd-session[3841]: pam_unix(sshd:session): session closed for user core
May 12 23:51:29.160859 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:41378.service - OpenSSH per-connection server daemon (10.0.0.1:41378).
May 12 23:51:29.161369 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:41362.service: Deactivated successfully.
May 12 23:51:29.162944 systemd[1]: session-14.scope: Deactivated successfully.
May 12 23:51:29.167063 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit.
May 12 23:51:29.169074 systemd-logind[1539]: Removed session 14.
May 12 23:51:29.196412 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 41378 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:29.197653 sshd-session[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:29.201854 systemd-logind[1539]: New session 15 of user core.
May 12 23:51:29.207457 systemd[1]: Started session-15.scope - Session 15 of User core.
May 12 23:51:29.316959 sshd[3861]: Connection closed by 10.0.0.1 port 41378
May 12 23:51:29.317391 sshd-session[3855]: pam_unix(sshd:session): session closed for user core
May 12 23:51:29.320736 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:41378.service: Deactivated successfully.
May 12 23:51:29.322619 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit.
May 12 23:51:29.322682 systemd[1]: session-15.scope: Deactivated successfully.
May 12 23:51:29.323652 systemd-logind[1539]: Removed session 15.
May 12 23:51:34.332508 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:60538.service - OpenSSH per-connection server daemon (10.0.0.1:60538).
May 12 23:51:34.369606 sshd[3900]: Accepted publickey for core from 10.0.0.1 port 60538 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:34.370916 sshd-session[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:34.374748 systemd-logind[1539]: New session 16 of user core.
May 12 23:51:34.385541 systemd[1]: Started session-16.scope - Session 16 of User core.
May 12 23:51:34.496582 sshd[3903]: Connection closed by 10.0.0.1 port 60538
May 12 23:51:34.496909 sshd-session[3900]: pam_unix(sshd:session): session closed for user core
May 12 23:51:34.499700 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:60538.service: Deactivated successfully.
May 12 23:51:34.502538 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit.
May 12 23:51:34.502741 systemd[1]: session-16.scope: Deactivated successfully.
May 12 23:51:34.504062 systemd-logind[1539]: Removed session 16.
May 12 23:51:39.511727 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:60558.service - OpenSSH per-connection server daemon (10.0.0.1:60558).
May 12 23:51:39.549615 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 60558 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:39.550918 sshd-session[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:39.554820 systemd-logind[1539]: New session 17 of user core.
May 12 23:51:39.570493 systemd[1]: Started session-17.scope - Session 17 of User core.
May 12 23:51:39.681571 sshd[3939]: Connection closed by 10.0.0.1 port 60558
May 12 23:51:39.681927 sshd-session[3936]: pam_unix(sshd:session): session closed for user core
May 12 23:51:39.684681 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit.
May 12 23:51:39.684925 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:60558.service: Deactivated successfully.
May 12 23:51:39.688079 systemd[1]: session-17.scope: Deactivated successfully.
May 12 23:51:39.688886 systemd-logind[1539]: Removed session 17.
May 12 23:51:44.700580 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:33524.service - OpenSSH per-connection server daemon (10.0.0.1:33524).
May 12 23:51:44.739237 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 33524 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU
May 12 23:51:44.739904 sshd-session[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 23:51:44.744786 systemd-logind[1539]: New session 18 of user core.
May 12 23:51:44.750504 systemd[1]: Started session-18.scope - Session 18 of User core.
May 12 23:51:44.874324 sshd[3976]: Connection closed by 10.0.0.1 port 33524
May 12 23:51:44.874705 sshd-session[3973]: pam_unix(sshd:session): session closed for user core
May 12 23:51:44.878126 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:33524.service: Deactivated successfully.
May 12 23:51:44.881228 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit.
May 12 23:51:44.881718 systemd[1]: session-18.scope: Deactivated successfully.
May 12 23:51:44.883957 systemd-logind[1539]: Removed session 18.