May 9 23:54:28.945425 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 9 23:54:28.945447 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri May 9 22:24:49 -00 2025
May 9 23:54:28.945457 kernel: KASLR enabled
May 9 23:54:28.945463 kernel: efi: EFI v2.7 by EDK II
May 9 23:54:28.945468 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 9 23:54:28.945474 kernel: random: crng init done
May 9 23:54:28.945480 kernel: secureboot: Secure boot disabled
May 9 23:54:28.945486 kernel: ACPI: Early table checksum verification disabled
May 9 23:54:28.945492 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 9 23:54:28.945500 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 9 23:54:28.945506 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:54:28.945512 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:54:28.945518 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:54:28.945524 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:54:28.945531 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:54:28.945539 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:54:28.945545 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:54:28.945551 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:54:28.945576 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:54:28.945583 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 9 23:54:28.945589 kernel: NUMA: Failed to initialise from firmware
May 9 23:54:28.945595 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 9 23:54:28.945602 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 9 23:54:28.945608 kernel: Zone ranges:
May 9 23:54:28.945614 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 9 23:54:28.945622 kernel: DMA32 empty
May 9 23:54:28.945628 kernel: Normal empty
May 9 23:54:28.945634 kernel: Movable zone start for each node
May 9 23:54:28.945640 kernel: Early memory node ranges
May 9 23:54:28.945646 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 9 23:54:28.945652 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 9 23:54:28.945658 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 9 23:54:28.945666 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 9 23:54:28.945676 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 9 23:54:28.945682 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 9 23:54:28.945688 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 9 23:54:28.945694 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 9 23:54:28.945702 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 9 23:54:28.945708 kernel: psci: probing for conduit method from ACPI.
May 9 23:54:28.945714 kernel: psci: PSCIv1.1 detected in firmware.
May 9 23:54:28.945723 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 23:54:28.945730 kernel: psci: Trusted OS migration not required
May 9 23:54:28.945736 kernel: psci: SMC Calling Convention v1.1
May 9 23:54:28.945744 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 9 23:54:28.945751 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 23:54:28.945758 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 23:54:28.945765 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 9 23:54:28.945772 kernel: Detected PIPT I-cache on CPU0
May 9 23:54:28.945778 kernel: CPU features: detected: GIC system register CPU interface
May 9 23:54:28.945785 kernel: CPU features: detected: Hardware dirty bit management
May 9 23:54:28.945791 kernel: CPU features: detected: Spectre-v4
May 9 23:54:28.945798 kernel: CPU features: detected: Spectre-BHB
May 9 23:54:28.945804 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 9 23:54:28.945812 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 9 23:54:28.945819 kernel: CPU features: detected: ARM erratum 1418040
May 9 23:54:28.945826 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 9 23:54:28.945833 kernel: alternatives: applying boot alternatives
May 9 23:54:28.945841 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4
May 9 23:54:28.945848 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 23:54:28.945855 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 23:54:28.945862 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 23:54:28.945869 kernel: Fallback order for Node 0: 0
May 9 23:54:28.945875 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 9 23:54:28.945882 kernel: Policy zone: DMA
May 9 23:54:28.945904 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 23:54:28.945911 kernel: software IO TLB: area num 4.
May 9 23:54:28.945918 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 9 23:54:28.945926 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
May 9 23:54:28.945932 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 23:54:28.945939 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 23:54:28.945946 kernel: rcu: RCU event tracing is enabled.
May 9 23:54:28.945953 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 23:54:28.945959 kernel: Trampoline variant of Tasks RCU enabled.
May 9 23:54:28.945966 kernel: Tracing variant of Tasks RCU enabled.
May 9 23:54:28.945973 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 23:54:28.945998 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 23:54:28.946006 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 23:54:28.946013 kernel: GICv3: 256 SPIs implemented
May 9 23:54:28.946020 kernel: GICv3: 0 Extended SPIs implemented
May 9 23:54:28.946027 kernel: Root IRQ handler: gic_handle_irq
May 9 23:54:28.946033 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 9 23:54:28.946040 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 9 23:54:28.946047 kernel: ITS [mem 0x08080000-0x0809ffff]
May 9 23:54:28.946054 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 9 23:54:28.946061 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 9 23:54:28.946067 kernel: GICv3: using LPI property table @0x00000000400f0000
May 9 23:54:28.946074 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 9 23:54:28.946082 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 23:54:28.946089 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:54:28.946096 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 9 23:54:28.946103 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 9 23:54:28.946110 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 9 23:54:28.946117 kernel: arm-pv: using stolen time PV
May 9 23:54:28.946123 kernel: Console: colour dummy device 80x25
May 9 23:54:28.946130 kernel: ACPI: Core revision 20230628
May 9 23:54:28.946137 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 9 23:54:28.946144 kernel: pid_max: default: 32768 minimum: 301
May 9 23:54:28.946153 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 23:54:28.946160 kernel: landlock: Up and running.
May 9 23:54:28.946166 kernel: SELinux: Initializing.
May 9 23:54:28.946173 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:54:28.946180 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:54:28.946187 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 9 23:54:28.946194 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 23:54:28.946201 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 23:54:28.946208 kernel: rcu: Hierarchical SRCU implementation.
May 9 23:54:28.946216 kernel: rcu: Max phase no-delay instances is 400.
May 9 23:54:28.946223 kernel: Platform MSI: ITS@0x8080000 domain created
May 9 23:54:28.946229 kernel: PCI/MSI: ITS@0x8080000 domain created
May 9 23:54:28.946243 kernel: Remapping and enabling EFI services.
May 9 23:54:28.946250 kernel: smp: Bringing up secondary CPUs ...
May 9 23:54:28.946257 kernel: Detected PIPT I-cache on CPU1
May 9 23:54:28.946264 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 9 23:54:28.946270 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 9 23:54:28.946277 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:54:28.946284 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 9 23:54:28.946304 kernel: Detected PIPT I-cache on CPU2
May 9 23:54:28.946311 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 9 23:54:28.946323 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 9 23:54:28.946332 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:54:28.946339 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 9 23:54:28.946346 kernel: Detected PIPT I-cache on CPU3
May 9 23:54:28.946353 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 9 23:54:28.946360 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 9 23:54:28.946367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:54:28.946375 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 9 23:54:28.946383 kernel: smp: Brought up 1 node, 4 CPUs
May 9 23:54:28.946390 kernel: SMP: Total of 4 processors activated.
May 9 23:54:28.946397 kernel: CPU features: detected: 32-bit EL0 Support
May 9 23:54:28.946404 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 9 23:54:28.946412 kernel: CPU features: detected: Common not Private translations
May 9 23:54:28.946419 kernel: CPU features: detected: CRC32 instructions
May 9 23:54:28.946426 kernel: CPU features: detected: Enhanced Virtualization Traps
May 9 23:54:28.946434 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 9 23:54:28.946441 kernel: CPU features: detected: LSE atomic instructions
May 9 23:54:28.946448 kernel: CPU features: detected: Privileged Access Never
May 9 23:54:28.946455 kernel: CPU features: detected: RAS Extension Support
May 9 23:54:28.946463 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 9 23:54:28.946470 kernel: CPU: All CPU(s) started at EL1
May 9 23:54:28.946477 kernel: alternatives: applying system-wide alternatives
May 9 23:54:28.946484 kernel: devtmpfs: initialized
May 9 23:54:28.946492 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 23:54:28.946500 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 23:54:28.946508 kernel: pinctrl core: initialized pinctrl subsystem
May 9 23:54:28.946515 kernel: SMBIOS 3.0.0 present.
May 9 23:54:28.946522 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 9 23:54:28.946529 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:54:28.946536 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:54:28.946543 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:54:28.946550 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:54:28.946557 kernel: audit: initializing netlink subsys (disabled)
May 9 23:54:28.946566 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 9 23:54:28.946573 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:54:28.946580 kernel: cpuidle: using governor menu
May 9 23:54:28.946587 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:54:28.946594 kernel: ASID allocator initialised with 32768 entries
May 9 23:54:28.946602 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:54:28.946609 kernel: Serial: AMBA PL011 UART driver
May 9 23:54:28.946616 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 9 23:54:28.946623 kernel: Modules: 0 pages in range for non-PLT usage
May 9 23:54:28.946631 kernel: Modules: 508944 pages in range for PLT usage
May 9 23:54:28.946639 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:54:28.946646 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:54:28.946653 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:54:28.946660 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:54:28.946667 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:54:28.946674 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:54:28.946681 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:54:28.946688 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:54:28.946697 kernel: ACPI: Added _OSI(Module Device)
May 9 23:54:28.946704 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:54:28.946711 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:54:28.946718 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:54:28.946725 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:54:28.946732 kernel: ACPI: Interpreter enabled
May 9 23:54:28.946739 kernel: ACPI: Using GIC for interrupt routing
May 9 23:54:28.946746 kernel: ACPI: MCFG table detected, 1 entries
May 9 23:54:28.946754 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 9 23:54:28.946762 kernel: printk: console [ttyAMA0] enabled
May 9 23:54:28.946770 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 23:54:28.946913 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 23:54:28.947000 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 23:54:28.947068 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 23:54:28.947135 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 9 23:54:28.947199 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 9 23:54:28.947211 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 9 23:54:28.947218 kernel: PCI host bridge to bus 0000:00
May 9 23:54:28.947301 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 9 23:54:28.947365 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 23:54:28.947423 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 9 23:54:28.947479 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 23:54:28.947559 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 9 23:54:28.947648 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 9 23:54:28.947726 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 9 23:54:28.947795 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 9 23:54:28.947861 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 23:54:28.947925 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 23:54:28.948000 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 9 23:54:28.948072 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 9 23:54:28.948134 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 9 23:54:28.948192 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 23:54:28.948256 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 9 23:54:28.948265 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 23:54:28.948273 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 23:54:28.948280 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 23:54:28.948287 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 23:54:28.948294 kernel: iommu: Default domain type: Translated
May 9 23:54:28.948304 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:54:28.948311 kernel: efivars: Registered efivars operations
May 9 23:54:28.948318 kernel: vgaarb: loaded
May 9 23:54:28.948325 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:54:28.948332 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:54:28.948339 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:54:28.948346 kernel: pnp: PnP ACPI init
May 9 23:54:28.948417 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 9 23:54:28.948429 kernel: pnp: PnP ACPI: found 1 devices
May 9 23:54:28.948436 kernel: NET: Registered PF_INET protocol family
May 9 23:54:28.948444 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:54:28.948451 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:54:28.948458 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:54:28.948466 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:54:28.948473 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:54:28.948480 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:54:28.948488 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:54:28.948496 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:54:28.948503 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:54:28.948511 kernel: PCI: CLS 0 bytes, default 64
May 9 23:54:28.948518 kernel: kvm [1]: HYP mode not available
May 9 23:54:28.948525 kernel: Initialise system trusted keyrings
May 9 23:54:28.948532 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:54:28.948539 kernel: Key type asymmetric registered
May 9 23:54:28.948546 kernel: Asymmetric key parser 'x509' registered
May 9 23:54:28.948553 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:54:28.948562 kernel: io scheduler mq-deadline registered
May 9 23:54:28.948569 kernel: io scheduler kyber registered
May 9 23:54:28.948577 kernel: io scheduler bfq registered
May 9 23:54:28.948584 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 23:54:28.948591 kernel: ACPI: button: Power Button [PWRB]
May 9 23:54:28.948599 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 23:54:28.948664 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 9 23:54:28.948674 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:54:28.948681 kernel: thunder_xcv, ver 1.0
May 9 23:54:28.948690 kernel: thunder_bgx, ver 1.0
May 9 23:54:28.948697 kernel: nicpf, ver 1.0
May 9 23:54:28.948704 kernel: nicvf, ver 1.0
May 9 23:54:28.948777 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:54:28.948838 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:54:28 UTC (1746834868)
May 9 23:54:28.948848 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:54:28.948855 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 9 23:54:28.948863 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:54:28.948872 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:54:28.948879 kernel: NET: Registered PF_INET6 protocol family
May 9 23:54:28.948886 kernel: Segment Routing with IPv6
May 9 23:54:28.948893 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:54:28.948901 kernel: NET: Registered PF_PACKET protocol family
May 9 23:54:28.948909 kernel: Key type dns_resolver registered
May 9 23:54:28.948916 kernel: registered taskstats version 1
May 9 23:54:28.948923 kernel: Loading compiled-in X.509 certificates
May 9 23:54:28.948930 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce481d22c53070871912748985d4044dfd149966'
May 9 23:54:28.948939 kernel: Key type .fscrypt registered
May 9 23:54:28.948946 kernel: Key type fscrypt-provisioning registered
May 9 23:54:28.948953 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:54:28.948960 kernel: ima: Allocated hash algorithm: sha1
May 9 23:54:28.948967 kernel: ima: No architecture policies found
May 9 23:54:28.948984 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:54:28.948992 kernel: clk: Disabling unused clocks
May 9 23:54:28.948999 kernel: Freeing unused kernel memory: 39744K
May 9 23:54:28.949017 kernel: Run /init as init process
May 9 23:54:28.949027 kernel: with arguments:
May 9 23:54:28.949035 kernel: /init
May 9 23:54:28.949042 kernel: with environment:
May 9 23:54:28.949051 kernel: HOME=/
May 9 23:54:28.949073 kernel: TERM=linux
May 9 23:54:28.949089 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:54:28.949099 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:54:28.949109 systemd[1]: Detected virtualization kvm.
May 9 23:54:28.949119 systemd[1]: Detected architecture arm64.
May 9 23:54:28.949127 systemd[1]: Running in initrd.
May 9 23:54:28.949134 systemd[1]: No hostname configured, using default hostname.
May 9 23:54:28.949144 systemd[1]: Hostname set to .
May 9 23:54:28.949152 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:54:28.949160 systemd[1]: Queued start job for default target initrd.target.
May 9 23:54:28.949168 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:54:28.949176 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:54:28.949192 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 23:54:28.949202 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:54:28.949214 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 23:54:28.949222 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 23:54:28.949237 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 23:54:28.949246 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 23:54:28.949254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:54:28.949273 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:54:28.949280 systemd[1]: Reached target paths.target - Path Units.
May 9 23:54:28.949288 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:54:28.949296 systemd[1]: Reached target swap.target - Swaps.
May 9 23:54:28.949304 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:54:28.949312 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:54:28.949319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:54:28.949327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:54:28.949336 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 23:54:28.949345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:54:28.949352 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:54:28.949360 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:54:28.949368 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:54:28.949375 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 23:54:28.949383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:54:28.949391 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 23:54:28.949399 systemd[1]: Starting systemd-fsck-usr.service...
May 9 23:54:28.949408 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:54:28.949416 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:54:28.949424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:54:28.949432 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 23:54:28.949440 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:54:28.949447 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:54:28.949477 systemd-journald[239]: Collecting audit messages is disabled.
May 9 23:54:28.949496 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:54:28.949506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:54:28.949514 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:54:28.949522 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:54:28.949530 systemd-journald[239]: Journal started
May 9 23:54:28.949549 systemd-journald[239]: Runtime Journal (/run/log/journal/23fc9b1ba7cd4cf9afeef0e7dc4b92ab) is 5.9M, max 47.3M, 41.4M free.
May 9 23:54:28.934291 systemd-modules-load[240]: Inserted module 'overlay'
May 9 23:54:28.954875 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 9 23:54:28.956681 kernel: Bridge firewalling registered
May 9 23:54:28.956702 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:54:28.957994 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:54:28.966167 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:54:28.970115 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:54:28.971726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:54:28.975513 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:54:28.978669 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:54:28.983335 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:54:28.986351 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:54:29.002169 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:54:29.004738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:54:29.008304 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:54:29.021645 dracut-cmdline[280]: dracut-dracut-053
May 9 23:54:29.024520 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4
May 9 23:54:29.032888 systemd-resolved[276]: Positive Trust Anchors:
May 9 23:54:29.032965 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:54:29.033069 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:54:29.037727 systemd-resolved[276]: Defaulting to hostname 'linux'.
May 9 23:54:29.041703 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:54:29.044185 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:54:29.097012 kernel: SCSI subsystem initialized
May 9 23:54:29.101996 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:54:29.110010 kernel: iscsi: registered transport (tcp)
May 9 23:54:29.123006 kernel: iscsi: registered transport (qla4xxx)
May 9 23:54:29.123037 kernel: QLogic iSCSI HBA Driver
May 9 23:54:29.166426 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:54:29.178146 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:54:29.196327 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:54:29.196393 kernel: device-mapper: uevent: version 1.0.3
May 9 23:54:29.198019 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 23:54:29.244023 kernel: raid6: neonx8 gen() 15739 MB/s
May 9 23:54:29.261008 kernel: raid6: neonx4 gen() 15577 MB/s
May 9 23:54:29.278002 kernel: raid6: neonx2 gen() 13196 MB/s
May 9 23:54:29.294999 kernel: raid6: neonx1 gen() 10464 MB/s
May 9 23:54:29.312003 kernel: raid6: int64x8 gen() 6905 MB/s
May 9 23:54:29.329001 kernel: raid6: int64x4 gen() 7313 MB/s
May 9 23:54:29.346001 kernel: raid6: int64x2 gen() 6121 MB/s
May 9 23:54:29.363176 kernel: raid6: int64x1 gen() 5020 MB/s
May 9 23:54:29.363192 kernel: raid6: using algorithm neonx8 gen() 15739 MB/s
May 9 23:54:29.381105 kernel: raid6: .... xor() 11862 MB/s, rmw enabled
May 9 23:54:29.381126 kernel: raid6: using neon recovery algorithm
May 9 23:54:29.385999 kernel: xor: measuring software checksum speed
May 9 23:54:29.387360 kernel: 8regs : 17267 MB/sec
May 9 23:54:29.387373 kernel: 32regs : 19594 MB/sec
May 9 23:54:29.388685 kernel: arm64_neon : 26998 MB/sec
May 9 23:54:29.388703 kernel: xor: using function: arm64_neon (26998 MB/sec)
May 9 23:54:29.440009 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 23:54:29.450575 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:54:29.462174 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:54:29.473265 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 9 23:54:29.476461 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:54:29.483285 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 23:54:29.494566 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
May 9 23:54:29.520740 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:54:29.529153 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:54:29.567595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:54:29.577155 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 23:54:29.590568 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 23:54:29.591929 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:54:29.594069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:54:29.596222 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:54:29.605110 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 23:54:29.614400 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:54:29.620699 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 9 23:54:29.624475 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 23:54:29.628515 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 23:54:29.628545 kernel: GPT:9289727 != 19775487
May 9 23:54:29.628555 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 23:54:29.629150 kernel: GPT:9289727 != 19775487
May 9 23:54:29.629180 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 23:54:29.632002 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 23:54:29.634327 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:54:29.634453 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:54:29.637836 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:54:29.639004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:54:29.639157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:54:29.641631 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:54:29.650287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:54:29.655011 kernel: BTRFS: device fsid 278061fd-7ea0-499f-a3bc-343431c2d8fa devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (514)
May 9 23:54:29.661287 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 23:54:29.664710 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (521)
May 9 23:54:29.670020 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:54:29.674308 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 23:54:29.675589 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 23:54:29.682096 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 23:54:29.686708 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 23:54:29.696136 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 23:54:29.701163 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:54:29.703728 disk-uuid[552]: Primary Header is updated.
May 9 23:54:29.703728 disk-uuid[552]: Secondary Entries is updated.
May 9 23:54:29.703728 disk-uuid[552]: Secondary Header is updated.
May 9 23:54:29.708003 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 23:54:29.727273 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:54:30.727245 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 23:54:30.730239 disk-uuid[553]: The operation has completed successfully.
May 9 23:54:30.756738 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 23:54:30.756840 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 23:54:30.771217 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 23:54:30.774706 sh[573]: Success
May 9 23:54:30.788020 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 23:54:30.818353 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 23:54:30.834562 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 23:54:30.836763 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 23:54:30.847667 kernel: BTRFS info (device dm-0): first mount of filesystem 278061fd-7ea0-499f-a3bc-343431c2d8fa
May 9 23:54:30.847718 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 23:54:30.847729 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 23:54:30.849764 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 23:54:30.849797 kernel: BTRFS info (device dm-0): using free space tree
May 9 23:54:30.855423 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 23:54:30.856543 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 23:54:30.870162 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 23:54:30.872027 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 23:54:30.879481 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:54:30.879529 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:54:30.879548 kernel: BTRFS info (device vda6): using free space tree
May 9 23:54:30.882006 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 23:54:30.892354 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 23:54:30.894073 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:54:30.899397 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 23:54:30.906246 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 23:54:30.987519 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:54:30.998205 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:54:31.026947 systemd-networkd[760]: lo: Link UP
May 9 23:54:31.026963 systemd-networkd[760]: lo: Gained carrier
May 9 23:54:31.027772 systemd-networkd[760]: Enumeration completed
May 9 23:54:31.027859 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:54:31.028192 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:54:31.028195 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:54:31.028992 systemd-networkd[760]: eth0: Link UP
May 9 23:54:31.028995 systemd-networkd[760]: eth0: Gained carrier
May 9 23:54:31.029003 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:54:31.031007 systemd[1]: Reached target network.target - Network.
May 9 23:54:31.040206 ignition[658]: Ignition 2.20.0
May 9 23:54:31.040213 ignition[658]: Stage: fetch-offline
May 9 23:54:31.040257 ignition[658]: no configs at "/usr/lib/ignition/base.d"
May 9 23:54:31.040265 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:54:31.040419 ignition[658]: parsed url from cmdline: ""
May 9 23:54:31.045031 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 23:54:31.040422 ignition[658]: no config URL provided
May 9 23:54:31.040427 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:54:31.040434 ignition[658]: no config at "/usr/lib/ignition/user.ign"
May 9 23:54:31.040465 ignition[658]: op(1): [started] loading QEMU firmware config module
May 9 23:54:31.040471 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 23:54:31.046884 ignition[658]: op(1): [finished] loading QEMU firmware config module
May 9 23:54:31.070257 ignition[658]: parsing config with SHA512: d008b789484c9f2de1dcc7c47b83ac9db453209358bd4cebcd618e947f9d4e200fc1da12d2b02cf15ddee3ccf37ae108a29d309778462805b88fde69babf2758
May 9 23:54:31.075610 unknown[658]: fetched base config from "system"
May 9 23:54:31.075621 unknown[658]: fetched user config from "qemu"
May 9 23:54:31.076063 ignition[658]: fetch-offline: fetch-offline passed
May 9 23:54:31.078050 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:54:31.076134 ignition[658]: Ignition finished successfully
May 9 23:54:31.079394 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 23:54:31.089160 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 23:54:31.099788 ignition[770]: Ignition 2.20.0
May 9 23:54:31.099798 ignition[770]: Stage: kargs
May 9 23:54:31.099962 ignition[770]: no configs at "/usr/lib/ignition/base.d"
May 9 23:54:31.099972 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:54:31.100848 ignition[770]: kargs: kargs passed
May 9 23:54:31.100890 ignition[770]: Ignition finished successfully
May 9 23:54:31.105030 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 23:54:31.113169 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 23:54:31.122649 ignition[778]: Ignition 2.20.0
May 9 23:54:31.122658 ignition[778]: Stage: disks
May 9 23:54:31.122810 ignition[778]: no configs at "/usr/lib/ignition/base.d"
May 9 23:54:31.122819 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:54:31.123712 ignition[778]: disks: disks passed
May 9 23:54:31.126033 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 23:54:31.123752 ignition[778]: Ignition finished successfully
May 9 23:54:31.127869 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 23:54:31.129253 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:54:31.131174 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:54:31.132713 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:54:31.134763 systemd[1]: Reached target basic.target - Basic System.
May 9 23:54:31.145127 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 23:54:31.155121 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 23:54:31.163856 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 23:54:31.166744 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 23:54:31.219924 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 23:54:31.221522 kernel: EXT4-fs (vda9): mounted filesystem caef9e74-1f21-4595-8586-7560f5103527 r/w with ordered data mode. Quota mode: none.
May 9 23:54:31.221308 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 23:54:31.234080 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:54:31.235891 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 23:54:31.237686 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 23:54:31.242195 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
May 9 23:54:31.237732 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 23:54:31.237756 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:54:31.245137 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 23:54:31.249912 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:54:31.249934 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:54:31.249945 kernel: BTRFS info (device vda6): using free space tree
May 9 23:54:31.249844 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 23:54:31.252994 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 23:54:31.254387 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:54:31.294123 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
May 9 23:54:31.298348 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
May 9 23:54:31.302388 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
May 9 23:54:31.306540 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 23:54:31.376189 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 23:54:31.384126 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:54:31.386576 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:54:31.392009 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:54:31.410318 ignition[911]: INFO : Ignition 2.20.0
May 9 23:54:31.410318 ignition[911]: INFO : Stage: mount
May 9 23:54:31.412139 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:54:31.412139 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:54:31.412139 ignition[911]: INFO : mount: mount passed
May 9 23:54:31.412139 ignition[911]: INFO : Ignition finished successfully
May 9 23:54:31.413432 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:54:31.417393 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:54:31.428108 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:54:31.846312 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:54:31.856133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:54:31.862751 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
May 9 23:54:31.862784 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:54:31.862795 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:54:31.864300 kernel: BTRFS info (device vda6): using free space tree
May 9 23:54:31.866998 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 23:54:31.867412 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:54:31.882799 ignition[943]: INFO : Ignition 2.20.0
May 9 23:54:31.882799 ignition[943]: INFO : Stage: files
May 9 23:54:31.884601 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:54:31.884601 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:54:31.884601 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:54:31.888321 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:54:31.888321 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:54:31.888321 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:54:31.888321 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:54:31.888321 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:54:31.888321 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 23:54:31.888321 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 23:54:31.888321 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:54:31.888321 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 9 23:54:31.886861 unknown[943]: wrote ssh authorized keys file for user: core
May 9 23:54:32.115949 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 23:54:32.291116 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:54:32.291116 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:54:32.294772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 9 23:54:32.597026 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 23:54:32.742143 systemd-networkd[760]: eth0: Gained IPv6LL
May 9 23:54:33.015148 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:54:33.015148 ignition[943]: INFO : files: op(c): [started] processing unit "containerd.service"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 9 23:54:33.018748 ignition[943]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
May 9 23:54:33.040620 ignition[943]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 23:54:33.043893 ignition[943]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 23:54:33.046312 ignition[943]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 23:54:33.046312 ignition[943]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
May 9 23:54:33.046312 ignition[943]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
May 9 23:54:33.046312 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:54:33.046312 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:54:33.046312 ignition[943]: INFO : files: files passed
May 9 23:54:33.046312 ignition[943]: INFO : Ignition finished successfully
May 9 23:54:33.047010 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:54:33.060169 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:54:33.063121 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:54:33.064386 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:54:33.064464 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:54:33.070606 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 23:54:33.074085 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:54:33.074085 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:54:33.077447 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:54:33.079516 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:54:33.080850 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 23:54:33.091131 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 23:54:33.108543 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 23:54:33.108640 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 23:54:33.110771 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 23:54:33.112650 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 23:54:33.114484 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 23:54:33.115230 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 23:54:33.131041 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:54:33.139108 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 23:54:33.146643 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 23:54:33.147895 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:54:33.149941 systemd[1]: Stopped target timers.target - Timer Units.
May 9 23:54:33.151743 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 23:54:33.151865 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:54:33.154358 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 23:54:33.156352 systemd[1]: Stopped target basic.target - Basic System.
May 9 23:54:33.157945 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 23:54:33.159691 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:54:33.161650 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 23:54:33.163686 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 23:54:33.165523 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:54:33.167454 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 23:54:33.169380 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 23:54:33.171076 systemd[1]: Stopped target swap.target - Swaps.
May 9 23:54:33.172609 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 23:54:33.172730 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:54:33.175024 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 23:54:33.177037 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:54:33.179113 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 23:54:33.179223 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:54:33.181226 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 23:54:33.181347 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 23:54:33.184145 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 23:54:33.184267 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:54:33.186284 systemd[1]: Stopped target paths.target - Path Units.
May 9 23:54:33.187820 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 23:54:33.187920 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:54:33.189874 systemd[1]: Stopped target slices.target - Slice Units.
May 9 23:54:33.191727 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 23:54:33.193319 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 23:54:33.193405 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:54:33.195070 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 23:54:33.195155 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:54:33.197274 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 23:54:33.197389 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:54:33.199122 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 23:54:33.199236 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 23:54:33.212255 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 23:54:33.214031 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 23:54:33.214178 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:54:33.217369 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 23:54:33.218231 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 23:54:33.218356 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:54:33.221252 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 23:54:33.221384 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:54:33.225851 ignition[998]: INFO : Ignition 2.20.0
May 9 23:54:33.225851 ignition[998]: INFO : Stage: umount
May 9 23:54:33.228860 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:54:33.228860 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:54:33.228860 ignition[998]: INFO : umount: umount passed
May 9 23:54:33.228860 ignition[998]: INFO : Ignition finished successfully
May 9 23:54:33.226777 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 23:54:33.226856 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 23:54:33.230180 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 23:54:33.230712 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 23:54:33.230789 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 23:54:33.232879 systemd[1]: Stopped target network.target - Network.
May 9 23:54:33.233923 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 23:54:33.234079 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 23:54:33.235660 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 23:54:33.235713 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 23:54:33.237365 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 23:54:33.237414 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 23:54:33.239447 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 23:54:33.239498 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 23:54:33.241350 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 23:54:33.243018 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 23:54:33.252595 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 23:54:33.252726 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 23:54:33.253013 systemd-networkd[760]: eth0: DHCPv6 lease lost
May 9 23:54:33.254874 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 23:54:33.257014 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 23:54:33.259591 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 23:54:33.259637 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:54:33.272086 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 23:54:33.272999 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 23:54:33.273111 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:54:33.275463 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 23:54:33.275511 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 23:54:33.277343 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 23:54:33.277394 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 23:54:33.279531 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 23:54:33.279580 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:54:33.281635 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:54:33.288233 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 23:54:33.288336 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 23:54:33.290854 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 23:54:33.290916 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 23:54:33.293589 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 23:54:33.293669 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 23:54:33.297417 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 23:54:33.297542 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:54:33.299202 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 23:54:33.299256 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 23:54:33.300908 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 23:54:33.300940 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:54:33.302685 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 23:54:33.302730 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:54:33.305687 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 23:54:33.305734 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 23:54:33.308468 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:54:33.308515 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:54:33.324143 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 23:54:33.325209 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 23:54:33.325286 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:54:33.327450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:54:33.327499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:54:33.329668 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 23:54:33.329789 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 23:54:33.332683 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 23:54:33.334844 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 23:54:33.344554 systemd[1]: Switching root.
May 9 23:54:33.375807 systemd-journald[239]: Journal stopped
May 9 23:54:34.118110 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
May 9 23:54:34.118166 kernel: SELinux: policy capability network_peer_controls=1
May 9 23:54:34.118178 kernel: SELinux: policy capability open_perms=1
May 9 23:54:34.118192 kernel: SELinux: policy capability extended_socket_class=1
May 9 23:54:34.118208 kernel: SELinux: policy capability always_check_network=0
May 9 23:54:34.118228 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 23:54:34.118242 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 23:54:34.118252 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 23:54:34.118261 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 23:54:34.118271 kernel: audit: type=1403 audit(1746834873.574:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 23:54:34.118282 systemd[1]: Successfully loaded SELinux policy in 32.585ms.
May 9 23:54:34.118295 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.696ms.
May 9 23:54:34.118306 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:54:34.118319 systemd[1]: Detected virtualization kvm.
May 9 23:54:34.118330 systemd[1]: Detected architecture arm64.
May 9 23:54:34.118340 systemd[1]: Detected first boot.
May 9 23:54:34.118354 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:54:34.118365 zram_generator::config[1064]: No configuration found.
May 9 23:54:34.118376 systemd[1]: Populated /etc with preset unit settings.
May 9 23:54:34.118387 systemd[1]: Queued start job for default target multi-user.target.
May 9 23:54:34.118397 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 23:54:34.118409 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 23:54:34.118420 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 23:54:34.118430 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 23:54:34.118441 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 23:54:34.118451 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 23:54:34.118462 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 23:54:34.118485 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 23:54:34.118496 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 23:54:34.118510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:54:34.118521 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:54:34.118532 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 23:54:34.118542 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 23:54:34.118553 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 23:54:34.118565 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:54:34.118575 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 9 23:54:34.118586 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:54:34.118596 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 23:54:34.118608 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:54:34.118620 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:54:34.118630 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:54:34.118640 systemd[1]: Reached target swap.target - Swaps.
May 9 23:54:34.118651 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 23:54:34.118662 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 23:54:34.118673 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:54:34.118683 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 23:54:34.118694 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:54:34.118707 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:54:34.118718 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:54:34.118728 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 23:54:34.118739 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 23:54:34.118749 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 23:54:34.118760 systemd[1]: Mounting media.mount - External Media Directory...
May 9 23:54:34.118770 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 23:54:34.118781 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 23:54:34.118791 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 23:54:34.118803 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 23:54:34.118814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:54:34.118824 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:54:34.118834 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 23:54:34.118845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:54:34.118855 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:54:34.118865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:54:34.118876 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 23:54:34.118888 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:54:34.118899 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 23:54:34.118909 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 9 23:54:34.118920 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 9 23:54:34.118930 kernel: fuse: init (API version 7.39)
May 9 23:54:34.118940 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:54:34.118951 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:54:34.118961 kernel: loop: module loaded
May 9 23:54:34.118971 kernel: ACPI: bus type drm_connector registered
May 9 23:54:34.118991 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 23:54:34.119002 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 23:54:34.119012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:54:34.119023 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 23:54:34.119049 systemd-journald[1146]: Collecting audit messages is disabled.
May 9 23:54:34.119071 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 23:54:34.119082 systemd-journald[1146]: Journal started
May 9 23:54:34.119105 systemd-journald[1146]: Runtime Journal (/run/log/journal/23fc9b1ba7cd4cf9afeef0e7dc4b92ab) is 5.9M, max 47.3M, 41.4M free.
May 9 23:54:34.122217 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:54:34.123223 systemd[1]: Mounted media.mount - External Media Directory.
May 9 23:54:34.124310 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 23:54:34.125499 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 23:54:34.126698 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 23:54:34.127955 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 23:54:34.129445 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:54:34.130877 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 23:54:34.131056 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 23:54:34.132461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:54:34.132617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:54:34.134037 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:54:34.134193 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:54:34.135660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:54:34.135825 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:54:34.137316 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 23:54:34.137473 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 23:54:34.138814 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:54:34.139039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:54:34.140761 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:54:34.142372 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 23:54:34.143905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 23:54:34.155745 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 23:54:34.166058 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 23:54:34.169098 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 23:54:34.170248 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 23:54:34.172190 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 23:54:34.174506 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 23:54:34.175800 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:54:34.177185 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 23:54:34.178408 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:54:34.183131 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:54:34.185687 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:54:34.185914 systemd-journald[1146]: Time spent on flushing to /var/log/journal/23fc9b1ba7cd4cf9afeef0e7dc4b92ab is 15.227ms for 845 entries.
May 9 23:54:34.185914 systemd-journald[1146]: System Journal (/var/log/journal/23fc9b1ba7cd4cf9afeef0e7dc4b92ab) is 8.0M, max 195.6M, 187.6M free.
May 9 23:54:34.208693 systemd-journald[1146]: Received client request to flush runtime journal.
May 9 23:54:34.192444 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:54:34.193814 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 23:54:34.195160 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 23:54:34.196580 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 23:54:34.200095 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 23:54:34.212927 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
May 9 23:54:34.212946 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
May 9 23:54:34.214128 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 23:54:34.216254 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 23:54:34.219209 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:54:34.223290 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:54:34.226394 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 23:54:34.232103 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 23:54:34.251871 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 23:54:34.270116 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:54:34.281460 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
May 9 23:54:34.281480 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
May 9 23:54:34.285337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:54:34.598805 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 23:54:34.616119 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:54:34.635325 systemd-udevd[1222]: Using default interface naming scheme 'v255'.
May 9 23:54:34.648324 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:54:34.666145 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:54:34.687193 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 23:54:34.692234 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
May 9 23:54:34.695009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1224)
May 9 23:54:34.727625 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 23:54:34.737824 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 23:54:34.764668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:54:34.774046 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 23:54:34.776784 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 23:54:34.807489 lvm[1258]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:54:34.827524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:54:34.827660 systemd-networkd[1230]: lo: Link UP
May 9 23:54:34.827664 systemd-networkd[1230]: lo: Gained carrier
May 9 23:54:34.830186 systemd-networkd[1230]: Enumeration completed
May 9 23:54:34.830333 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:54:34.831854 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 23:54:34.832452 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:54:34.832462 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:54:34.833157 systemd-networkd[1230]: eth0: Link UP
May 9 23:54:34.833167 systemd-networkd[1230]: eth0: Gained carrier
May 9 23:54:34.833178 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:54:34.833494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:54:34.844160 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 23:54:34.846590 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 23:54:34.850224 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:54:34.853031 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 23:54:34.884557 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 23:54:34.886066 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:54:34.887309 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 23:54:34.887339 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:54:34.888333 systemd[1]: Reached target machines.target - Containers.
May 9 23:54:34.890255 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 23:54:34.899123 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 23:54:34.901505 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 23:54:34.902599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:54:34.903538 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 23:54:34.905761 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 23:54:34.908938 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 23:54:34.915141 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 23:54:34.916898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 23:54:34.921006 kernel: loop0: detected capacity change from 0 to 113536
May 9 23:54:34.930104 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 23:54:34.930778 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 23:54:34.937005 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 23:54:34.970021 kernel: loop1: detected capacity change from 0 to 116808
May 9 23:54:35.008007 kernel: loop2: detected capacity change from 0 to 194096
May 9 23:54:35.050018 kernel: loop3: detected capacity change from 0 to 113536
May 9 23:54:35.055017 kernel: loop4: detected capacity change from 0 to 116808
May 9 23:54:35.063005 kernel: loop5: detected capacity change from 0 to 194096
May 9 23:54:35.066886 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 9 23:54:35.067295 (sd-merge)[1290]: Merged extensions into '/usr'.
May 9 23:54:35.071085 systemd[1]: Reloading requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 23:54:35.071329 systemd[1]: Reloading...
May 9 23:54:35.113076 zram_generator::config[1321]: No configuration found.
May 9 23:54:35.152881 ldconfig[1273]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 23:54:35.210662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:54:35.252204 systemd[1]: Reloading finished in 180 ms.
May 9 23:54:35.264752 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 23:54:35.266278 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 23:54:35.281111 systemd[1]: Starting ensure-sysext.service...
May 9 23:54:35.283157 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:54:35.288024 systemd[1]: Reloading requested from client PID 1361 ('systemctl') (unit ensure-sysext.service)...
May 9 23:54:35.288038 systemd[1]: Reloading...
May 9 23:54:35.299324 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 23:54:35.299595 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 23:54:35.300235 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 23:54:35.300453 systemd-tmpfiles[1362]: ACLs are not supported, ignoring.
May 9 23:54:35.300501 systemd-tmpfiles[1362]: ACLs are not supported, ignoring.
May 9 23:54:35.302725 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:54:35.302739 systemd-tmpfiles[1362]: Skipping /boot
May 9 23:54:35.312391 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:54:35.312409 systemd-tmpfiles[1362]: Skipping /boot
May 9 23:54:35.327009 zram_generator::config[1391]: No configuration found.
May 9 23:54:35.414532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:54:35.456090 systemd[1]: Reloading finished in 167 ms.
May 9 23:54:35.470757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:54:35.480067 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 23:54:35.482651 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 23:54:35.485110 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 23:54:35.490141 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:54:35.493264 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 23:54:35.505463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:54:35.506851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:54:35.515237 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:54:35.518110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:54:35.519375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:54:35.521920 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 23:54:35.526707 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:54:35.526867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:54:35.528689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:54:35.528842 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:54:35.530888 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:54:35.531075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:54:35.543737 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 23:54:35.545477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:54:35.557267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:54:35.560222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:54:35.563356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:54:35.564452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:54:35.566057 augenrules[1477]: No rules
May 9 23:54:35.568242 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 23:54:35.569295 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 23:54:35.570240 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 23:54:35.570475 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 23:54:35.574078 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 23:54:35.575000 systemd-resolved[1437]: Positive Trust Anchors:
May 9 23:54:35.575077 systemd-resolved[1437]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:54:35.575109 systemd-resolved[1437]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:54:35.575871 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:54:35.576073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:54:35.577653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:54:35.577802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:54:35.579658 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:54:35.579874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:54:35.583904 systemd-resolved[1437]: Defaulting to hostname 'linux'.
May 9 23:54:35.585919 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 23:54:35.587394 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:54:35.591141 systemd[1]: Reached target network.target - Network.
May 9 23:54:35.592072 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:54:35.601226 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 23:54:35.602322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:54:35.603432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:54:35.605497 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:54:35.610150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:54:35.613405 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:54:35.614841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:54:35.614989 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 23:54:35.615659 systemd[1]: Finished ensure-sysext.service.
May 9 23:54:35.617041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:54:35.617435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:54:35.619256 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:54:35.619402 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:54:35.620570 augenrules[1495]: /sbin/augenrules: No change
May 9 23:54:35.620781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:54:35.620914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:54:35.622614 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:54:35.622797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:54:35.628024 augenrules[1522]: No rules
May 9 23:54:35.628134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:54:35.628239 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:54:35.639195 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 23:54:35.640563 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 23:54:35.640783 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 23:54:35.683190 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 23:54:35.683939 systemd-timesyncd[1532]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 9 23:54:35.684014 systemd-timesyncd[1532]: Initial clock synchronization to Fri 2025-05-09 23:54:35.499137 UTC.
May 9 23:54:35.684775 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:54:35.685952 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 23:54:35.687193 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 23:54:35.688438 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 23:54:35.689677 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 23:54:35.689714 systemd[1]: Reached target paths.target - Path Units.
May 9 23:54:35.690643 systemd[1]: Reached target time-set.target - System Time Set.
May 9 23:54:35.691787 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 23:54:35.692945 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 23:54:35.694169 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:54:35.695858 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 23:54:35.698482 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 23:54:35.700519 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 23:54:35.705005 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 23:54:35.706068 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:54:35.707049 systemd[1]: Reached target basic.target - Basic System.
May 9 23:54:35.708127 systemd[1]: System is tainted: cgroupsv1
May 9 23:54:35.708174 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 23:54:35.708194 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 23:54:35.709392 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 23:54:35.711577 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 23:54:35.713648 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 23:54:35.716918 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 23:54:35.717941 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 23:54:35.719264 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 23:54:35.728082 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 23:54:35.730652 jq[1539]: false
May 9 23:54:35.733045 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 23:54:35.738141 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 23:54:35.739685 extend-filesystems[1541]: Found loop3
May 9 23:54:35.740700 extend-filesystems[1541]: Found loop4
May 9 23:54:35.740700 extend-filesystems[1541]: Found loop5
May 9 23:54:35.740700 extend-filesystems[1541]: Found vda
May 9 23:54:35.740700 extend-filesystems[1541]: Found vda1
May 9 23:54:35.740700 extend-filesystems[1541]: Found vda2
May 9 23:54:35.740700 extend-filesystems[1541]: Found vda3
May 9 23:54:35.740700 extend-filesystems[1541]: Found usr
May 9 23:54:35.740700 extend-filesystems[1541]: Found vda4
May 9 23:54:35.740700 extend-filesystems[1541]: Found vda6
May 9 23:54:35.740700 extend-filesystems[1541]: Found vda7
May 9 23:54:35.740700 extend-filesystems[1541]: Found vda9
May 9 23:54:35.740700 extend-filesystems[1541]: Checking size of /dev/vda9
May 9 23:54:35.741649 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 23:54:35.755533 dbus-daemon[1538]: [system] SELinux support is enabled
May 9 23:54:35.745615 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 23:54:35.748946 systemd[1]: Starting update-engine.service - Update Engine...
May 9 23:54:35.752459 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 23:54:35.756540 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 23:54:35.761279 jq[1560]: true
May 9 23:54:35.765295 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 23:54:35.765524 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 23:54:35.765754 systemd[1]: motdgen.service: Deactivated successfully.
May 9 23:54:35.765941 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 23:54:35.768918 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 23:54:35.769502 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 23:54:35.774244 extend-filesystems[1541]: Resized partition /dev/vda9
May 9 23:54:35.780994 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1234)
May 9 23:54:35.784602 extend-filesystems[1574]: resize2fs 1.47.1 (20-May-2024)
May 9 23:54:35.789048 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 9 23:54:35.793269 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 23:54:35.802768 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 23:54:35.802805 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 23:54:35.804918 jq[1570]: true
May 9 23:54:35.805155 update_engine[1558]: I20250509 23:54:35.805011 1558 main.cc:92] Flatcar Update Engine starting
May 9 23:54:35.805306 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 23:54:35.805326 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 23:54:35.811325 update_engine[1558]: I20250509 23:54:35.811239 1558 update_check_scheduler.cc:74] Next update check in 11m25s
May 9 23:54:35.819622 systemd[1]: Started update-engine.service - Update Engine.
May 9 23:54:35.825962 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 23:54:35.826967 systemd-logind[1556]: Watching system buttons on /dev/input/event0 (Power Button)
May 9 23:54:35.829198 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 9 23:54:35.829296 systemd-logind[1556]: New seat seat0.
May 9 23:54:35.833169 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 23:54:35.840893 tar[1567]: linux-arm64/helm
May 9 23:54:35.835116 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 23:54:35.844260 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 9 23:54:35.844260 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 23:54:35.844260 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 9 23:54:35.843119 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 23:54:35.858482 extend-filesystems[1541]: Resized filesystem in /dev/vda9
May 9 23:54:35.843389 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 23:54:35.880373 locksmithd[1585]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 23:54:35.881908 bash[1602]: Updated "/home/core/.ssh/authorized_keys"
May 9 23:54:35.883428 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 23:54:35.885358 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 9 23:54:36.009335 containerd[1571]: time="2025-05-09T23:54:36.009218298Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 9 23:54:36.039686 containerd[1571]: time="2025-05-09T23:54:36.039634939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 9 23:54:36.041028 containerd[1571]: time="2025-05-09T23:54:36.040965569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 9 23:54:36.041028 containerd[1571]: time="2025-05-09T23:54:36.041018244Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 9 23:54:36.041113 containerd[1571]: time="2025-05-09T23:54:36.041037782Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 9 23:54:36.041204 containerd[1571]: time="2025-05-09T23:54:36.041184826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 9 23:54:36.041231 containerd[1571]: time="2025-05-09T23:54:36.041206826Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 9 23:54:36.041286 containerd[1571]: time="2025-05-09T23:54:36.041270755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 9 23:54:36.041307 containerd[1571]: time="2025-05-09T23:54:36.041288496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 9 23:54:36.041500 containerd[1571]: time="2025-05-09T23:54:36.041474851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 23:54:36.041500 containerd[1571]: time="2025-05-09T23:54:36.041493764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 9 23:54:36.041548 containerd[1571]: time="2025-05-09T23:54:36.041507011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 9 23:54:36.041548 containerd[1571]: time="2025-05-09T23:54:36.041516976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 9 23:54:36.041598 containerd[1571]: time="2025-05-09T23:54:36.041583523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 9 23:54:36.041778 containerd[1571]: time="2025-05-09T23:54:36.041760461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 9 23:54:36.041907 containerd[1571]: time="2025-05-09T23:54:36.041877846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 23:54:36.041907 containerd[1571]: time="2025-05-09T23:54:36.041895900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 9 23:54:36.042014 containerd[1571]: time="2025-05-09T23:54:36.041993630Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 9 23:54:36.042052 containerd[1571]: time="2025-05-09T23:54:36.042045328Z" level=info msg="metadata content store policy set" policy=shared
May 9 23:54:36.047630 containerd[1571]: time="2025-05-09T23:54:36.047586752Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 9 23:54:36.047630 containerd[1571]: time="2025-05-09T23:54:36.047629463Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 9 23:54:36.047699 containerd[1571]: time="2025-05-09T23:54:36.047643843Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 9 23:54:36.047699 containerd[1571]: time="2025-05-09T23:54:36.047664319Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 9 23:54:36.047699 containerd[1571]: time="2025-05-09T23:54:36.047676784Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 9 23:54:36.047872 containerd[1571]: time="2025-05-09T23:54:36.047807690Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 9 23:54:36.048229 containerd[1571]: time="2025-05-09T23:54:36.048209161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 9 23:54:36.048359 containerd[1571]: time="2025-05-09T23:54:36.048341083Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 9 23:54:36.048396 containerd[1571]: time="2025-05-09T23:54:36.048362536Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 9 23:54:36.048396 containerd[1571]: time="2025-05-09T23:54:36.048376994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 9 23:54:36.048396 containerd[1571]: time="2025-05-09T23:54:36.048391179Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 9 23:54:36.048449 containerd[1571]: time="2025-05-09T23:54:36.048411577Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 9 23:54:36.048449 containerd[1571]: time="2025-05-09T23:54:36.048425879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 9 23:54:36.048449 containerd[1571]: time="2025-05-09T23:54:36.048438579Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 9 23:54:36.048505 containerd[1571]: time="2025-05-09T23:54:36.048452333Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 9 23:54:36.048505 containerd[1571]: time="2025-05-09T23:54:36.048465307Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 9 23:54:36.048505 containerd[1571]: time="2025-05-09T23:54:36.048483946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 9 23:54:36.048505 containerd[1571]: time="2025-05-09T23:54:36.048495669Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 9 23:54:36.048574 containerd[1571]: time="2025-05-09T23:54:36.048515520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048574 containerd[1571]: time="2025-05-09T23:54:36.048528572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048574 containerd[1571]: time="2025-05-09T23:54:36.048540021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048574 containerd[1571]: time="2025-05-09T23:54:36.048560263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048642 containerd[1571]: time="2025-05-09T23:54:36.048574056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048642 containerd[1571]: time="2025-05-09T23:54:36.048587186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048642 containerd[1571]: time="2025-05-09T23:54:36.048598401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048642 containerd[1571]: time="2025-05-09T23:54:36.048613953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048642 containerd[1571]: time="2025-05-09T23:54:36.048632866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048720 containerd[1571]: time="2025-05-09T23:54:36.048647794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048720 containerd[1571]: time="2025-05-09T23:54:36.048659517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048720 containerd[1571]: time="2025-05-09T23:54:36.048670653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048720 containerd[1571]: time="2025-05-09T23:54:36.048681399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048720 containerd[1571]: time="2025-05-09T23:54:36.048695115Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 9 23:54:36.048720 containerd[1571]: time="2025-05-09T23:54:36.048720788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048816 containerd[1571]: time="2025-05-09T23:54:36.048733957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 9 23:54:36.048816 containerd[1571]: time="2025-05-09T23:54:36.048744117Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 9 23:54:36.048944 containerd[1571]: time="2025-05-09T23:54:36.048925510Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 9 23:54:36.048992 containerd[1571]: time="2025-05-09T23:54:36.048946572Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 9 23:54:36.048992 containerd[1571]: time="2025-05-09T23:54:36.048957591Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 9 23:54:36.049050 containerd[1571]: time="2025-05-09T23:54:36.049029531Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 9 23:54:36.049076 containerd[1571]: time="2025-05-09T23:54:36.049047741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 9 23:54:36.049076 containerd[1571]: time="2025-05-09T23:54:36.049060323Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 9 23:54:36.049122 containerd[1571]: time="2025-05-09T23:54:36.049076462Z" level=info msg="NRI interface is disabled by configuration."
May 9 23:54:36.049122 containerd[1571]: time="2025-05-09T23:54:36.049090295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 9 23:54:36.049505 containerd[1571]: time="2025-05-09T23:54:36.049438700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 9 23:54:36.049505 containerd[1571]: time="2025-05-09T23:54:36.049486608Z" level=info msg="Connect containerd service"
May 9 23:54:36.049682 containerd[1571]: time="2025-05-09T23:54:36.049527403Z" level=info msg="using legacy CRI server"
May 9 23:54:36.049682 containerd[1571]: time="2025-05-09T23:54:36.049534633Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 9 23:54:36.049797 containerd[1571]: time="2025-05-09T23:54:36.049779095Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 9 23:54:36.051717 containerd[1571]: time="2025-05-09T23:54:36.051683444Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 23:54:36.052227 containerd[1571]: time="2025-05-09T23:54:36.052208279Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 9 23:54:36.052276 containerd[1571]: time="2025-05-09T23:54:36.052263650Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 9 23:54:36.052384 containerd[1571]: time="2025-05-09T23:54:36.052363569Z" level=info msg="Start subscribing containerd event"
May 9 23:54:36.052412 containerd[1571]: time="2025-05-09T23:54:36.052401668Z" level=info msg="Start recovering state"
May 9 23:54:36.052483 containerd[1571]: time="2025-05-09T23:54:36.052466066Z" level=info msg="Start event monitor"
May 9 23:54:36.052510 containerd[1571]: time="2025-05-09T23:54:36.052479391Z" level=info msg="Start snapshots syncer"
May 9 23:54:36.052510 containerd[1571]: time="2025-05-09T23:54:36.052493537Z" level=info msg="Start cni network conf syncer for default"
May 9 23:54:36.052510 containerd[1571]: time="2025-05-09T23:54:36.052500492Z" level=info msg="Start streaming server"
May 9 23:54:36.052717 systemd[1]: Started containerd.service - containerd container runtime.
May 9 23:54:36.054678 containerd[1571]: time="2025-05-09T23:54:36.054638089Z" level=info msg="containerd successfully booted in 0.046464s"
May 9 23:54:36.168739 tar[1567]: linux-arm64/LICENSE
May 9 23:54:36.168864 tar[1567]: linux-arm64/README.md
May 9 23:54:36.178417 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 9 23:54:36.390152 systemd-networkd[1230]: eth0: Gained IPv6LL
May 9 23:54:36.395609 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 23:54:36.397701 systemd[1]: Reached target network-online.target - Network is Online.
May 9 23:54:36.406177 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 9 23:54:36.408650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:54:36.411221 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 9 23:54:36.430334 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 9 23:54:36.430672 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 9 23:54:36.432567 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 9 23:54:36.445412 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 9 23:54:36.646715 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 23:54:36.665787 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 9 23:54:36.674339 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 9 23:54:36.679025 systemd[1]: issuegen.service: Deactivated successfully.
May 9 23:54:36.679265 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 9 23:54:36.682538 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 9 23:54:36.694374 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 9 23:54:36.697240 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 9 23:54:36.699335 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 9 23:54:36.700680 systemd[1]: Reached target getty.target - Login Prompts.
May 9 23:54:36.881857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:54:36.883436 systemd[1]: Reached target multi-user.target - Multi-User System.
May 9 23:54:36.884620 systemd[1]: Startup finished in 5.437s (kernel) + 3.344s (userspace) = 8.781s.
May 9 23:54:36.886706 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 23:54:37.331142 kubelet[1675]: E0509 23:54:37.331086 1675 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 23:54:37.333668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 23:54:37.333862 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 23:54:41.387304 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 9 23:54:41.399278 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:39050.service - OpenSSH per-connection server daemon (10.0.0.1:39050).
May 9 23:54:41.454459 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 39050 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c
May 9 23:54:41.456119 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:54:41.464577 systemd-logind[1556]: New session 1 of user core.
May 9 23:54:41.465559 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 9 23:54:41.477215 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 9 23:54:41.486896 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 9 23:54:41.489266 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 9 23:54:41.496086 (systemd)[1696]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 9 23:54:41.572972 systemd[1696]: Queued start job for default target default.target.
May 9 23:54:41.573617 systemd[1696]: Created slice app.slice - User Application Slice.
May 9 23:54:41.573645 systemd[1696]: Reached target paths.target - Paths.
May 9 23:54:41.573658 systemd[1696]: Reached target timers.target - Timers.
May 9 23:54:41.581095 systemd[1696]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 9 23:54:41.586599 systemd[1696]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 9 23:54:41.586655 systemd[1696]: Reached target sockets.target - Sockets.
May 9 23:54:41.586667 systemd[1696]: Reached target basic.target - Basic System.
May 9 23:54:41.586707 systemd[1696]: Reached target default.target - Main User Target.
May 9 23:54:41.586729 systemd[1696]: Startup finished in 85ms.
May 9 23:54:41.587017 systemd[1]: Started user@500.service - User Manager for UID 500.
May 9 23:54:41.588687 systemd[1]: Started session-1.scope - Session 1 of User core.
May 9 23:54:41.646229 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:39056.service - OpenSSH per-connection server daemon (10.0.0.1:39056).
May 9 23:54:41.685661 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 39056 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c
May 9 23:54:41.686886 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:54:41.691134 systemd-logind[1556]: New session 2 of user core.
May 9 23:54:41.707260 systemd[1]: Started session-2.scope - Session 2 of User core.
May 9 23:54:41.758064 sshd[1711]: Connection closed by 10.0.0.1 port 39056
May 9 23:54:41.759376 sshd-session[1708]: pam_unix(sshd:session): session closed for user core
May 9 23:54:41.772331 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:39066.service - OpenSSH per-connection server daemon (10.0.0.1:39066).
May 9 23:54:41.772687 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:39056.service: Deactivated successfully.
May 9 23:54:41.774840 systemd-logind[1556]: Session 2 logged out. Waiting for processes to exit.
May 9 23:54:41.775015 systemd[1]: session-2.scope: Deactivated successfully.
May 9 23:54:41.776351 systemd-logind[1556]: Removed session 2.
May 9 23:54:41.811642 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 39066 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c
May 9 23:54:41.812836 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:54:41.816886 systemd-logind[1556]: New session 3 of user core.
May 9 23:54:41.825212 systemd[1]: Started session-3.scope - Session 3 of User core.
May 9 23:54:41.873230 sshd[1719]: Connection closed by 10.0.0.1 port 39066
May 9 23:54:41.873856 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
May 9 23:54:41.889229 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:39080.service - OpenSSH per-connection server daemon (10.0.0.1:39080).
May 9 23:54:41.889608 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:39066.service: Deactivated successfully.
May 9 23:54:41.891923 systemd[1]: session-3.scope: Deactivated successfully.
May 9 23:54:41.892089 systemd-logind[1556]: Session 3 logged out. Waiting for processes to exit.
May 9 23:54:41.893374 systemd-logind[1556]: Removed session 3.
May 9 23:54:41.927183 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 39080 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c
May 9 23:54:41.928252 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:54:41.932042 systemd-logind[1556]: New session 4 of user core.
May 9 23:54:41.943233 systemd[1]: Started session-4.scope - Session 4 of User core.
May 9 23:54:41.996859 sshd[1727]: Connection closed by 10.0.0.1 port 39080
May 9 23:54:41.997181 sshd-session[1721]: pam_unix(sshd:session): session closed for user core
May 9 23:54:42.006218 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:39088.service - OpenSSH per-connection server daemon (10.0.0.1:39088).
May 9 23:54:42.006586 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:39080.service: Deactivated successfully.
May 9 23:54:42.009005 systemd[1]: session-4.scope: Deactivated successfully.
May 9 23:54:42.009331 systemd-logind[1556]: Session 4 logged out. Waiting for processes to exit.
May 9 23:54:42.010281 systemd-logind[1556]: Removed session 4.
May 9 23:54:42.044170 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 39088 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c
May 9 23:54:42.045371 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:54:42.048983 systemd-logind[1556]: New session 5 of user core.
May 9 23:54:42.061203 systemd[1]: Started session-5.scope - Session 5 of User core.
May 9 23:54:42.123190 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 9 23:54:42.123452 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 23:54:42.457200 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 9 23:54:42.457356 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 9 23:54:42.703679 dockerd[1758]: time="2025-05-09T23:54:42.703614386Z" level=info msg="Starting up"
May 9 23:54:42.940350 dockerd[1758]: time="2025-05-09T23:54:42.940244983Z" level=info msg="Loading containers: start."
May 9 23:54:43.073003 kernel: Initializing XFRM netlink socket
May 9 23:54:43.135595 systemd-networkd[1230]: docker0: Link UP
May 9 23:54:43.164243 dockerd[1758]: time="2025-05-09T23:54:43.164202911Z" level=info msg="Loading containers: done."
May 9 23:54:43.177353 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3392593584-merged.mount: Deactivated successfully.
May 9 23:54:43.177761 dockerd[1758]: time="2025-05-09T23:54:43.177725694Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 9 23:54:43.178022 dockerd[1758]: time="2025-05-09T23:54:43.177821933Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
May 9 23:54:43.178022 dockerd[1758]: time="2025-05-09T23:54:43.177920273Z" level=info msg="Daemon has completed initialization"
May 9 23:54:43.206655 dockerd[1758]: time="2025-05-09T23:54:43.206529212Z" level=info msg="API listen on /run/docker.sock"
May 9 23:54:43.206873 systemd[1]: Started docker.service - Docker Application Container Engine.
May 9 23:54:43.924023 containerd[1571]: time="2025-05-09T23:54:43.923894143Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 9 23:54:44.493715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733296304.mount: Deactivated successfully.
May 9 23:54:45.376182 containerd[1571]: time="2025-05-09T23:54:45.376106984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:45.376561 containerd[1571]: time="2025-05-09T23:54:45.376454554Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 9 23:54:45.377443 containerd[1571]: time="2025-05-09T23:54:45.377402011Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:45.381129 containerd[1571]: time="2025-05-09T23:54:45.381097735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:45.382324 containerd[1571]: time="2025-05-09T23:54:45.382228034Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.458285585s"
May 9 23:54:45.382324 containerd[1571]: time="2025-05-09T23:54:45.382270417Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 9 23:54:45.400362 containerd[1571]: time="2025-05-09T23:54:45.400329690Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 9 23:54:46.559555 containerd[1571]: time="2025-05-09T23:54:46.559494229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:46.560151 containerd[1571]: time="2025-05-09T23:54:46.560094759Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 9 23:54:46.560848 containerd[1571]: time="2025-05-09T23:54:46.560820563Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:46.563648 containerd[1571]: time="2025-05-09T23:54:46.563601965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:46.565078 containerd[1571]: time="2025-05-09T23:54:46.565035086Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.164664235s"
May 9 23:54:46.565122 containerd[1571]: time="2025-05-09T23:54:46.565081324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 9 23:54:46.585262 containerd[1571]: time="2025-05-09T23:54:46.585220920Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 9 23:54:47.499662 containerd[1571]: time="2025-05-09T23:54:47.499584702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:47.500049 containerd[1571]: time="2025-05-09T23:54:47.500002231Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 9 23:54:47.500929 containerd[1571]: time="2025-05-09T23:54:47.500871347Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:47.506319 containerd[1571]: time="2025-05-09T23:54:47.506241292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:47.507535 containerd[1571]: time="2025-05-09T23:54:47.507500086Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 922.235704ms"
May 9 23:54:47.507613 containerd[1571]: time="2025-05-09T23:54:47.507540749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 9 23:54:47.527177 containerd[1571]: time="2025-05-09T23:54:47.527136598Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 9 23:54:47.584133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 9 23:54:47.592206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:54:47.686047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:54:47.690795 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 23:54:47.735555 kubelet[2054]: E0509 23:54:47.735493 2054 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 23:54:47.738803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 23:54:47.739019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 23:54:48.513826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70891645.mount: Deactivated successfully.
May 9 23:54:48.856404 containerd[1571]: time="2025-05-09T23:54:48.856279229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:48.857404 containerd[1571]: time="2025-05-09T23:54:48.857133000Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 9 23:54:48.857989 containerd[1571]: time="2025-05-09T23:54:48.857943652Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:48.860270 containerd[1571]: time="2025-05-09T23:54:48.860216114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:48.861117 containerd[1571]: time="2025-05-09T23:54:48.861072950Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.333889789s"
May 9 23:54:48.861541 containerd[1571]: time="2025-05-09T23:54:48.861116507Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 9 23:54:48.880866 containerd[1571]: time="2025-05-09T23:54:48.880827409Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 9 23:54:49.393422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922412799.mount: Deactivated successfully.
May 9 23:54:49.956042 containerd[1571]: time="2025-05-09T23:54:49.955970341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:49.956584 containerd[1571]: time="2025-05-09T23:54:49.956531448Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 9 23:54:49.958116 containerd[1571]: time="2025-05-09T23:54:49.957426870Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:49.962007 containerd[1571]: time="2025-05-09T23:54:49.961511101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:49.962419 containerd[1571]: time="2025-05-09T23:54:49.962261236Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.081390784s"
May 9 23:54:49.962419 containerd[1571]: time="2025-05-09T23:54:49.962295018Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 9 23:54:49.983196 containerd[1571]: time="2025-05-09T23:54:49.983161319Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 9 23:54:50.533253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056127054.mount: Deactivated successfully.
May 9 23:54:50.537509 containerd[1571]: time="2025-05-09T23:54:50.537460658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:50.538530 containerd[1571]: time="2025-05-09T23:54:50.538486313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 9 23:54:50.539524 containerd[1571]: time="2025-05-09T23:54:50.539458758Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:50.542240 containerd[1571]: time="2025-05-09T23:54:50.542178519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:50.543037 containerd[1571]: time="2025-05-09T23:54:50.542879693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 559.676979ms"
May 9 23:54:50.543037 containerd[1571]: time="2025-05-09T23:54:50.542919471Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 9 23:54:50.564085 containerd[1571]: time="2025-05-09T23:54:50.564046495Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 9 23:54:51.043276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124038308.mount: Deactivated successfully.
May 9 23:54:52.331414 containerd[1571]: time="2025-05-09T23:54:52.331325206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:52.333402 containerd[1571]: time="2025-05-09T23:54:52.333108132Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 9 23:54:52.334247 containerd[1571]: time="2025-05-09T23:54:52.334203777Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:52.341184 containerd[1571]: time="2025-05-09T23:54:52.341138141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:54:52.342612 containerd[1571]: time="2025-05-09T23:54:52.342510670Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.778418847s"
May 9 23:54:52.342612 containerd[1571]: time="2025-05-09T23:54:52.342563126Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 9 23:54:56.715905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:54:56.726241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:54:56.742759 systemd[1]: Reloading requested from client PID 2266 ('systemctl') (unit session-5.scope)...
May 9 23:54:56.742775 systemd[1]: Reloading...
May 9 23:54:56.803101 zram_generator::config[2309]: No configuration found.
May 9 23:54:56.906523 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:54:56.960654 systemd[1]: Reloading finished in 217 ms.
May 9 23:54:56.997334 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 9 23:54:56.997403 systemd[1]: kubelet.service: Failed with result 'signal'.
May 9 23:54:56.997660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:54:56.999490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:54:57.097712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:54:57.102451 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 9 23:54:57.143062 kubelet[2362]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 23:54:57.143062 kubelet[2362]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 9 23:54:57.143062 kubelet[2362]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 23:54:57.143915 kubelet[2362]: I0509 23:54:57.143855 2362 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 9 23:54:57.850448 kubelet[2362]: I0509 23:54:57.850408 2362 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 9 23:54:57.850448 kubelet[2362]: I0509 23:54:57.850438 2362 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 9 23:54:57.850658 kubelet[2362]: I0509 23:54:57.850642 2362 server.go:927] "Client rotation is on, will bootstrap in background"
May 9 23:54:57.908025 kubelet[2362]: E0509 23:54:57.907994 2362 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.84:6443: connect: connection refused
May 9 23:54:57.908025 kubelet[2362]: I0509 23:54:57.907983 2362 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 23:54:57.919042 kubelet[2362]: I0509 23:54:57.919015 2362 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 9 23:54:57.920332 kubelet[2362]: I0509 23:54:57.920278 2362 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 9 23:54:57.920512 kubelet[2362]: I0509 23:54:57.920332 2362 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 9 23:54:57.920591 kubelet[2362]: I0509 23:54:57.920578 2362 topology_manager.go:138] "Creating topology manager with none policy"
May 9 23:54:57.920591 kubelet[2362]: I0509 23:54:57.920589 2362 container_manager_linux.go:301] "Creating device plugin manager"
May 9 23:54:57.920877 kubelet[2362]: I0509 23:54:57.920850 2362 state_mem.go:36] "Initialized new in-memory state store"
May 9 23:54:57.923475 kubelet[2362]: I0509 23:54:57.923446 2362 kubelet.go:400] "Attempting to sync node with API server"
May 9 23:54:57.923475 kubelet[2362]: I0509 23:54:57.923473 2362 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 9 23:54:57.923787 kubelet[2362]: I0509 23:54:57.923769 2362 kubelet.go:312] "Adding apiserver pod source"
May 9 23:54:57.923866 kubelet[2362]: I0509 23:54:57.923851 2362 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 9 23:54:57.924968 kubelet[2362]: W0509 23:54:57.924838 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 9 23:54:57.924968 kubelet[2362]: W0509 23:54:57.924894 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 9 23:54:57.925154 kubelet[2362]: E0509 23:54:57.925003 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 9 23:54:57.925154 kubelet[2362]: E0509 23:54:57.924917 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 9 23:54:57.925154 kubelet[2362]: I0509 23:54:57.925044 2362 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 9 23:54:57.925436 kubelet[2362]: I0509 23:54:57.925410 2362 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 9 23:54:57.925466 kubelet[2362]: W0509 23:54:57.925455 2362 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 9 23:54:57.926406 kubelet[2362]: I0509 23:54:57.926252 2362 server.go:1264] "Started kubelet"
May 9 23:54:57.928403 kubelet[2362]: I0509 23:54:57.928276 2362 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 9 23:54:57.932843 kubelet[2362]: I0509 23:54:57.932794 2362 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 9 23:54:57.934458 kubelet[2362]: I0509 23:54:57.933816 2362 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 9 23:54:57.934458 kubelet[2362]: I0509 23:54:57.933916 2362 server.go:455] "Adding debug handlers to kubelet server"
May 9 23:54:57.934458 kubelet[2362]: E0509 23:54:57.934201 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183e0115a425f845 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 23:54:57.926223941 +0000 UTC m=+0.820688246,LastTimestamp:2025-05-09 23:54:57.926223941 +0000 UTC m=+0.820688246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 9 23:54:57.934812 kubelet[2362]: I0509 23:54:57.934763 2362 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 9 23:54:57.935121 kubelet[2362]: I0509 23:54:57.935017 2362 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 9 23:54:57.935210 kubelet[2362]: E0509 23:54:57.935171 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms"
May 9 23:54:57.936328 kubelet[2362]: I0509 23:54:57.935896 2362 factory.go:221] Registration of the systemd container factory successfully
May 9 23:54:57.936328 kubelet[2362]: I0509 23:54:57.936020 2362 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 9 23:54:57.936328 kubelet[2362]: E0509 23:54:57.936122 2362 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 9 23:54:57.936328 kubelet[2362]: I0509 23:54:57.936175 2362 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 9 23:54:57.939067 kubelet[2362]: W0509 23:54:57.939019 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 9 23:54:57.939469 kubelet[2362]: E0509 23:54:57.939418 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 9 23:54:57.939469 kubelet[2362]: I0509 23:54:57.939138 2362 reconciler.go:26] "Reconciler: start to sync state"
May 9 23:54:57.939942 kubelet[2362]: I0509 23:54:57.939923 2362 factory.go:221] Registration of the containerd container factory successfully
May 9 23:54:57.945577 kubelet[2362]: I0509 23:54:57.945052 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 9 23:54:57.946103 kubelet[2362]: I0509 23:54:57.946086 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 9 23:54:57.946239 kubelet[2362]: I0509 23:54:57.946231 2362 status_manager.go:217] "Starting to sync pod status with apiserver"
May 9 23:54:57.946264 kubelet[2362]: I0509 23:54:57.946254 2362 kubelet.go:2337] "Starting kubelet main sync loop"
May 9 23:54:57.946309 kubelet[2362]: E0509 23:54:57.946293 2362 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 9 23:54:57.949288 kubelet[2362]: W0509 23:54:57.949223 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 9 23:54:57.949288 kubelet[2362]: E0509 23:54:57.949287 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 9 23:54:57.957174 kubelet[2362]: I0509 23:54:57.957142 2362 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 9 23:54:57.957174 kubelet[2362]: I0509 23:54:57.957164 2362 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 9 23:54:57.957174 kubelet[2362]: I0509 23:54:57.957182 2362 state_mem.go:36] "Initialized new in-memory state store"
May 9 23:54:57.959376 kubelet[2362]: I0509 23:54:57.959346 2362 policy_none.go:49] "None policy: Start"
May 9 23:54:57.959961 kubelet[2362]: I0509 23:54:57.959933 2362 memory_manager.go:170] "Starting memorymanager" policy="None"
May 9 23:54:57.960064 kubelet[2362]: I0509 23:54:57.960052 2362 state_mem.go:35] "Initializing new in-memory state store"
May 9 23:54:57.965311 kubelet[2362]: I0509 23:54:57.964564 2362 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 9 23:54:57.965311 kubelet[2362]: I0509 23:54:57.964760 2362 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 9 23:54:57.965311 kubelet[2362]: I0509 23:54:57.964854 2362 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 9 23:54:57.966362 kubelet[2362]: E0509 23:54:57.966340 2362 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 9 23:54:58.035615 kubelet[2362]: I0509 23:54:58.035579 2362 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 9 23:54:58.036131 kubelet[2362]: E0509 23:54:58.036104 2362 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
May 9 23:54:58.046413 kubelet[2362]: I0509 23:54:58.046356 2362 topology_manager.go:215] "Topology Admit Handler" podUID="27d8737f78446032da41d2b65a05ff43" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 9 23:54:58.047780 kubelet[2362]: I0509 23:54:58.047368 2362 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 9 23:54:58.048229 kubelet[2362]: I0509 23:54:58.048195 2362 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 9 23:54:58.135738 kubelet[2362]: E0509 23:54:58.135681 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms"
May 9 23:54:58.141092 kubelet[2362]: I0509 23:54:58.141030 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27d8737f78446032da41d2b65a05ff43-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"27d8737f78446032da41d2b65a05ff43\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:54:58.141092 kubelet[2362]: I0509 23:54:58.141072 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27d8737f78446032da41d2b65a05ff43-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"27d8737f78446032da41d2b65a05ff43\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:54:58.141092 kubelet[2362]: I0509 23:54:58.141093 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:54:58.141260 kubelet[2362]: I0509 23:54:58.141112 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:54:58.141260 kubelet[2362]: I0509 23:54:58.141129 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:54:58.141260 kubelet[2362]: I0509 23:54:58.141168 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 9 23:54:58.141260 kubelet[2362]: I0509 23:54:58.141219 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27d8737f78446032da41d2b65a05ff43-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"27d8737f78446032da41d2b65a05ff43\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:54:58.141260 kubelet[2362]: I0509 23:54:58.141240 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:54:58.141357 kubelet[2362]: I0509 23:54:58.141258 2362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:54:58.238109 kubelet[2362]: I0509 23:54:58.238084 2362 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 9 23:54:58.238644 kubelet[2362]: E0509 23:54:58.238473 2362 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
May 9 23:54:58.351251 kubelet[2362]: E0509 23:54:58.351163 2362 dns.go:153]
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:54:58.351984 containerd[1571]: time="2025-05-09T23:54:58.351880664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:27d8737f78446032da41d2b65a05ff43,Namespace:kube-system,Attempt:0,}" May 9 23:54:58.352941 kubelet[2362]: E0509 23:54:58.352860 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:54:58.353399 kubelet[2362]: E0509 23:54:58.353209 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:54:58.353462 containerd[1571]: time="2025-05-09T23:54:58.353219823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 9 23:54:58.353641 containerd[1571]: time="2025-05-09T23:54:58.353615738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 9 23:54:58.536967 kubelet[2362]: E0509 23:54:58.536824 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms" May 9 23:54:58.640407 kubelet[2362]: I0509 23:54:58.640374 2362 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 23:54:58.640735 kubelet[2362]: E0509 23:54:58.640709 2362 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 
10.0.0.84:6443: connect: connection refused" node="localhost" May 9 23:54:58.957336 kubelet[2362]: W0509 23:54:58.957244 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 23:54:58.957336 kubelet[2362]: E0509 23:54:58.957314 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 23:54:59.095396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138319346.mount: Deactivated successfully. May 9 23:54:59.101871 containerd[1571]: time="2025-05-09T23:54:59.101775631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:54:59.106249 containerd[1571]: time="2025-05-09T23:54:59.106205763Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:54:59.106751 containerd[1571]: time="2025-05-09T23:54:59.106710901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:54:59.107991 containerd[1571]: time="2025-05-09T23:54:59.107924681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 9 23:54:59.108904 containerd[1571]: time="2025-05-09T23:54:59.108837223Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" May 9 23:54:59.110666 containerd[1571]: time="2025-05-09T23:54:59.110605368Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:54:59.111335 containerd[1571]: time="2025-05-09T23:54:59.111289674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:54:59.112710 containerd[1571]: time="2025-05-09T23:54:59.112652814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:54:59.115023 containerd[1571]: time="2025-05-09T23:54:59.114676445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 760.931223ms" May 9 23:54:59.118514 containerd[1571]: time="2025-05-09T23:54:59.118472896Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 765.189189ms" May 9 23:54:59.122345 containerd[1571]: time="2025-05-09T23:54:59.122294281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"268403\" in 770.329918ms" May 9 23:54:59.191701 kubelet[2362]: W0509 23:54:59.191627 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 23:54:59.191701 kubelet[2362]: E0509 23:54:59.191698 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 23:54:59.193167 kubelet[2362]: W0509 23:54:59.193116 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 23:54:59.193238 kubelet[2362]: E0509 23:54:59.193174 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 23:54:59.267289 containerd[1571]: time="2025-05-09T23:54:59.267042514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:54:59.267289 containerd[1571]: time="2025-05-09T23:54:59.267116155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:54:59.267289 containerd[1571]: time="2025-05-09T23:54:59.267131938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:54:59.267289 containerd[1571]: time="2025-05-09T23:54:59.267103528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:54:59.267289 containerd[1571]: time="2025-05-09T23:54:59.267218965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:54:59.267289 containerd[1571]: time="2025-05-09T23:54:59.267223919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:54:59.267535 containerd[1571]: time="2025-05-09T23:54:59.267238744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:54:59.267535 containerd[1571]: time="2025-05-09T23:54:59.267348586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:54:59.268813 containerd[1571]: time="2025-05-09T23:54:59.268704133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:54:59.268813 containerd[1571]: time="2025-05-09T23:54:59.268782409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:54:59.269002 containerd[1571]: time="2025-05-09T23:54:59.268801709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:54:59.269002 containerd[1571]: time="2025-05-09T23:54:59.268912470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:54:59.319813 containerd[1571]: time="2025-05-09T23:54:59.319659204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee6c94cdb1f814d1173b7309c995c5d5a78ca58f8b4575e3e724bbe8a81d5d2c\"" May 9 23:54:59.323750 kubelet[2362]: E0509 23:54:59.323457 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:54:59.325378 containerd[1571]: time="2025-05-09T23:54:59.325039079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:27d8737f78446032da41d2b65a05ff43,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6d2c400a91a746e56987da20c1fba05c05d843289a4d60f9b5f5c50dc50c4a3\"" May 9 23:54:59.326114 kubelet[2362]: E0509 23:54:59.326088 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:54:59.326943 containerd[1571]: time="2025-05-09T23:54:59.326899325Z" level=info msg="CreateContainer within sandbox \"ee6c94cdb1f814d1173b7309c995c5d5a78ca58f8b4575e3e724bbe8a81d5d2c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 23:54:59.327285 containerd[1571]: time="2025-05-09T23:54:59.327249030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"5802070319c5ce5a5f46eee497d95462ac273a390b4865414adf8eea99764e8e\"" May 9 23:54:59.328519 kubelet[2362]: E0509 23:54:59.328494 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 9 23:54:59.328597 containerd[1571]: time="2025-05-09T23:54:59.328532255Z" level=info msg="CreateContainer within sandbox \"d6d2c400a91a746e56987da20c1fba05c05d843289a4d60f9b5f5c50dc50c4a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 23:54:59.338358 kubelet[2362]: E0509 23:54:59.338315 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="1.6s" May 9 23:54:59.341471 containerd[1571]: time="2025-05-09T23:54:59.341429673Z" level=info msg="CreateContainer within sandbox \"5802070319c5ce5a5f46eee497d95462ac273a390b4865414adf8eea99764e8e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 23:54:59.343197 containerd[1571]: time="2025-05-09T23:54:59.343157661Z" level=info msg="CreateContainer within sandbox \"ee6c94cdb1f814d1173b7309c995c5d5a78ca58f8b4575e3e724bbe8a81d5d2c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6cf894531c9cc306e5fe2c94585671293ce6f2e4fcb6bcba3180ac2d76cad259\"" May 9 23:54:59.344667 containerd[1571]: time="2025-05-09T23:54:59.344635238Z" level=info msg="StartContainer for \"6cf894531c9cc306e5fe2c94585671293ce6f2e4fcb6bcba3180ac2d76cad259\"" May 9 23:54:59.345949 containerd[1571]: time="2025-05-09T23:54:59.345884339Z" level=info msg="CreateContainer within sandbox \"d6d2c400a91a746e56987da20c1fba05c05d843289a4d60f9b5f5c50dc50c4a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fcc84d6b08052c9a9c802232088c0674f3b67b7c23dc28b2db49160e001120a0\"" May 9 23:54:59.346770 containerd[1571]: time="2025-05-09T23:54:59.346737505Z" level=info msg="StartContainer for \"fcc84d6b08052c9a9c802232088c0674f3b67b7c23dc28b2db49160e001120a0\"" May 9 23:54:59.356552 containerd[1571]: time="2025-05-09T23:54:59.356457128Z" level=info 
msg="CreateContainer within sandbox \"5802070319c5ce5a5f46eee497d95462ac273a390b4865414adf8eea99764e8e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3b72a7a1f9ef96a9e4ee34cb2d73687d7441e56dd313dabed79130aea43b3d18\"" May 9 23:54:59.358036 containerd[1571]: time="2025-05-09T23:54:59.357623558Z" level=info msg="StartContainer for \"3b72a7a1f9ef96a9e4ee34cb2d73687d7441e56dd313dabed79130aea43b3d18\"" May 9 23:54:59.406911 containerd[1571]: time="2025-05-09T23:54:59.404753888Z" level=info msg="StartContainer for \"6cf894531c9cc306e5fe2c94585671293ce6f2e4fcb6bcba3180ac2d76cad259\" returns successfully" May 9 23:54:59.428115 containerd[1571]: time="2025-05-09T23:54:59.427870874Z" level=info msg="StartContainer for \"fcc84d6b08052c9a9c802232088c0674f3b67b7c23dc28b2db49160e001120a0\" returns successfully" May 9 23:54:59.428115 containerd[1571]: time="2025-05-09T23:54:59.427968210Z" level=info msg="StartContainer for \"3b72a7a1f9ef96a9e4ee34cb2d73687d7441e56dd313dabed79130aea43b3d18\" returns successfully" May 9 23:54:59.444831 kubelet[2362]: I0509 23:54:59.442653 2362 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 23:54:59.444831 kubelet[2362]: E0509 23:54:59.443039 2362 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" May 9 23:54:59.479480 kubelet[2362]: W0509 23:54:59.478857 2362 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 23:54:59.479480 kubelet[2362]: E0509 23:54:59.478906 2362 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 23:54:59.960845 kubelet[2362]: E0509 23:54:59.960805 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:54:59.963774 kubelet[2362]: E0509 23:54:59.963691 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:54:59.965672 kubelet[2362]: E0509 23:54:59.965639 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:00.967061 kubelet[2362]: E0509 23:55:00.967029 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:00.990715 kubelet[2362]: E0509 23:55:00.990668 2362 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 23:55:01.044440 kubelet[2362]: I0509 23:55:01.044410 2362 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 23:55:01.176893 kubelet[2362]: I0509 23:55:01.176743 2362 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 23:55:01.271565 kubelet[2362]: E0509 23:55:01.271447 2362 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 23:55:01.372571 kubelet[2362]: E0509 23:55:01.372512 2362 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 23:55:01.473369 kubelet[2362]: E0509 23:55:01.473307 2362 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 23:55:01.574205 kubelet[2362]: E0509 23:55:01.573826 2362 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 23:55:01.674381 kubelet[2362]: E0509 23:55:01.674338 2362 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 23:55:01.775065 kubelet[2362]: E0509 23:55:01.775008 2362 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 23:55:01.927003 kubelet[2362]: I0509 23:55:01.926902 2362 apiserver.go:52] "Watching apiserver" May 9 23:55:01.938309 kubelet[2362]: I0509 23:55:01.938273 2362 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:55:02.790783 kubelet[2362]: E0509 23:55:02.790752 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:02.969618 kubelet[2362]: E0509 23:55:02.969425 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:03.050382 systemd[1]: Reloading requested from client PID 2646 ('systemctl') (unit session-5.scope)... May 9 23:55:03.050403 systemd[1]: Reloading... May 9 23:55:03.116127 zram_generator::config[2686]: No configuration found. May 9 23:55:03.290171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:55:03.354373 systemd[1]: Reloading finished in 303 ms. May 9 23:55:03.385630 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
May 9 23:55:03.395273 systemd[1]: kubelet.service: Deactivated successfully. May 9 23:55:03.395618 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:55:03.403403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:55:03.496184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:55:03.499362 (kubelet)[2736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:55:03.544079 kubelet[2736]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:55:03.544079 kubelet[2736]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 23:55:03.544079 kubelet[2736]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 23:55:03.544462 kubelet[2736]: I0509 23:55:03.544129 2736 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:55:03.548323 kubelet[2736]: I0509 23:55:03.548278 2736 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 23:55:03.548323 kubelet[2736]: I0509 23:55:03.548307 2736 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:55:03.548572 kubelet[2736]: I0509 23:55:03.548546 2736 server.go:927] "Client rotation is on, will bootstrap in background" May 9 23:55:03.549902 kubelet[2736]: I0509 23:55:03.549868 2736 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 23:55:03.551116 kubelet[2736]: I0509 23:55:03.551088 2736 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:55:03.557861 kubelet[2736]: I0509 23:55:03.557833 2736 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 23:55:03.558268 kubelet[2736]: I0509 23:55:03.558238 2736 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:55:03.558439 kubelet[2736]: I0509 23:55:03.558266 2736 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 23:55:03.558509 kubelet[2736]: I0509 23:55:03.558440 2736 topology_manager.go:138] "Creating topology manager with none policy" May 9 
23:55:03.558509 kubelet[2736]: I0509 23:55:03.558451 2736 container_manager_linux.go:301] "Creating device plugin manager" May 9 23:55:03.558509 kubelet[2736]: I0509 23:55:03.558484 2736 state_mem.go:36] "Initialized new in-memory state store" May 9 23:55:03.558591 kubelet[2736]: I0509 23:55:03.558579 2736 kubelet.go:400] "Attempting to sync node with API server" May 9 23:55:03.558612 kubelet[2736]: I0509 23:55:03.558594 2736 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:55:03.558634 kubelet[2736]: I0509 23:55:03.558618 2736 kubelet.go:312] "Adding apiserver pod source" May 9 23:55:03.558653 kubelet[2736]: I0509 23:55:03.558636 2736 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:55:03.559527 kubelet[2736]: I0509 23:55:03.559490 2736 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 23:55:03.560209 kubelet[2736]: I0509 23:55:03.559654 2736 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:55:03.561528 kubelet[2736]: I0509 23:55:03.561493 2736 server.go:1264] "Started kubelet" May 9 23:55:03.563323 kubelet[2736]: I0509 23:55:03.563285 2736 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:55:03.564959 kubelet[2736]: I0509 23:55:03.564920 2736 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:55:03.565113 kubelet[2736]: I0509 23:55:03.565073 2736 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:55:03.567990 kubelet[2736]: I0509 23:55:03.566525 2736 server.go:455] "Adding debug handlers to kubelet server" May 9 23:55:03.568226 kubelet[2736]: I0509 23:55:03.568178 2736 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:55:03.572040 kubelet[2736]: I0509 23:55:03.571452 2736 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 23:55:03.572040 kubelet[2736]: I0509 23:55:03.571792 2736 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 23:55:03.572040 kubelet[2736]: I0509 23:55:03.571940 2736 reconciler.go:26] "Reconciler: start to sync state" May 9 23:55:03.574936 kubelet[2736]: I0509 23:55:03.574551 2736 factory.go:221] Registration of the systemd container factory successfully May 9 23:55:03.574936 kubelet[2736]: I0509 23:55:03.574648 2736 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:55:03.578314 kubelet[2736]: E0509 23:55:03.578276 2736 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:55:03.578314 kubelet[2736]: I0509 23:55:03.578294 2736 factory.go:221] Registration of the containerd container factory successfully May 9 23:55:03.585768 kubelet[2736]: I0509 23:55:03.585648 2736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:55:03.586602 kubelet[2736]: I0509 23:55:03.586574 2736 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 23:55:03.586602 kubelet[2736]: I0509 23:55:03.586602 2736 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 23:55:03.586670 kubelet[2736]: I0509 23:55:03.586619 2736 kubelet.go:2337] "Starting kubelet main sync loop" May 9 23:55:03.586708 kubelet[2736]: E0509 23:55:03.586660 2736 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:55:03.623281 kubelet[2736]: I0509 23:55:03.623254 2736 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 23:55:03.623281 kubelet[2736]: I0509 23:55:03.623275 2736 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 23:55:03.623412 kubelet[2736]: I0509 23:55:03.623297 2736 state_mem.go:36] "Initialized new in-memory state store" May 9 23:55:03.623475 kubelet[2736]: I0509 23:55:03.623459 2736 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 23:55:03.623523 kubelet[2736]: I0509 23:55:03.623476 2736 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 23:55:03.623523 kubelet[2736]: I0509 23:55:03.623505 2736 policy_none.go:49] "None policy: Start" May 9 23:55:03.624236 kubelet[2736]: I0509 23:55:03.624220 2736 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 23:55:03.624277 kubelet[2736]: I0509 23:55:03.624246 2736 state_mem.go:35] "Initializing new in-memory state store" May 9 23:55:03.624410 kubelet[2736]: I0509 23:55:03.624395 2736 state_mem.go:75] "Updated machine memory state" May 9 23:55:03.626230 kubelet[2736]: I0509 23:55:03.625463 2736 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:55:03.626230 kubelet[2736]: I0509 23:55:03.625625 2736 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:55:03.626230 kubelet[2736]: I0509 23:55:03.625712 2736 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:55:03.676133 kubelet[2736]: I0509 23:55:03.676105 2736 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 23:55:03.682568 kubelet[2736]: I0509 23:55:03.682532 2736 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 9 23:55:03.683282 kubelet[2736]: I0509 23:55:03.682624 2736 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 23:55:03.687092 kubelet[2736]: I0509 23:55:03.687034 2736 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 23:55:03.687201 kubelet[2736]: I0509 23:55:03.687190 2736 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 23:55:03.687241 kubelet[2736]: I0509 23:55:03.687227 2736 topology_manager.go:215] "Topology Admit Handler" podUID="27d8737f78446032da41d2b65a05ff43" podNamespace="kube-system" podName="kube-apiserver-localhost" May 9 23:55:03.693689 kubelet[2736]: E0509 23:55:03.693645 2736 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 9 23:55:03.772912 kubelet[2736]: I0509 23:55:03.772873 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 23:55:03.772912 kubelet[2736]: I0509 23:55:03.772910 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 23:55:03.773064 kubelet[2736]: I0509 23:55:03.772932 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27d8737f78446032da41d2b65a05ff43-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"27d8737f78446032da41d2b65a05ff43\") " pod="kube-system/kube-apiserver-localhost" May 9 23:55:03.773064 kubelet[2736]: I0509 23:55:03.772951 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27d8737f78446032da41d2b65a05ff43-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"27d8737f78446032da41d2b65a05ff43\") " pod="kube-system/kube-apiserver-localhost" May 9 23:55:03.773064 kubelet[2736]: I0509 23:55:03.772969 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 23:55:03.773064 kubelet[2736]: I0509 23:55:03.773012 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 23:55:03.773064 kubelet[2736]: I0509 23:55:03.773029 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 23:55:03.773185 kubelet[2736]: I0509 23:55:03.773046 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27d8737f78446032da41d2b65a05ff43-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"27d8737f78446032da41d2b65a05ff43\") " pod="kube-system/kube-apiserver-localhost" May 9 23:55:03.773185 kubelet[2736]: I0509 23:55:03.773065 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 23:55:03.994930 kubelet[2736]: E0509 23:55:03.994821 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:03.994930 kubelet[2736]: E0509 23:55:03.994898 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:03.995055 kubelet[2736]: E0509 23:55:03.994989 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:04.559181 kubelet[2736]: I0509 23:55:04.559137 2736 apiserver.go:52] "Watching apiserver" May 9 23:55:04.571922 kubelet[2736]: I0509 23:55:04.571886 2736 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:55:04.602032 kubelet[2736]: E0509 
23:55:04.601932 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:04.719116 kubelet[2736]: E0509 23:55:04.718577 2736 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 9 23:55:04.719814 kubelet[2736]: E0509 23:55:04.719527 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:04.720667 kubelet[2736]: E0509 23:55:04.720277 2736 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 9 23:55:04.722000 kubelet[2736]: E0509 23:55:04.721134 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:04.790437 kubelet[2736]: I0509 23:55:04.790371 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.790351654 podStartE2EDuration="2.790351654s" podCreationTimestamp="2025-05-09 23:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:55:04.720388171 +0000 UTC m=+1.217751312" watchObservedRunningTime="2025-05-09 23:55:04.790351654 +0000 UTC m=+1.287714794" May 9 23:55:04.790574 kubelet[2736]: I0509 23:55:04.790506 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.790501611 podStartE2EDuration="1.790501611s" podCreationTimestamp="2025-05-09 23:55:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:55:04.790292846 +0000 UTC m=+1.287655987" watchObservedRunningTime="2025-05-09 23:55:04.790501611 +0000 UTC m=+1.287864752" May 9 23:55:04.826743 kubelet[2736]: I0509 23:55:04.825793 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8257750320000001 podStartE2EDuration="1.825775032s" podCreationTimestamp="2025-05-09 23:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:55:04.816654443 +0000 UTC m=+1.314017584" watchObservedRunningTime="2025-05-09 23:55:04.825775032 +0000 UTC m=+1.323138173" May 9 23:55:05.099537 sudo[1736]: pam_unix(sudo:session): session closed for user root May 9 23:55:05.101008 sshd[1735]: Connection closed by 10.0.0.1 port 39088 May 9 23:55:05.101336 sshd-session[1729]: pam_unix(sshd:session): session closed for user core May 9 23:55:05.104613 systemd-logind[1556]: Session 5 logged out. Waiting for processes to exit. May 9 23:55:05.104806 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:39088.service: Deactivated successfully. May 9 23:55:05.106597 systemd[1]: session-5.scope: Deactivated successfully. May 9 23:55:05.107046 systemd-logind[1556]: Removed session 5. 
May 9 23:55:05.603880 kubelet[2736]: E0509 23:55:05.603533 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:05.603880 kubelet[2736]: E0509 23:55:05.603627 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:05.604328 kubelet[2736]: E0509 23:55:05.603971 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:07.425432 kubelet[2736]: E0509 23:55:07.425115 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:09.574093 kubelet[2736]: E0509 23:55:09.574061 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:09.611865 kubelet[2736]: E0509 23:55:09.610846 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:14.059607 kubelet[2736]: E0509 23:55:14.059196 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:17.439991 kubelet[2736]: E0509 23:55:17.436633 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:17.624617 kubelet[2736]: E0509 23:55:17.624581 2736 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:18.535645 kubelet[2736]: I0509 23:55:18.535590 2736 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 23:55:18.536055 containerd[1571]: time="2025-05-09T23:55:18.535994221Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 23:55:18.536278 kubelet[2736]: I0509 23:55:18.536182 2736 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 23:55:19.415887 kubelet[2736]: I0509 23:55:19.415744 2736 topology_manager.go:215] "Topology Admit Handler" podUID="1644d227-2837-43d4-afe2-23336badfb1c" podNamespace="kube-system" podName="kube-proxy-jw6jr" May 9 23:55:19.416212 kubelet[2736]: I0509 23:55:19.416116 2736 topology_manager.go:215] "Topology Admit Handler" podUID="69dee42e-50d9-48a1-9b68-77f4cf686897" podNamespace="kube-flannel" podName="kube-flannel-ds-mcj8b" May 9 23:55:19.569203 kubelet[2736]: I0509 23:55:19.569162 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/69dee42e-50d9-48a1-9b68-77f4cf686897-flannel-cfg\") pod \"kube-flannel-ds-mcj8b\" (UID: \"69dee42e-50d9-48a1-9b68-77f4cf686897\") " pod="kube-flannel/kube-flannel-ds-mcj8b" May 9 23:55:19.569570 kubelet[2736]: I0509 23:55:19.569216 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wskft\" (UniqueName: \"kubernetes.io/projected/1644d227-2837-43d4-afe2-23336badfb1c-kube-api-access-wskft\") pod \"kube-proxy-jw6jr\" (UID: \"1644d227-2837-43d4-afe2-23336badfb1c\") " pod="kube-system/kube-proxy-jw6jr" May 9 23:55:19.569570 kubelet[2736]: I0509 23:55:19.569240 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/69dee42e-50d9-48a1-9b68-77f4cf686897-run\") pod \"kube-flannel-ds-mcj8b\" (UID: \"69dee42e-50d9-48a1-9b68-77f4cf686897\") " pod="kube-flannel/kube-flannel-ds-mcj8b" May 9 23:55:19.569570 kubelet[2736]: I0509 23:55:19.569255 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/69dee42e-50d9-48a1-9b68-77f4cf686897-cni\") pod \"kube-flannel-ds-mcj8b\" (UID: \"69dee42e-50d9-48a1-9b68-77f4cf686897\") " pod="kube-flannel/kube-flannel-ds-mcj8b" May 9 23:55:19.569570 kubelet[2736]: I0509 23:55:19.569271 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6phkz\" (UniqueName: \"kubernetes.io/projected/69dee42e-50d9-48a1-9b68-77f4cf686897-kube-api-access-6phkz\") pod \"kube-flannel-ds-mcj8b\" (UID: \"69dee42e-50d9-48a1-9b68-77f4cf686897\") " pod="kube-flannel/kube-flannel-ds-mcj8b" May 9 23:55:19.569570 kubelet[2736]: I0509 23:55:19.569288 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1644d227-2837-43d4-afe2-23336badfb1c-kube-proxy\") pod \"kube-proxy-jw6jr\" (UID: \"1644d227-2837-43d4-afe2-23336badfb1c\") " pod="kube-system/kube-proxy-jw6jr" May 9 23:55:19.569699 kubelet[2736]: I0509 23:55:19.569310 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1644d227-2837-43d4-afe2-23336badfb1c-lib-modules\") pod \"kube-proxy-jw6jr\" (UID: \"1644d227-2837-43d4-afe2-23336badfb1c\") " pod="kube-system/kube-proxy-jw6jr" May 9 23:55:19.569699 kubelet[2736]: I0509 23:55:19.569325 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: 
\"kubernetes.io/host-path/69dee42e-50d9-48a1-9b68-77f4cf686897-cni-plugin\") pod \"kube-flannel-ds-mcj8b\" (UID: \"69dee42e-50d9-48a1-9b68-77f4cf686897\") " pod="kube-flannel/kube-flannel-ds-mcj8b" May 9 23:55:19.569699 kubelet[2736]: I0509 23:55:19.569348 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69dee42e-50d9-48a1-9b68-77f4cf686897-xtables-lock\") pod \"kube-flannel-ds-mcj8b\" (UID: \"69dee42e-50d9-48a1-9b68-77f4cf686897\") " pod="kube-flannel/kube-flannel-ds-mcj8b" May 9 23:55:19.569699 kubelet[2736]: I0509 23:55:19.569371 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1644d227-2837-43d4-afe2-23336badfb1c-xtables-lock\") pod \"kube-proxy-jw6jr\" (UID: \"1644d227-2837-43d4-afe2-23336badfb1c\") " pod="kube-system/kube-proxy-jw6jr" May 9 23:55:19.721835 kubelet[2736]: E0509 23:55:19.721520 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:19.721835 kubelet[2736]: E0509 23:55:19.721568 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:19.722274 containerd[1571]: time="2025-05-09T23:55:19.722177148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jw6jr,Uid:1644d227-2837-43d4-afe2-23336badfb1c,Namespace:kube-system,Attempt:0,}" May 9 23:55:19.722709 containerd[1571]: time="2025-05-09T23:55:19.722673591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mcj8b,Uid:69dee42e-50d9-48a1-9b68-77f4cf686897,Namespace:kube-flannel,Attempt:0,}" May 9 23:55:19.749072 containerd[1571]: time="2025-05-09T23:55:19.748532037Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:55:19.749072 containerd[1571]: time="2025-05-09T23:55:19.749047160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:55:19.749386 containerd[1571]: time="2025-05-09T23:55:19.749242601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:55:19.749443 containerd[1571]: time="2025-05-09T23:55:19.749365562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:55:19.750971 containerd[1571]: time="2025-05-09T23:55:19.750848092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:55:19.750971 containerd[1571]: time="2025-05-09T23:55:19.750901812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:55:19.750971 containerd[1571]: time="2025-05-09T23:55:19.750912572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:55:19.751291 containerd[1571]: time="2025-05-09T23:55:19.751242374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:55:19.787759 containerd[1571]: time="2025-05-09T23:55:19.787670968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jw6jr,Uid:1644d227-2837-43d4-afe2-23336badfb1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d82b238458d9322c19d5a6381195901093f81348ae2973dd26c7bb8373078488\"" May 9 23:55:19.788543 kubelet[2736]: E0509 23:55:19.788518 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:19.792418 containerd[1571]: time="2025-05-09T23:55:19.792380118Z" level=info msg="CreateContainer within sandbox \"d82b238458d9322c19d5a6381195901093f81348ae2973dd26c7bb8373078488\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 23:55:19.804081 containerd[1571]: time="2025-05-09T23:55:19.804037233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mcj8b,Uid:69dee42e-50d9-48a1-9b68-77f4cf686897,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"52f204910aab493d3b1815aa0ca1770773434150ed9183008c81ed7e856ae274\"" May 9 23:55:19.804995 kubelet[2736]: E0509 23:55:19.804842 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:19.805993 containerd[1571]: time="2025-05-09T23:55:19.805953925Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 9 23:55:19.821576 containerd[1571]: time="2025-05-09T23:55:19.821514625Z" level=info msg="CreateContainer within sandbox \"d82b238458d9322c19d5a6381195901093f81348ae2973dd26c7bb8373078488\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ab367a3b64ef6a6a09531b4495c5f2fd78ce7559b18096c8c4e4f8a4b63c19e\"" May 9 23:55:19.822294 containerd[1571]: time="2025-05-09T23:55:19.822274230Z" 
level=info msg="StartContainer for \"0ab367a3b64ef6a6a09531b4495c5f2fd78ce7559b18096c8c4e4f8a4b63c19e\"" May 9 23:55:19.881917 containerd[1571]: time="2025-05-09T23:55:19.880612924Z" level=info msg="StartContainer for \"0ab367a3b64ef6a6a09531b4495c5f2fd78ce7559b18096c8c4e4f8a4b63c19e\" returns successfully" May 9 23:55:20.633050 kubelet[2736]: E0509 23:55:20.632738 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:21.038929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990199939.mount: Deactivated successfully. May 9 23:55:21.078828 containerd[1571]: time="2025-05-09T23:55:21.078776481Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:55:21.079396 containerd[1571]: time="2025-05-09T23:55:21.079346365Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" May 9 23:55:21.080231 containerd[1571]: time="2025-05-09T23:55:21.080200810Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:55:21.082994 containerd[1571]: time="2025-05-09T23:55:21.082576943Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:55:21.083658 containerd[1571]: time="2025-05-09T23:55:21.083625389Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest 
\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.277621144s" May 9 23:55:21.083704 containerd[1571]: time="2025-05-09T23:55:21.083665510Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 9 23:55:21.085888 containerd[1571]: time="2025-05-09T23:55:21.085846642Z" level=info msg="CreateContainer within sandbox \"52f204910aab493d3b1815aa0ca1770773434150ed9183008c81ed7e856ae274\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 9 23:55:21.099032 containerd[1571]: time="2025-05-09T23:55:21.098958438Z" level=info msg="CreateContainer within sandbox \"52f204910aab493d3b1815aa0ca1770773434150ed9183008c81ed7e856ae274\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"5eb040496b5ed6e985857a2be478a4d9e1741f7fb9689f4d8accc0114907cb62\"" May 9 23:55:21.099743 containerd[1571]: time="2025-05-09T23:55:21.099547802Z" level=info msg="StartContainer for \"5eb040496b5ed6e985857a2be478a4d9e1741f7fb9689f4d8accc0114907cb62\"" May 9 23:55:21.119096 update_engine[1558]: I20250509 23:55:21.119030 1558 update_attempter.cc:509] Updating boot flags... 
May 9 23:55:21.169701 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3078) May 9 23:55:21.175110 containerd[1571]: time="2025-05-09T23:55:21.173903113Z" level=info msg="StartContainer for \"5eb040496b5ed6e985857a2be478a4d9e1741f7fb9689f4d8accc0114907cb62\" returns successfully" May 9 23:55:21.194020 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3080) May 9 23:55:21.225474 containerd[1571]: time="2025-05-09T23:55:21.225397291Z" level=info msg="shim disconnected" id=5eb040496b5ed6e985857a2be478a4d9e1741f7fb9689f4d8accc0114907cb62 namespace=k8s.io May 9 23:55:21.225474 containerd[1571]: time="2025-05-09T23:55:21.225473331Z" level=warning msg="cleaning up after shim disconnected" id=5eb040496b5ed6e985857a2be478a4d9e1741f7fb9689f4d8accc0114907cb62 namespace=k8s.io May 9 23:55:21.225474 containerd[1571]: time="2025-05-09T23:55:21.225482291Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:55:21.236016 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3080) May 9 23:55:21.636486 kubelet[2736]: E0509 23:55:21.636427 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:21.639107 containerd[1571]: time="2025-05-09T23:55:21.639074288Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 9 23:55:21.648421 kubelet[2736]: I0509 23:55:21.648330 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jw6jr" podStartSLOduration=2.648212901 podStartE2EDuration="2.648212901s" podCreationTimestamp="2025-05-09 23:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:55:20.641350079 +0000 UTC m=+17.138713220" 
watchObservedRunningTime="2025-05-09 23:55:21.648212901 +0000 UTC m=+18.145576042" May 9 23:55:22.810877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450217770.mount: Deactivated successfully. May 9 23:55:23.343321 containerd[1571]: time="2025-05-09T23:55:23.343262777Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:55:23.344334 containerd[1571]: time="2025-05-09T23:55:23.344294622Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" May 9 23:55:23.345708 containerd[1571]: time="2025-05-09T23:55:23.345203467Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:55:23.348323 containerd[1571]: time="2025-05-09T23:55:23.348283123Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:55:23.350203 containerd[1571]: time="2025-05-09T23:55:23.350155573Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.711040725s" May 9 23:55:23.350203 containerd[1571]: time="2025-05-09T23:55:23.350200773Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 9 23:55:23.365186 containerd[1571]: time="2025-05-09T23:55:23.365140531Z" level=info msg="CreateContainer within sandbox 
\"52f204910aab493d3b1815aa0ca1770773434150ed9183008c81ed7e856ae274\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 23:55:23.399118 containerd[1571]: time="2025-05-09T23:55:23.398332786Z" level=info msg="CreateContainer within sandbox \"52f204910aab493d3b1815aa0ca1770773434150ed9183008c81ed7e856ae274\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a6a9b4cff436cdccd07a9160b49ab063deaedc660fcea0d7f2f2f523f56c551b\"" May 9 23:55:23.399722 containerd[1571]: time="2025-05-09T23:55:23.399694273Z" level=info msg="StartContainer for \"a6a9b4cff436cdccd07a9160b49ab063deaedc660fcea0d7f2f2f523f56c551b\"" May 9 23:55:23.485036 containerd[1571]: time="2025-05-09T23:55:23.484957320Z" level=info msg="StartContainer for \"a6a9b4cff436cdccd07a9160b49ab063deaedc660fcea0d7f2f2f523f56c551b\" returns successfully" May 9 23:55:23.506621 containerd[1571]: time="2025-05-09T23:55:23.506547474Z" level=info msg="shim disconnected" id=a6a9b4cff436cdccd07a9160b49ab063deaedc660fcea0d7f2f2f523f56c551b namespace=k8s.io May 9 23:55:23.506963 containerd[1571]: time="2025-05-09T23:55:23.506806755Z" level=warning msg="cleaning up after shim disconnected" id=a6a9b4cff436cdccd07a9160b49ab063deaedc660fcea0d7f2f2f523f56c551b namespace=k8s.io May 9 23:55:23.506963 containerd[1571]: time="2025-05-09T23:55:23.506822075Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:55:23.518015 containerd[1571]: time="2025-05-09T23:55:23.517209690Z" level=warning msg="cleanup warnings time=\"2025-05-09T23:55:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 23:55:23.557577 kubelet[2736]: I0509 23:55:23.557534 2736 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 23:55:23.589691 kubelet[2736]: I0509 23:55:23.589656 2736 topology_manager.go:215] "Topology Admit Handler" 
podUID="7861de09-afb3-4175-a386-6ee1384fd32d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vgcr4" May 9 23:55:23.590179 kubelet[2736]: I0509 23:55:23.589969 2736 topology_manager.go:215] "Topology Admit Handler" podUID="711b69a0-de24-422a-ac6b-a45d0f8f47f1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pnsll" May 9 23:55:23.606518 kubelet[2736]: I0509 23:55:23.606025 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7861de09-afb3-4175-a386-6ee1384fd32d-config-volume\") pod \"coredns-7db6d8ff4d-vgcr4\" (UID: \"7861de09-afb3-4175-a386-6ee1384fd32d\") " pod="kube-system/coredns-7db6d8ff4d-vgcr4" May 9 23:55:23.606780 kubelet[2736]: I0509 23:55:23.606694 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/711b69a0-de24-422a-ac6b-a45d0f8f47f1-config-volume\") pod \"coredns-7db6d8ff4d-pnsll\" (UID: \"711b69a0-de24-422a-ac6b-a45d0f8f47f1\") " pod="kube-system/coredns-7db6d8ff4d-pnsll" May 9 23:55:23.606910 kubelet[2736]: I0509 23:55:23.606879 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wmc\" (UniqueName: \"kubernetes.io/projected/7861de09-afb3-4175-a386-6ee1384fd32d-kube-api-access-55wmc\") pod \"coredns-7db6d8ff4d-vgcr4\" (UID: \"7861de09-afb3-4175-a386-6ee1384fd32d\") " pod="kube-system/coredns-7db6d8ff4d-vgcr4" May 9 23:55:23.607071 kubelet[2736]: I0509 23:55:23.607035 2736 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv6fl\" (UniqueName: \"kubernetes.io/projected/711b69a0-de24-422a-ac6b-a45d0f8f47f1-kube-api-access-kv6fl\") pod \"coredns-7db6d8ff4d-pnsll\" (UID: \"711b69a0-de24-422a-ac6b-a45d0f8f47f1\") " pod="kube-system/coredns-7db6d8ff4d-pnsll" May 9 23:55:23.641439 kubelet[2736]: E0509 
23:55:23.641407 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:23.643735 containerd[1571]: time="2025-05-09T23:55:23.643695394Z" level=info msg="CreateContainer within sandbox \"52f204910aab493d3b1815aa0ca1770773434150ed9183008c81ed7e856ae274\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 9 23:55:23.664187 containerd[1571]: time="2025-05-09T23:55:23.664133181Z" level=info msg="CreateContainer within sandbox \"52f204910aab493d3b1815aa0ca1770773434150ed9183008c81ed7e856ae274\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"13539b2943fde261653974835d7467f53c0a1bb91abc464c39d3a4e84a813c6f\"" May 9 23:55:23.664696 containerd[1571]: time="2025-05-09T23:55:23.664665184Z" level=info msg="StartContainer for \"13539b2943fde261653974835d7467f53c0a1bb91abc464c39d3a4e84a813c6f\"" May 9 23:55:23.721240 containerd[1571]: time="2025-05-09T23:55:23.719560472Z" level=info msg="StartContainer for \"13539b2943fde261653974835d7467f53c0a1bb91abc464c39d3a4e84a813c6f\" returns successfully" May 9 23:55:23.754129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6a9b4cff436cdccd07a9160b49ab063deaedc660fcea0d7f2f2f523f56c551b-rootfs.mount: Deactivated successfully. 
May 9 23:55:23.893964 kubelet[2736]: E0509 23:55:23.893808 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:23.895662 containerd[1571]: time="2025-05-09T23:55:23.895163394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vgcr4,Uid:7861de09-afb3-4175-a386-6ee1384fd32d,Namespace:kube-system,Attempt:0,}" May 9 23:55:23.896460 kubelet[2736]: E0509 23:55:23.896421 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:23.897111 containerd[1571]: time="2025-05-09T23:55:23.896997324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnsll,Uid:711b69a0-de24-422a-ac6b-a45d0f8f47f1,Namespace:kube-system,Attempt:0,}" May 9 23:55:23.993834 containerd[1571]: time="2025-05-09T23:55:23.993766912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnsll,Uid:711b69a0-de24-422a-ac6b-a45d0f8f47f1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"912a2bd700371d38aea488fa8ca24f9df8289843da53b5db3eec7e13c7df22e8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 23:55:23.994516 kubelet[2736]: E0509 23:55:23.994115 2736 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912a2bd700371d38aea488fa8ca24f9df8289843da53b5db3eec7e13c7df22e8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 23:55:23.994516 kubelet[2736]: E0509 23:55:23.994200 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"912a2bd700371d38aea488fa8ca24f9df8289843da53b5db3eec7e13c7df22e8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-pnsll" May 9 23:55:23.994516 kubelet[2736]: E0509 23:55:23.994221 2736 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912a2bd700371d38aea488fa8ca24f9df8289843da53b5db3eec7e13c7df22e8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-pnsll" May 9 23:55:23.994516 kubelet[2736]: E0509 23:55:23.994266 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pnsll_kube-system(711b69a0-de24-422a-ac6b-a45d0f8f47f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pnsll_kube-system(711b69a0-de24-422a-ac6b-a45d0f8f47f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"912a2bd700371d38aea488fa8ca24f9df8289843da53b5db3eec7e13c7df22e8\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-pnsll" podUID="711b69a0-de24-422a-ac6b-a45d0f8f47f1" May 9 23:55:23.996080 kubelet[2736]: E0509 23:55:23.994894 2736 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178ec7afed5a2d915879bda665a9e10ebc2152d34c77ca48883fb09006162146\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 23:55:23.996080 kubelet[2736]: E0509 23:55:23.994935 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"178ec7afed5a2d915879bda665a9e10ebc2152d34c77ca48883fb09006162146\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-vgcr4" May 9 23:55:23.996080 kubelet[2736]: E0509 23:55:23.994950 2736 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178ec7afed5a2d915879bda665a9e10ebc2152d34c77ca48883fb09006162146\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-vgcr4" May 9 23:55:23.996080 kubelet[2736]: E0509 23:55:23.995014 2736 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vgcr4_kube-system(7861de09-afb3-4175-a386-6ee1384fd32d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vgcr4_kube-system(7861de09-afb3-4175-a386-6ee1384fd32d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"178ec7afed5a2d915879bda665a9e10ebc2152d34c77ca48883fb09006162146\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-vgcr4" podUID="7861de09-afb3-4175-a386-6ee1384fd32d" May 9 23:55:23.996255 containerd[1571]: time="2025-05-09T23:55:23.994695757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vgcr4,Uid:7861de09-afb3-4175-a386-6ee1384fd32d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"178ec7afed5a2d915879bda665a9e10ebc2152d34c77ca48883fb09006162146\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 23:55:24.645051 
kubelet[2736]: E0509 23:55:24.645010 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:24.746137 systemd[1]: run-netns-cni\x2d91356428\x2d028a\x2d1649\x2d72a5\x2d1c2d23cae2e0.mount: Deactivated successfully. May 9 23:55:24.746302 systemd[1]: run-netns-cni\x2d1f45f55f\x2d3951\x2de40c\x2de256\x2dc0523aed9467.mount: Deactivated successfully. May 9 23:55:24.746396 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-912a2bd700371d38aea488fa8ca24f9df8289843da53b5db3eec7e13c7df22e8-shm.mount: Deactivated successfully. May 9 23:55:24.746486 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-178ec7afed5a2d915879bda665a9e10ebc2152d34c77ca48883fb09006162146-shm.mount: Deactivated successfully. May 9 23:55:24.804738 systemd-networkd[1230]: flannel.1: Link UP May 9 23:55:24.804743 systemd-networkd[1230]: flannel.1: Gained carrier May 9 23:55:25.646983 kubelet[2736]: E0509 23:55:25.646938 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:26.246203 systemd-networkd[1230]: flannel.1: Gained IPv6LL May 9 23:55:28.219248 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:36742.service - OpenSSH per-connection server daemon (10.0.0.1:36742). May 9 23:55:28.261734 sshd[3380]: Accepted publickey for core from 10.0.0.1 port 36742 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:28.263310 sshd-session[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:28.267791 systemd-logind[1556]: New session 6 of user core. May 9 23:55:28.275299 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 9 23:55:28.387937 sshd[3383]: Connection closed by 10.0.0.1 port 36742 May 9 23:55:28.388277 sshd-session[3380]: pam_unix(sshd:session): session closed for user core May 9 23:55:28.391488 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:36742.service: Deactivated successfully. May 9 23:55:28.393555 systemd-logind[1556]: Session 6 logged out. Waiting for processes to exit. May 9 23:55:28.393728 systemd[1]: session-6.scope: Deactivated successfully. May 9 23:55:28.394576 systemd-logind[1556]: Removed session 6. May 9 23:55:33.408275 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:40064.service - OpenSSH per-connection server daemon (10.0.0.1:40064). May 9 23:55:33.448793 sshd[3418]: Accepted publickey for core from 10.0.0.1 port 40064 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:33.450210 sshd-session[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:33.456427 systemd-logind[1556]: New session 7 of user core. May 9 23:55:33.468326 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 23:55:33.594418 sshd[3421]: Connection closed by 10.0.0.1 port 40064 May 9 23:55:33.595030 sshd-session[3418]: pam_unix(sshd:session): session closed for user core May 9 23:55:33.599509 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:40064.service: Deactivated successfully. May 9 23:55:33.602041 systemd-logind[1556]: Session 7 logged out. Waiting for processes to exit. May 9 23:55:33.602106 systemd[1]: session-7.scope: Deactivated successfully. May 9 23:55:33.603213 systemd-logind[1556]: Removed session 7. 
May 9 23:55:34.587711 kubelet[2736]: E0509 23:55:34.587664 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:34.589504 containerd[1571]: time="2025-05-09T23:55:34.589424518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vgcr4,Uid:7861de09-afb3-4175-a386-6ee1384fd32d,Namespace:kube-system,Attempt:0,}" May 9 23:55:34.652661 systemd-networkd[1230]: cni0: Link UP May 9 23:55:34.652668 systemd-networkd[1230]: cni0: Gained carrier May 9 23:55:34.656172 systemd-networkd[1230]: cni0: Lost carrier May 9 23:55:34.657800 systemd-networkd[1230]: veth0d6c7939: Link UP May 9 23:55:34.666419 kernel: cni0: port 1(veth0d6c7939) entered blocking state May 9 23:55:34.666518 kernel: cni0: port 1(veth0d6c7939) entered disabled state May 9 23:55:34.666535 kernel: veth0d6c7939: entered allmulticast mode May 9 23:55:34.666547 kernel: veth0d6c7939: entered promiscuous mode May 9 23:55:34.668581 kernel: cni0: port 1(veth0d6c7939) entered blocking state May 9 23:55:34.668636 kernel: cni0: port 1(veth0d6c7939) entered forwarding state May 9 23:55:34.672401 kernel: cni0: port 1(veth0d6c7939) entered disabled state May 9 23:55:34.687024 kernel: cni0: port 1(veth0d6c7939) entered blocking state May 9 23:55:34.687099 kernel: cni0: port 1(veth0d6c7939) entered forwarding state May 9 23:55:34.687039 systemd-networkd[1230]: veth0d6c7939: Gained carrier May 9 23:55:34.687288 systemd-networkd[1230]: cni0: Gained carrier May 9 23:55:34.691160 containerd[1571]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, 
"type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} May 9 23:55:34.691160 containerd[1571]: delegateAdd: netconf sent to delegate plugin: May 9 23:55:34.712581 containerd[1571]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-09T23:55:34.712076836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:55:34.712581 containerd[1571]: time="2025-05-09T23:55:34.712544677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:55:34.712581 containerd[1571]: time="2025-05-09T23:55:34.712558717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:55:34.713674 containerd[1571]: time="2025-05-09T23:55:34.712652438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:55:34.738992 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:55:34.759811 containerd[1571]: time="2025-05-09T23:55:34.759662950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vgcr4,Uid:7861de09-afb3-4175-a386-6ee1384fd32d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc61735dd7e028c1d3fd37f3868107ce7c1b15999bf24768eca8a23591238c0b\"" May 9 23:55:34.760843 kubelet[2736]: E0509 23:55:34.760795 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:34.764643 containerd[1571]: time="2025-05-09T23:55:34.764507486Z" level=info msg="CreateContainer within sandbox \"dc61735dd7e028c1d3fd37f3868107ce7c1b15999bf24768eca8a23591238c0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 23:55:34.792289 containerd[1571]: time="2025-05-09T23:55:34.792228776Z" level=info msg="CreateContainer within sandbox \"dc61735dd7e028c1d3fd37f3868107ce7c1b15999bf24768eca8a23591238c0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45195f1b16ad3632fb96294aabb4a1fa5838b9332f2feb7ca8126642226fa244\"" May 9 23:55:34.794440 containerd[1571]: time="2025-05-09T23:55:34.794193582Z" level=info msg="StartContainer for \"45195f1b16ad3632fb96294aabb4a1fa5838b9332f2feb7ca8126642226fa244\"" May 9 23:55:34.854406 containerd[1571]: time="2025-05-09T23:55:34.854183937Z" level=info msg="StartContainer for \"45195f1b16ad3632fb96294aabb4a1fa5838b9332f2feb7ca8126642226fa244\" returns successfully" May 9 23:55:35.666708 kubelet[2736]: E0509 23:55:35.666670 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:35.677798 
kubelet[2736]: I0509 23:55:35.677472 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vgcr4" podStartSLOduration=16.677454004 podStartE2EDuration="16.677454004s" podCreationTimestamp="2025-05-09 23:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:55:35.67635492 +0000 UTC m=+32.173718061" watchObservedRunningTime="2025-05-09 23:55:35.677454004 +0000 UTC m=+32.174817145" May 9 23:55:35.677964 kubelet[2736]: I0509 23:55:35.677814 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-mcj8b" podStartSLOduration=13.128674251 podStartE2EDuration="16.677806925s" podCreationTimestamp="2025-05-09 23:55:19 +0000 UTC" firstStartedPulling="2025-05-09 23:55:19.805560443 +0000 UTC m=+16.302923584" lastFinishedPulling="2025-05-09 23:55:23.354693117 +0000 UTC m=+19.852056258" observedRunningTime="2025-05-09 23:55:24.656659591 +0000 UTC m=+21.154022732" watchObservedRunningTime="2025-05-09 23:55:35.677806925 +0000 UTC m=+32.175170066" May 9 23:55:35.913552 systemd-networkd[1230]: cni0: Gained IPv6LL May 9 23:55:36.230099 systemd-networkd[1230]: veth0d6c7939: Gained IPv6LL May 9 23:55:36.588013 kubelet[2736]: E0509 23:55:36.587751 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:36.588278 containerd[1571]: time="2025-05-09T23:55:36.588154459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnsll,Uid:711b69a0-de24-422a-ac6b-a45d0f8f47f1,Namespace:kube-system,Attempt:0,}" May 9 23:55:36.609306 systemd-networkd[1230]: vethf92666b3: Link UP May 9 23:55:36.611168 kernel: cni0: port 2(vethf92666b3) entered blocking state May 9 23:55:36.611233 kernel: cni0: port 2(vethf92666b3) entered disabled state May 9 
23:55:36.611256 kernel: vethf92666b3: entered allmulticast mode May 9 23:55:36.612022 kernel: vethf92666b3: entered promiscuous mode May 9 23:55:36.618899 systemd-networkd[1230]: vethf92666b3: Gained carrier May 9 23:55:36.619046 kernel: cni0: port 2(vethf92666b3) entered blocking state May 9 23:55:36.619077 kernel: cni0: port 2(vethf92666b3) entered forwarding state May 9 23:55:36.622507 containerd[1571]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} May 9 23:55:36.622507 containerd[1571]: delegateAdd: netconf sent to delegate plugin: May 9 23:55:36.638887 containerd[1571]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-09T23:55:36.638776212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:55:36.638887 containerd[1571]: time="2025-05-09T23:55:36.638854892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:55:36.639185 containerd[1571]: time="2025-05-09T23:55:36.639129733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:55:36.639824 containerd[1571]: time="2025-05-09T23:55:36.639772095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:55:36.661713 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:55:36.670108 kubelet[2736]: E0509 23:55:36.670083 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:36.682626 containerd[1571]: time="2025-05-09T23:55:36.682573383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnsll,Uid:711b69a0-de24-422a-ac6b-a45d0f8f47f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f1995cb4525b55d06f881fc306ee896d97006db6db8a9b33da976f9569f74ae\"" May 9 23:55:36.683454 kubelet[2736]: E0509 23:55:36.683417 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:36.687706 containerd[1571]: time="2025-05-09T23:55:36.687655719Z" level=info msg="CreateContainer within sandbox \"9f1995cb4525b55d06f881fc306ee896d97006db6db8a9b33da976f9569f74ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 23:55:36.707653 containerd[1571]: time="2025-05-09T23:55:36.707534178Z" level=info msg="CreateContainer within sandbox \"9f1995cb4525b55d06f881fc306ee896d97006db6db8a9b33da976f9569f74ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a1b28ed108acf88003bc23cb914ad66ec3aed48fa79c5e1d2679c17541462ee6\"" May 9 23:55:36.708369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629039408.mount: Deactivated successfully. 
May 9 23:55:36.709397 containerd[1571]: time="2025-05-09T23:55:36.709358504Z" level=info msg="StartContainer for \"a1b28ed108acf88003bc23cb914ad66ec3aed48fa79c5e1d2679c17541462ee6\"" May 9 23:55:36.727920 kernel: hrtimer: interrupt took 1563445 ns May 9 23:55:36.761445 containerd[1571]: time="2025-05-09T23:55:36.761386300Z" level=info msg="StartContainer for \"a1b28ed108acf88003bc23cb914ad66ec3aed48fa79c5e1d2679c17541462ee6\" returns successfully" May 9 23:55:37.674837 kubelet[2736]: E0509 23:55:37.673685 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:37.684765 kubelet[2736]: I0509 23:55:37.684531 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pnsll" podStartSLOduration=18.684514284 podStartE2EDuration="18.684514284s" podCreationTimestamp="2025-05-09 23:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:55:37.684142122 +0000 UTC m=+34.181505263" watchObservedRunningTime="2025-05-09 23:55:37.684514284 +0000 UTC m=+34.181877385" May 9 23:55:38.150138 systemd-networkd[1230]: vethf92666b3: Gained IPv6LL May 9 23:55:38.609259 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:40074.service - OpenSSH per-connection server daemon (10.0.0.1:40074). May 9 23:55:38.654772 sshd[3693]: Accepted publickey for core from 10.0.0.1 port 40074 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:38.654047 sshd-session[3693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:38.659914 systemd-logind[1556]: New session 8 of user core. May 9 23:55:38.666237 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 9 23:55:38.675594 kubelet[2736]: E0509 23:55:38.675562 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:38.780707 sshd[3696]: Connection closed by 10.0.0.1 port 40074 May 9 23:55:38.781067 sshd-session[3693]: pam_unix(sshd:session): session closed for user core May 9 23:55:38.791227 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:40086.service - OpenSSH per-connection server daemon (10.0.0.1:40086). May 9 23:55:38.791626 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:40074.service: Deactivated successfully. May 9 23:55:38.795101 systemd[1]: session-8.scope: Deactivated successfully. May 9 23:55:38.795145 systemd-logind[1556]: Session 8 logged out. Waiting for processes to exit. May 9 23:55:38.796596 systemd-logind[1556]: Removed session 8. May 9 23:55:38.830142 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 40086 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:38.831522 sshd-session[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:38.835521 systemd-logind[1556]: New session 9 of user core. May 9 23:55:38.852276 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 23:55:38.997719 sshd[3712]: Connection closed by 10.0.0.1 port 40086 May 9 23:55:38.998887 sshd-session[3706]: pam_unix(sshd:session): session closed for user core May 9 23:55:39.009305 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:40102.service - OpenSSH per-connection server daemon (10.0.0.1:40102). May 9 23:55:39.009899 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:40086.service: Deactivated successfully. May 9 23:55:39.018547 systemd[1]: session-9.scope: Deactivated successfully. May 9 23:55:39.021645 systemd-logind[1556]: Session 9 logged out. Waiting for processes to exit. May 9 23:55:39.024823 systemd-logind[1556]: Removed session 9. 
May 9 23:55:39.058704 sshd[3719]: Accepted publickey for core from 10.0.0.1 port 40102 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:39.060209 sshd-session[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:39.065079 systemd-logind[1556]: New session 10 of user core. May 9 23:55:39.077284 systemd[1]: Started session-10.scope - Session 10 of User core. May 9 23:55:39.187006 sshd[3725]: Connection closed by 10.0.0.1 port 40102 May 9 23:55:39.187360 sshd-session[3719]: pam_unix(sshd:session): session closed for user core May 9 23:55:39.190864 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:40102.service: Deactivated successfully. May 9 23:55:39.193073 systemd-logind[1556]: Session 10 logged out. Waiting for processes to exit. May 9 23:55:39.193167 systemd[1]: session-10.scope: Deactivated successfully. May 9 23:55:39.195226 systemd-logind[1556]: Removed session 10. May 9 23:55:39.677185 kubelet[2736]: E0509 23:55:39.677157 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:55:44.202255 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:37960.service - OpenSSH per-connection server daemon (10.0.0.1:37960). May 9 23:55:44.245131 sshd[3759]: Accepted publickey for core from 10.0.0.1 port 37960 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:44.248856 sshd-session[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:44.256299 systemd-logind[1556]: New session 11 of user core. May 9 23:55:44.264362 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 9 23:55:44.373276 sshd[3762]: Connection closed by 10.0.0.1 port 37960 May 9 23:55:44.373838 sshd-session[3759]: pam_unix(sshd:session): session closed for user core May 9 23:55:44.387377 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:37962.service - OpenSSH per-connection server daemon (10.0.0.1:37962). May 9 23:55:44.387803 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:37960.service: Deactivated successfully. May 9 23:55:44.390788 systemd[1]: session-11.scope: Deactivated successfully. May 9 23:55:44.391393 systemd-logind[1556]: Session 11 logged out. Waiting for processes to exit. May 9 23:55:44.392576 systemd-logind[1556]: Removed session 11. May 9 23:55:44.429322 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 37962 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:44.430595 sshd-session[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:44.435048 systemd-logind[1556]: New session 12 of user core. May 9 23:55:44.447371 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 23:55:44.688346 sshd[3777]: Connection closed by 10.0.0.1 port 37962 May 9 23:55:44.688819 sshd-session[3771]: pam_unix(sshd:session): session closed for user core May 9 23:55:44.697345 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:37966.service - OpenSSH per-connection server daemon (10.0.0.1:37966). May 9 23:55:44.697778 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:37962.service: Deactivated successfully. May 9 23:55:44.699709 systemd[1]: session-12.scope: Deactivated successfully. May 9 23:55:44.703194 systemd-logind[1556]: Session 12 logged out. Waiting for processes to exit. May 9 23:55:44.704071 systemd-logind[1556]: Removed session 12. 
May 9 23:55:44.737400 sshd[3785]: Accepted publickey for core from 10.0.0.1 port 37966 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:44.738799 sshd-session[3785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:44.742819 systemd-logind[1556]: New session 13 of user core. May 9 23:55:44.751223 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 23:55:45.972549 sshd[3791]: Connection closed by 10.0.0.1 port 37966 May 9 23:55:45.973085 sshd-session[3785]: pam_unix(sshd:session): session closed for user core May 9 23:55:45.980076 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:37970.service - OpenSSH per-connection server daemon (10.0.0.1:37970). May 9 23:55:45.982031 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:37966.service: Deactivated successfully. May 9 23:55:45.985145 systemd[1]: session-13.scope: Deactivated successfully. May 9 23:55:45.985475 systemd-logind[1556]: Session 13 logged out. Waiting for processes to exit. May 9 23:55:45.990240 systemd-logind[1556]: Removed session 13. May 9 23:55:46.032946 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 37970 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:46.034255 sshd-session[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:46.039066 systemd-logind[1556]: New session 14 of user core. May 9 23:55:46.054670 systemd[1]: Started session-14.scope - Session 14 of User core. May 9 23:55:46.260997 sshd[3832]: Connection closed by 10.0.0.1 port 37970 May 9 23:55:46.261521 sshd-session[3826]: pam_unix(sshd:session): session closed for user core May 9 23:55:46.269261 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:37982.service - OpenSSH per-connection server daemon (10.0.0.1:37982). May 9 23:55:46.269665 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:37970.service: Deactivated successfully. 
May 9 23:55:46.272697 systemd[1]: session-14.scope: Deactivated successfully. May 9 23:55:46.273030 systemd-logind[1556]: Session 14 logged out. Waiting for processes to exit. May 9 23:55:46.275419 systemd-logind[1556]: Removed session 14. May 9 23:55:46.311094 sshd[3839]: Accepted publickey for core from 10.0.0.1 port 37982 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:46.312416 sshd-session[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:46.316331 systemd-logind[1556]: New session 15 of user core. May 9 23:55:46.324309 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 23:55:46.430735 sshd[3845]: Connection closed by 10.0.0.1 port 37982 May 9 23:55:46.431241 sshd-session[3839]: pam_unix(sshd:session): session closed for user core May 9 23:55:46.434966 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:37982.service: Deactivated successfully. May 9 23:55:46.438734 systemd[1]: session-15.scope: Deactivated successfully. May 9 23:55:46.439476 systemd-logind[1556]: Session 15 logged out. Waiting for processes to exit. May 9 23:55:46.440382 systemd-logind[1556]: Removed session 15. May 9 23:55:51.451269 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:37998.service - OpenSSH per-connection server daemon (10.0.0.1:37998). May 9 23:55:51.497760 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 37998 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:51.498289 sshd-session[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:51.502817 systemd-logind[1556]: New session 16 of user core. May 9 23:55:51.519584 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 9 23:55:51.630728 sshd[3886]: Connection closed by 10.0.0.1 port 37998 May 9 23:55:51.631091 sshd-session[3883]: pam_unix(sshd:session): session closed for user core May 9 23:55:51.634722 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:37998.service: Deactivated successfully. May 9 23:55:51.637731 systemd[1]: session-16.scope: Deactivated successfully. May 9 23:55:51.641397 systemd-logind[1556]: Session 16 logged out. Waiting for processes to exit. May 9 23:55:51.645698 systemd-logind[1556]: Removed session 16. May 9 23:55:56.643241 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:34078.service - OpenSSH per-connection server daemon (10.0.0.1:34078). May 9 23:55:56.681602 sshd[3919]: Accepted publickey for core from 10.0.0.1 port 34078 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:55:56.682910 sshd-session[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:55:56.686544 systemd-logind[1556]: New session 17 of user core. May 9 23:55:56.692246 systemd[1]: Started session-17.scope - Session 17 of User core. May 9 23:55:56.801953 sshd[3922]: Connection closed by 10.0.0.1 port 34078 May 9 23:55:56.802458 sshd-session[3919]: pam_unix(sshd:session): session closed for user core May 9 23:55:56.806050 systemd-logind[1556]: Session 17 logged out. Waiting for processes to exit. May 9 23:55:56.806212 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:34078.service: Deactivated successfully. May 9 23:55:56.809514 systemd[1]: session-17.scope: Deactivated successfully. May 9 23:55:56.814475 systemd-logind[1556]: Removed session 17. May 9 23:56:01.813414 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:34088.service - OpenSSH per-connection server daemon (10.0.0.1:34088). 
May 9 23:56:01.853769 sshd[3957]: Accepted publickey for core from 10.0.0.1 port 34088 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:56:01.855148 sshd-session[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:56:01.859785 systemd-logind[1556]: New session 18 of user core. May 9 23:56:01.870288 systemd[1]: Started session-18.scope - Session 18 of User core. May 9 23:56:01.998641 sshd[3960]: Connection closed by 10.0.0.1 port 34088 May 9 23:56:01.999213 sshd-session[3957]: pam_unix(sshd:session): session closed for user core May 9 23:56:02.005869 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:34088.service: Deactivated successfully. May 9 23:56:02.008123 systemd[1]: session-18.scope: Deactivated successfully. May 9 23:56:02.008841 systemd-logind[1556]: Session 18 logged out. Waiting for processes to exit. May 9 23:56:02.009750 systemd-logind[1556]: Removed session 18.