May 13 00:30:20.913561 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 00:30:20.913581 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon May 12 22:51:32 -00 2025 May 13 00:30:20.913591 kernel: KASLR enabled May 13 00:30:20.913597 kernel: efi: EFI v2.7 by EDK II May 13 00:30:20.913603 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 May 13 00:30:20.913609 kernel: random: crng init done May 13 00:30:20.913617 kernel: ACPI: Early table checksum verification disabled May 13 00:30:20.913623 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) May 13 00:30:20.913630 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) May 13 00:30:20.913637 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:30:20.913644 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:30:20.913650 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:30:20.913656 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:30:20.913663 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:30:20.913671 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:30:20.913678 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:30:20.913685 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:30:20.913692 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:30:20.913699 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 13 00:30:20.913705 kernel: NUMA: Failed to 
initialise from firmware May 13 00:30:20.913712 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:30:20.913719 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] May 13 00:30:20.913726 kernel: Zone ranges: May 13 00:30:20.913732 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:30:20.913739 kernel: DMA32 empty May 13 00:30:20.913746 kernel: Normal empty May 13 00:30:20.913753 kernel: Movable zone start for each node May 13 00:30:20.913759 kernel: Early memory node ranges May 13 00:30:20.913766 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 13 00:30:20.913773 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 13 00:30:20.913780 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 13 00:30:20.913786 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 13 00:30:20.913793 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 13 00:30:20.913799 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 13 00:30:20.913806 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 13 00:30:20.913813 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:30:20.913819 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 13 00:30:20.913827 kernel: psci: probing for conduit method from ACPI. May 13 00:30:20.913863 kernel: psci: PSCIv1.1 detected in firmware. 
May 13 00:30:20.913877 kernel: psci: Using standard PSCI v0.2 function IDs May 13 00:30:20.913888 kernel: psci: Trusted OS migration not required May 13 00:30:20.913895 kernel: psci: SMC Calling Convention v1.1 May 13 00:30:20.913902 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 13 00:30:20.913910 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 13 00:30:20.913918 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 13 00:30:20.913925 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 13 00:30:20.913932 kernel: Detected PIPT I-cache on CPU0 May 13 00:30:20.913939 kernel: CPU features: detected: GIC system register CPU interface May 13 00:30:20.913946 kernel: CPU features: detected: Hardware dirty bit management May 13 00:30:20.913953 kernel: CPU features: detected: Spectre-v4 May 13 00:30:20.913960 kernel: CPU features: detected: Spectre-BHB May 13 00:30:20.913967 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 00:30:20.913975 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 00:30:20.913983 kernel: CPU features: detected: ARM erratum 1418040 May 13 00:30:20.913990 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 00:30:20.913997 kernel: alternatives: applying boot alternatives May 13 00:30:20.914005 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0 May 13 00:30:20.914013 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 13 00:30:20.914020 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 00:30:20.914027 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:30:20.914034 kernel: Fallback order for Node 0: 0 May 13 00:30:20.914041 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 13 00:30:20.914048 kernel: Policy zone: DMA May 13 00:30:20.914056 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:30:20.914064 kernel: software IO TLB: area num 4. May 13 00:30:20.914071 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 13 00:30:20.914079 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved) May 13 00:30:20.914086 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 00:30:20.914096 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 00:30:20.914104 kernel: rcu: RCU event tracing is enabled. May 13 00:30:20.914111 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 00:30:20.914118 kernel: Trampoline variant of Tasks RCU enabled. May 13 00:30:20.914126 kernel: Tracing variant of Tasks RCU enabled. May 13 00:30:20.914133 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 00:30:20.914140 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 00:30:20.914147 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 00:30:20.914155 kernel: GICv3: 256 SPIs implemented May 13 00:30:20.914162 kernel: GICv3: 0 Extended SPIs implemented May 13 00:30:20.914169 kernel: Root IRQ handler: gic_handle_irq May 13 00:30:20.914177 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 13 00:30:20.914184 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 13 00:30:20.914191 kernel: ITS [mem 0x08080000-0x0809ffff] May 13 00:30:20.914198 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 13 00:30:20.914206 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 13 00:30:20.914213 kernel: GICv3: using LPI property table @0x00000000400f0000 May 13 00:30:20.914220 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 13 00:30:20.914227 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 00:30:20.914235 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:30:20.914242 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 00:30:20.914250 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 00:30:20.914257 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 00:30:20.914264 kernel: arm-pv: using stolen time PV May 13 00:30:20.914272 kernel: Console: colour dummy device 80x25 May 13 00:30:20.914279 kernel: ACPI: Core revision 20230628 May 13 00:30:20.914286 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) May 13 00:30:20.914294 kernel: pid_max: default: 32768 minimum: 301 May 13 00:30:20.914301 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 00:30:20.914309 kernel: landlock: Up and running. May 13 00:30:20.914316 kernel: SELinux: Initializing. May 13 00:30:20.914324 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:30:20.914331 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:30:20.914338 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 13 00:30:20.914347 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 00:30:20.914354 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 00:30:20.914361 kernel: rcu: Hierarchical SRCU implementation. May 13 00:30:20.914369 kernel: rcu: Max phase no-delay instances is 400. May 13 00:30:20.914377 kernel: Platform MSI: ITS@0x8080000 domain created May 13 00:30:20.914384 kernel: PCI/MSI: ITS@0x8080000 domain created May 13 00:30:20.914392 kernel: Remapping and enabling EFI services. May 13 00:30:20.914399 kernel: smp: Bringing up secondary CPUs ... 
May 13 00:30:20.914406 kernel: Detected PIPT I-cache on CPU1 May 13 00:30:20.914414 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 13 00:30:20.914421 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 13 00:30:20.914429 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:30:20.914436 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 00:30:20.914444 kernel: Detected PIPT I-cache on CPU2 May 13 00:30:20.914452 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 13 00:30:20.914459 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 13 00:30:20.914471 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:30:20.914479 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 13 00:30:20.914487 kernel: Detected PIPT I-cache on CPU3 May 13 00:30:20.914495 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 13 00:30:20.914502 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 13 00:30:20.914510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:30:20.914518 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 13 00:30:20.914525 kernel: smp: Brought up 1 node, 4 CPUs May 13 00:30:20.914534 kernel: SMP: Total of 4 processors activated. 
May 13 00:30:20.914542 kernel: CPU features: detected: 32-bit EL0 Support May 13 00:30:20.914550 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 00:30:20.914558 kernel: CPU features: detected: Common not Private translations May 13 00:30:20.914568 kernel: CPU features: detected: CRC32 instructions May 13 00:30:20.914576 kernel: CPU features: detected: Enhanced Virtualization Traps May 13 00:30:20.914585 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 00:30:20.914592 kernel: CPU features: detected: LSE atomic instructions May 13 00:30:20.914600 kernel: CPU features: detected: Privileged Access Never May 13 00:30:20.914608 kernel: CPU features: detected: RAS Extension Support May 13 00:30:20.914615 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 13 00:30:20.914623 kernel: CPU: All CPU(s) started at EL1 May 13 00:30:20.914630 kernel: alternatives: applying system-wide alternatives May 13 00:30:20.914638 kernel: devtmpfs: initialized May 13 00:30:20.914646 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:30:20.914654 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 00:30:20.914663 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:30:20.914670 kernel: SMBIOS 3.0.0 present. 
May 13 00:30:20.914678 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 May 13 00:30:20.914685 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:30:20.914693 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 00:30:20.914700 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 00:30:20.914709 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 00:30:20.914716 kernel: audit: initializing netlink subsys (disabled) May 13 00:30:20.914725 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 May 13 00:30:20.914733 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:30:20.914741 kernel: cpuidle: using governor menu May 13 00:30:20.914748 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 13 00:30:20.914756 kernel: ASID allocator initialised with 32768 entries May 13 00:30:20.914764 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:30:20.914772 kernel: Serial: AMBA PL011 UART driver May 13 00:30:20.914779 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 13 00:30:20.914787 kernel: Modules: 0 pages in range for non-PLT usage May 13 00:30:20.914795 kernel: Modules: 509008 pages in range for PLT usage May 13 00:30:20.914804 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:30:20.914812 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 13 00:30:20.914819 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 13 00:30:20.914827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 13 00:30:20.914841 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:30:20.914850 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 13 00:30:20.914858 kernel: HugeTLB: registered 64.0 KiB page size, 
pre-allocated 0 pages May 13 00:30:20.914866 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 13 00:30:20.914877 kernel: ACPI: Added _OSI(Module Device) May 13 00:30:20.914886 kernel: ACPI: Added _OSI(Processor Device) May 13 00:30:20.914894 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:30:20.914901 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:30:20.914909 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:30:20.914917 kernel: ACPI: Interpreter enabled May 13 00:30:20.914925 kernel: ACPI: Using GIC for interrupt routing May 13 00:30:20.914932 kernel: ACPI: MCFG table detected, 1 entries May 13 00:30:20.914940 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 13 00:30:20.914947 kernel: printk: console [ttyAMA0] enabled May 13 00:30:20.914956 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 00:30:20.915232 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:30:20.915315 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 13 00:30:20.915382 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 13 00:30:20.915449 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 13 00:30:20.915515 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 13 00:30:20.915525 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 13 00:30:20.915536 kernel: PCI host bridge to bus 0000:00 May 13 00:30:20.915608 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 13 00:30:20.915671 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 13 00:30:20.915733 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 13 00:30:20.915793 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 
00:30:20.915970 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 13 00:30:20.916055 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 13 00:30:20.916128 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 13 00:30:20.916195 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 13 00:30:20.916270 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 13 00:30:20.916353 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 13 00:30:20.916422 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 13 00:30:20.916489 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 13 00:30:20.916556 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 13 00:30:20.916617 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 13 00:30:20.916676 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 13 00:30:20.916686 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 13 00:30:20.916694 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 13 00:30:20.916702 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 13 00:30:20.916709 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 13 00:30:20.916717 kernel: iommu: Default domain type: Translated May 13 00:30:20.916725 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 00:30:20.916735 kernel: efivars: Registered efivars operations May 13 00:30:20.916742 kernel: vgaarb: loaded May 13 00:30:20.916750 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 00:30:20.916758 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:30:20.916766 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:30:20.916773 kernel: pnp: PnP ACPI init May 13 00:30:20.916876 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 13 
00:30:20.916890 kernel: pnp: PnP ACPI: found 1 devices May 13 00:30:20.916901 kernel: NET: Registered PF_INET protocol family May 13 00:30:20.916909 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 00:30:20.916917 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 00:30:20.916926 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:30:20.916934 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:30:20.916942 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 00:30:20.916953 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 00:30:20.916962 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:30:20.916970 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:30:20.916980 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:30:20.916988 kernel: PCI: CLS 0 bytes, default 64 May 13 00:30:20.916995 kernel: kvm [1]: HYP mode not available May 13 00:30:20.917005 kernel: Initialise system trusted keyrings May 13 00:30:20.917013 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 00:30:20.917021 kernel: Key type asymmetric registered May 13 00:30:20.917031 kernel: Asymmetric key parser 'x509' registered May 13 00:30:20.917042 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 00:30:20.917050 kernel: io scheduler mq-deadline registered May 13 00:30:20.917059 kernel: io scheduler kyber registered May 13 00:30:20.917067 kernel: io scheduler bfq registered May 13 00:30:20.917075 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 13 00:30:20.917083 kernel: ACPI: button: Power Button [PWRB] May 13 00:30:20.917091 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 13 00:30:20.917187 kernel: virtio-pci 
0000:00:01.0: enabling device (0005 -> 0007) May 13 00:30:20.917198 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:30:20.917206 kernel: thunder_xcv, ver 1.0 May 13 00:30:20.917213 kernel: thunder_bgx, ver 1.0 May 13 00:30:20.917223 kernel: nicpf, ver 1.0 May 13 00:30:20.917230 kernel: nicvf, ver 1.0 May 13 00:30:20.917317 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 00:30:20.917383 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:30:20 UTC (1747096220) May 13 00:30:20.917393 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 00:30:20.917401 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 13 00:30:20.917409 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 00:30:20.917417 kernel: watchdog: Hard watchdog permanently disabled May 13 00:30:20.917427 kernel: NET: Registered PF_INET6 protocol family May 13 00:30:20.917434 kernel: Segment Routing with IPv6 May 13 00:30:20.917442 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:30:20.917450 kernel: NET: Registered PF_PACKET protocol family May 13 00:30:20.917457 kernel: Key type dns_resolver registered May 13 00:30:20.917465 kernel: registered taskstats version 1 May 13 00:30:20.917473 kernel: Loading compiled-in X.509 certificates May 13 00:30:20.917481 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce22d51a4ec909274ada9cb7da7d7cb78db539c6' May 13 00:30:20.917488 kernel: Key type .fscrypt registered May 13 00:30:20.917497 kernel: Key type fscrypt-provisioning registered May 13 00:30:20.917505 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 00:30:20.917513 kernel: ima: Allocated hash algorithm: sha1 May 13 00:30:20.917520 kernel: ima: No architecture policies found May 13 00:30:20.917528 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 00:30:20.917536 kernel: clk: Disabling unused clocks May 13 00:30:20.917544 kernel: Freeing unused kernel memory: 39424K May 13 00:30:20.917552 kernel: Run /init as init process May 13 00:30:20.917559 kernel: with arguments: May 13 00:30:20.917569 kernel: /init May 13 00:30:20.917576 kernel: with environment: May 13 00:30:20.917583 kernel: HOME=/ May 13 00:30:20.917591 kernel: TERM=linux May 13 00:30:20.917599 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:30:20.917608 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:30:20.917619 systemd[1]: Detected virtualization kvm. May 13 00:30:20.917628 systemd[1]: Detected architecture arm64. May 13 00:30:20.917637 systemd[1]: Running in initrd. May 13 00:30:20.917645 systemd[1]: No hostname configured, using default hostname. May 13 00:30:20.917653 systemd[1]: Hostname set to . May 13 00:30:20.917661 systemd[1]: Initializing machine ID from VM UUID. May 13 00:30:20.917670 systemd[1]: Queued start job for default target initrd.target. May 13 00:30:20.917678 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:30:20.917686 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:30:20.917697 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
May 13 00:30:20.917705 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:30:20.917714 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 00:30:20.917722 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 00:30:20.917732 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 00:30:20.917741 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 00:30:20.917750 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:30:20.917759 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:30:20.917768 systemd[1]: Reached target paths.target - Path Units. May 13 00:30:20.917776 systemd[1]: Reached target slices.target - Slice Units. May 13 00:30:20.917784 systemd[1]: Reached target swap.target - Swaps. May 13 00:30:20.917792 systemd[1]: Reached target timers.target - Timer Units. May 13 00:30:20.917801 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:30:20.917809 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:30:20.917817 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 00:30:20.917826 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 13 00:30:20.917846 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:30:20.917855 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:30:20.917863 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:30:20.917877 systemd[1]: Reached target sockets.target - Socket Units. 
May 13 00:30:20.917886 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 00:30:20.917894 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:30:20.917903 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 00:30:20.917911 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:30:20.917921 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:30:20.917930 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:30:20.917938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:30:20.917946 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 00:30:20.917954 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:30:20.917963 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:30:20.917973 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:30:20.917981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:30:20.918008 systemd-journald[238]: Collecting audit messages is disabled. May 13 00:30:20.918030 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:30:20.918039 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:30:20.918047 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:30:20.918056 systemd-journald[238]: Journal started May 13 00:30:20.918075 systemd-journald[238]: Runtime Journal (/run/log/journal/3754086f6c16437897f4f3b350203923) is 5.9M, max 47.3M, 41.4M free. 
May 13 00:30:20.904416 systemd-modules-load[239]: Inserted module 'overlay' May 13 00:30:20.921704 systemd-modules-load[239]: Inserted module 'br_netfilter' May 13 00:30:20.923050 kernel: Bridge firewalling registered May 13 00:30:20.923068 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:30:20.924015 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:30:20.931954 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:30:20.933404 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:30:20.937318 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:30:20.938453 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:30:20.939531 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:30:20.942359 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 00:30:20.945170 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:30:20.950332 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:30:20.952858 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:30:20.956831 dracut-cmdline[272]: dracut-dracut-053 May 13 00:30:20.960064 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0 May 13 00:30:20.980263 systemd-resolved[280]: Positive Trust Anchors: May 13 00:30:20.980280 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:30:20.980312 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:30:20.984953 systemd-resolved[280]: Defaulting to hostname 'linux'. May 13 00:30:20.985959 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:30:20.988795 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:30:21.026858 kernel: SCSI subsystem initialized May 13 00:30:21.030856 kernel: Loading iSCSI transport class v2.0-870. May 13 00:30:21.037858 kernel: iscsi: registered transport (tcp) May 13 00:30:21.050852 kernel: iscsi: registered transport (qla4xxx) May 13 00:30:21.050875 kernel: QLogic iSCSI HBA Driver May 13 00:30:21.091630 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 00:30:21.103062 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 00:30:21.118474 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 13 00:30:21.118563 kernel: device-mapper: uevent: version 1.0.3 May 13 00:30:21.118606 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 00:30:21.166869 kernel: raid6: neonx8 gen() 15786 MB/s May 13 00:30:21.183861 kernel: raid6: neonx4 gen() 15657 MB/s May 13 00:30:21.200862 kernel: raid6: neonx2 gen() 13234 MB/s May 13 00:30:21.217862 kernel: raid6: neonx1 gen() 10486 MB/s May 13 00:30:21.234861 kernel: raid6: int64x8 gen() 6953 MB/s May 13 00:30:21.251861 kernel: raid6: int64x4 gen() 7352 MB/s May 13 00:30:21.268852 kernel: raid6: int64x2 gen() 6128 MB/s May 13 00:30:21.285860 kernel: raid6: int64x1 gen() 5055 MB/s May 13 00:30:21.285901 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s May 13 00:30:21.302864 kernel: raid6: .... xor() 11928 MB/s, rmw enabled May 13 00:30:21.302880 kernel: raid6: using neon recovery algorithm May 13 00:30:21.307854 kernel: xor: measuring software checksum speed May 13 00:30:21.307876 kernel: 8regs : 19797 MB/sec May 13 00:30:21.309247 kernel: 32regs : 17850 MB/sec May 13 00:30:21.309260 kernel: arm64_neon : 27052 MB/sec May 13 00:30:21.309270 kernel: xor: using function: arm64_neon (27052 MB/sec) May 13 00:30:21.358866 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 00:30:21.369679 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 00:30:21.378070 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:30:21.391171 systemd-udevd[458]: Using default interface naming scheme 'v255'. May 13 00:30:21.394364 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:30:21.401000 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 00:30:21.412216 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation May 13 00:30:21.437772 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 13 00:30:21.449026 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:30:21.489822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:30:21.496227 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:30:21.508560 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:30:21.510067 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:30:21.512921 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:30:21.513731 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:30:21.529000 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:30:21.538957 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 00:30:21.539166 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:30:21.541076 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:30:21.541761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:30:21.548646 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:30:21.549754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:30:21.554399 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:30:21.554420 kernel: GPT:9289727 != 19775487
May 13 00:30:21.554431 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:30:21.554446 kernel: GPT:9289727 != 19775487
May 13 00:30:21.549903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:30:21.557696 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:30:21.557713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:30:21.552616 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:30:21.568884 kernel: BTRFS: device fsid ffc5eb33-beca-4ca0-9735-b9a50e66f21e devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (515)
May 13 00:30:21.568799 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:30:21.572860 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (506)
May 13 00:30:21.572876 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:30:21.578050 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:30:21.589465 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:30:21.597750 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:30:21.602409 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:30:21.603301 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:30:21.609176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:30:21.622054 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:30:21.627032 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:30:21.631708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:30:21.631907 disk-uuid[547]: Primary Header is updated.
May 13 00:30:21.631907 disk-uuid[547]: Secondary Entries is updated.
May 13 00:30:21.631907 disk-uuid[547]: Secondary Header is updated.
May 13 00:30:21.654133 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:30:22.649053 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:30:22.649669 disk-uuid[548]: The operation has completed successfully.
May 13 00:30:22.666172 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:30:22.666264 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:30:22.690002 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:30:22.692878 sh[566]: Success
May 13 00:30:22.703887 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:30:22.732006 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:30:22.742106 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:30:22.743780 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:30:22.753930 kernel: BTRFS info (device dm-0): first mount of filesystem ffc5eb33-beca-4ca0-9735-b9a50e66f21e
May 13 00:30:22.753965 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 00:30:22.753977 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:30:22.755225 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:30:22.755238 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:30:22.759565 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:30:22.760568 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:30:22.766991 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:30:22.768458 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:30:22.775953 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:30:22.775990 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:30:22.776006 kernel: BTRFS info (device vda6): using free space tree
May 13 00:30:22.777908 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:30:22.784803 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:30:22.786619 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:30:22.791241 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:30:22.798977 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:30:22.858401 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:30:22.870013 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:30:22.890778 ignition[660]: Ignition 2.19.0
May 13 00:30:22.890788 ignition[660]: Stage: fetch-offline
May 13 00:30:22.891534 systemd-networkd[761]: lo: Link UP
May 13 00:30:22.890819 ignition[660]: no configs at "/usr/lib/ignition/base.d"
May 13 00:30:22.891537 systemd-networkd[761]: lo: Gained carrier
May 13 00:30:22.890828 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:30:22.892190 systemd-networkd[761]: Enumeration completed
May 13 00:30:22.891056 ignition[660]: parsed url from cmdline: ""
May 13 00:30:22.892574 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:30:22.891059 ignition[660]: no config URL provided
May 13 00:30:22.892577 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:30:22.891064 ignition[660]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:30:22.893342 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:30:22.891071 ignition[660]: no config at "/usr/lib/ignition/user.ign"
May 13 00:30:22.893364 systemd-networkd[761]: eth0: Link UP
May 13 00:30:22.891092 ignition[660]: op(1): [started] loading QEMU firmware config module
May 13 00:30:22.893367 systemd-networkd[761]: eth0: Gained carrier
May 13 00:30:22.891097 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:30:22.893374 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:30:22.899610 ignition[660]: op(1): [finished] loading QEMU firmware config module
May 13 00:30:22.894523 systemd[1]: Reached target network.target - Network.
May 13 00:30:22.917884 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:30:22.927084 ignition[660]: parsing config with SHA512: 7c67c0bfb801c35b29e8e8034a29cb1f3bdd017dadfc9e5b80adb1f884fb173a82ddcd14da2a1d62e8e37fe6449b6a91683887b9dd4b942c5ff3219c3c427c1f
May 13 00:30:22.931823 unknown[660]: fetched base config from "system"
May 13 00:30:22.931852 unknown[660]: fetched user config from "qemu"
May 13 00:30:22.932402 ignition[660]: fetch-offline: fetch-offline passed
May 13 00:30:22.932481 ignition[660]: Ignition finished successfully
May 13 00:30:22.933891 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:30:22.935579 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:30:22.944962 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:30:22.956098 ignition[769]: Ignition 2.19.0
May 13 00:30:22.956112 ignition[769]: Stage: kargs
May 13 00:30:22.956281 ignition[769]: no configs at "/usr/lib/ignition/base.d"
May 13 00:30:22.956291 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:30:22.957142 ignition[769]: kargs: kargs passed
May 13 00:30:22.957185 ignition[769]: Ignition finished successfully
May 13 00:30:22.959128 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:30:22.967041 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:30:22.977063 ignition[777]: Ignition 2.19.0
May 13 00:30:22.977072 ignition[777]: Stage: disks
May 13 00:30:22.977224 ignition[777]: no configs at "/usr/lib/ignition/base.d"
May 13 00:30:22.977234 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:30:22.979678 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:30:22.978061 ignition[777]: disks: disks passed
May 13 00:30:22.981369 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:30:22.978103 ignition[777]: Ignition finished successfully
May 13 00:30:22.982869 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:30:22.984415 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:30:22.985903 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:30:22.987280 systemd[1]: Reached target basic.target - Basic System.
May 13 00:30:23.000963 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:30:23.011721 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:30:23.014825 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:30:23.017385 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:30:23.059865 kernel: EXT4-fs (vda9): mounted filesystem 9903c37e-4e5a-41d4-80e5-5c3428d04b7e r/w with ordered data mode. Quota mode: none.
May 13 00:30:23.060300 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:30:23.061484 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:30:23.070934 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:30:23.072519 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:30:23.073610 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:30:23.073648 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:30:23.083433 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (795)
May 13 00:30:23.083462 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:30:23.083474 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:30:23.083484 kernel: BTRFS info (device vda6): using free space tree
May 13 00:30:23.083494 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:30:23.073670 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:30:23.081167 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:30:23.084974 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:30:23.086880 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:30:23.130278 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:30:23.133724 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
May 13 00:30:23.136879 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:30:23.139904 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:30:23.217602 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:30:23.224954 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:30:23.226506 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:30:23.232852 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:30:23.247858 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:30:23.249928 ignition[909]: INFO : Ignition 2.19.0
May 13 00:30:23.249928 ignition[909]: INFO : Stage: mount
May 13 00:30:23.249928 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:30:23.249928 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:30:23.255070 ignition[909]: INFO : mount: mount passed
May 13 00:30:23.255070 ignition[909]: INFO : Ignition finished successfully
May 13 00:30:23.252895 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:30:23.263939 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:30:23.753186 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:30:23.762058 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:30:23.766855 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (922)
May 13 00:30:23.769058 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:30:23.769107 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:30:23.769119 kernel: BTRFS info (device vda6): using free space tree
May 13 00:30:23.770869 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:30:23.771930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:30:23.787347 ignition[939]: INFO : Ignition 2.19.0
May 13 00:30:23.787347 ignition[939]: INFO : Stage: files
May 13 00:30:23.788542 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:30:23.788542 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:30:23.788542 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:30:23.792074 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:30:23.792074 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:30:23.792074 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:30:23.792074 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:30:23.792074 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:30:23.791618 unknown[939]: wrote ssh authorized keys file for user: core
May 13 00:30:23.799447 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:30:23.799447 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 00:30:23.905953 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 00:30:24.049993 systemd-networkd[761]: eth0: Gained IPv6LL
May 13 00:30:24.077393 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:30:24.079256 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:30:24.079256 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:30:24.079256 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:30:24.079256 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:30:24.085959 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:30:24.085959 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:30:24.085959 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:30:24.085959 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:30:24.085959 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:30:24.085959 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:30:24.085959 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 00:30:24.085959 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 00:30:24.085959 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 00:30:24.085959 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 13 00:30:24.457314 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 00:30:24.837932 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 00:30:24.837932 ignition[939]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 00:30:24.841505 ignition[939]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:30:24.841505 ignition[939]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:30:24.841505 ignition[939]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 00:30:24.841505 ignition[939]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 00:30:24.841505 ignition[939]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:30:24.841505 ignition[939]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:30:24.841505 ignition[939]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 00:30:24.841505 ignition[939]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:30:24.862276 ignition[939]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:30:24.866223 ignition[939]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:30:24.867646 ignition[939]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:30:24.867646 ignition[939]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:30:24.867646 ignition[939]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:30:24.867646 ignition[939]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:30:24.867646 ignition[939]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:30:24.867646 ignition[939]: INFO : files: files passed
May 13 00:30:24.867646 ignition[939]: INFO : Ignition finished successfully
May 13 00:30:24.868362 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 00:30:24.883963 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 00:30:24.886433 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 00:30:24.890596 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:30:24.890686 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:30:24.893616 initrd-setup-root-after-ignition[966]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:30:24.895769 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:30:24.895769 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:30:24.898054 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:30:24.900012 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:30:24.901538 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:30:24.913180 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:30:24.930240 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:30:24.930337 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:30:24.932419 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:30:24.934220 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:30:24.935929 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:30:24.936649 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:30:24.952008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:30:24.965964 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:30:24.973509 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:30:24.974691 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:30:24.976681 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:30:24.978394 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:30:24.978501 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:30:24.980897 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:30:24.982899 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:30:24.984491 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:30:24.986115 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:30:24.987915 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:30:24.989844 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:30:24.991644 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:30:24.993516 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:30:24.995411 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:30:24.997055 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:30:24.998511 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:30:24.998621 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:30:25.000800 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:30:25.002712 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:30:25.004563 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 00:30:25.007933 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:30:25.009188 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:30:25.009299 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 00:30:25.011982 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:30:25.012097 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:30:25.014054 systemd[1]: Stopped target paths.target - Path Units.
May 13 00:30:25.015576 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:30:25.019944 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:30:25.021245 systemd[1]: Stopped target slices.target - Slice Units.
May 13 00:30:25.023283 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 00:30:25.024739 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:30:25.024824 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:30:25.026345 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:30:25.026426 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:30:25.027887 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:30:25.027995 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:30:25.029687 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:30:25.029786 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 00:30:25.040066 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 00:30:25.040939 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:30:25.041064 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:30:25.044087 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 00:30:25.045662 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:30:25.045798 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:30:25.047899 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:30:25.048050 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:30:25.053393 ignition[994]: INFO : Ignition 2.19.0
May 13 00:30:25.053393 ignition[994]: INFO : Stage: umount
May 13 00:30:25.053393 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:30:25.053393 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:30:25.059227 ignition[994]: INFO : umount: umount passed
May 13 00:30:25.059227 ignition[994]: INFO : Ignition finished successfully
May 13 00:30:25.055594 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:30:25.056878 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 00:30:25.058971 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:30:25.059413 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:30:25.059525 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 00:30:25.061694 systemd[1]: Stopped target network.target - Network.
May 13 00:30:25.062760 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:30:25.062849 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 00:30:25.064365 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:30:25.064409 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 00:30:25.066055 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:30:25.066095 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 00:30:25.067482 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 00:30:25.067520 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 00:30:25.069094 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 00:30:25.070610 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 00:30:25.076013 systemd-networkd[761]: eth0: DHCPv6 lease lost
May 13 00:30:25.077945 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:30:25.078058 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 00:30:25.079392 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:30:25.079491 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 00:30:25.081664 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:30:25.081702 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:30:25.088952 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 00:30:25.090678 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:30:25.090736 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:30:25.092999 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:30:25.093048 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 00:30:25.094880 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:30:25.094928 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 00:30:25.096733 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 00:30:25.096779 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:30:25.099146 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:30:25.109667 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:30:25.109775 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 00:30:25.111714 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:30:25.111820 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:30:25.113584 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:30:25.113655 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:30:25.115529 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:30:25.115595 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:30:25.116875 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:30:25.116915 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:30:25.120325 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:30:25.120373 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 00:30:25.122902 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:30:25.122945 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:30:25.125547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:30:25.125593 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:30:25.128431 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:30:25.128476 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 00:30:25.143052 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:30:25.144068 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:30:25.144130 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:30:25.146199 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:30:25.146244 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:30:25.148406 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:30:25.148485 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
May 13 00:30:25.150476 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:30:25.152556 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:30:25.161536 systemd[1]: Switching root. May 13 00:30:25.187618 systemd-journald[238]: Journal stopped May 13 00:30:25.866644 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). May 13 00:30:25.866702 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:30:25.866715 kernel: SELinux: policy capability open_perms=1 May 13 00:30:25.866724 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:30:25.866739 kernel: SELinux: policy capability always_check_network=0 May 13 00:30:25.866752 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:30:25.866761 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:30:25.866770 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:30:25.866780 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:30:25.866793 kernel: audit: type=1403 audit(1747096225.326:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:30:25.866803 systemd[1]: Successfully loaded SELinux policy in 31.779ms. May 13 00:30:25.866819 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.206ms. May 13 00:30:25.866831 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:30:25.866892 systemd[1]: Detected virtualization kvm. May 13 00:30:25.866908 systemd[1]: Detected architecture arm64. May 13 00:30:25.866919 systemd[1]: Detected first boot. May 13 00:30:25.866930 systemd[1]: Initializing machine ID from VM UUID. 
May 13 00:30:25.866941 zram_generator::config[1040]: No configuration found. May 13 00:30:25.866953 systemd[1]: Populated /etc with preset unit settings. May 13 00:30:25.866963 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:30:25.866976 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 00:30:25.866987 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:30:25.867000 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:30:25.867010 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:30:25.867020 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:30:25.867031 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:30:25.867041 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 00:30:25.867052 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:30:25.867063 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:30:25.867084 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:30:25.867097 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:30:25.867108 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:30:25.867119 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:30:25.867129 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:30:25.867140 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
May 13 00:30:25.867150 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:30:25.867161 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 00:30:25.867171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:30:25.867181 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 00:30:25.867193 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 00:30:25.867204 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 00:30:25.867216 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:30:25.867226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:30:25.867237 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:30:25.867247 systemd[1]: Reached target slices.target - Slice Units. May 13 00:30:25.867258 systemd[1]: Reached target swap.target - Swaps. May 13 00:30:25.867268 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:30:25.867280 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:30:25.867291 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:30:25.867302 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:30:25.867312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:30:25.867323 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 00:30:25.867333 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 00:30:25.867343 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:30:25.867354 systemd[1]: Mounting media.mount - External Media Directory... 
May 13 00:30:25.867364 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:30:25.867376 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:30:25.867386 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 00:30:25.867397 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:30:25.867412 systemd[1]: Reached target machines.target - Containers. May 13 00:30:25.867423 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:30:25.867433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:30:25.867445 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:30:25.867456 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:30:25.867468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:30:25.867478 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:30:25.867489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:30:25.867499 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:30:25.867510 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:30:25.867520 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:30:25.867531 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:30:25.867541 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 00:30:25.867551 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
May 13 00:30:25.867563 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:30:25.867573 kernel: fuse: init (API version 7.39) May 13 00:30:25.867583 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:30:25.867594 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:30:25.867604 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:30:25.867614 kernel: loop: module loaded May 13 00:30:25.867624 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:30:25.867635 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:30:25.867646 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:30:25.867658 systemd[1]: Stopped verity-setup.service. May 13 00:30:25.867670 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 00:30:25.867680 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:30:25.867690 systemd[1]: Mounted media.mount - External Media Directory. May 13 00:30:25.867701 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:30:25.867713 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 00:30:25.867723 kernel: ACPI: bus type drm_connector registered May 13 00:30:25.867758 systemd-journald[1111]: Collecting audit messages is disabled. May 13 00:30:25.867779 systemd-journald[1111]: Journal started May 13 00:30:25.867801 systemd-journald[1111]: Runtime Journal (/run/log/journal/3754086f6c16437897f4f3b350203923) is 5.9M, max 47.3M, 41.4M free. May 13 00:30:25.679615 systemd[1]: Queued start job for default target multi-user.target. May 13 00:30:25.700403 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 00:30:25.700738 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 13 00:30:25.869874 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:30:25.870285 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:30:25.871952 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:30:25.873212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:30:25.874454 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:30:25.874587 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:30:25.875728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:30:25.876901 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:30:25.878190 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:30:25.878323 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:30:25.879598 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:30:25.879730 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:30:25.881147 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:30:25.881274 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:30:25.882527 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:30:25.882653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:30:25.884004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:30:25.885457 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:30:25.886915 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 00:30:25.897955 systemd[1]: Reached target network-pre.target - Preparation for Network. 
May 13 00:30:25.911963 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:30:25.914021 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:30:25.914823 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:30:25.914886 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:30:25.916499 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 13 00:30:25.918401 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 00:30:25.920182 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:30:25.921078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:30:25.922685 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:30:25.924317 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:30:25.925547 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:30:25.929003 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:30:25.930070 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:30:25.932165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:30:25.933429 systemd-journald[1111]: Time spent on flushing to /var/log/journal/3754086f6c16437897f4f3b350203923 is 16.570ms for 853 entries. May 13 00:30:25.933429 systemd-journald[1111]: System Journal (/var/log/journal/3754086f6c16437897f4f3b350203923) is 8.0M, max 195.6M, 187.6M free. 
May 13 00:30:25.966626 systemd-journald[1111]: Received client request to flush runtime journal. May 13 00:30:25.966698 kernel: loop0: detected capacity change from 0 to 189592 May 13 00:30:25.935545 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:30:25.940183 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:30:25.942797 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:30:25.944264 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 00:30:25.945248 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 00:30:25.946327 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:30:25.954309 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:30:25.958066 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:30:25.969524 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 00:30:25.974675 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:30:25.978883 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:30:25.979210 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:30:25.983100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:30:25.989687 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:30:25.990254 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:30:25.992556 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
May 13 00:30:26.000883 kernel: loop1: detected capacity change from 0 to 114432 May 13 00:30:26.005062 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:30:26.011352 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 00:30:26.025238 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. May 13 00:30:26.025515 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. May 13 00:30:26.031946 kernel: loop2: detected capacity change from 0 to 114328 May 13 00:30:26.032245 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:30:26.073863 kernel: loop3: detected capacity change from 0 to 189592 May 13 00:30:26.078855 kernel: loop4: detected capacity change from 0 to 114432 May 13 00:30:26.082856 kernel: loop5: detected capacity change from 0 to 114328 May 13 00:30:26.085764 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 00:30:26.086147 (sd-merge)[1176]: Merged extensions into '/usr'. May 13 00:30:26.089370 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:30:26.089383 systemd[1]: Reloading... May 13 00:30:26.141920 zram_generator::config[1199]: No configuration found. May 13 00:30:26.202234 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:30:26.230944 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:30:26.266524 systemd[1]: Reloading finished in 176 ms. May 13 00:30:26.305990 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 13 00:30:26.307462 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:30:26.319040 systemd[1]: Starting ensure-sysext.service... May 13 00:30:26.320604 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:30:26.335447 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)... May 13 00:30:26.335462 systemd[1]: Reloading... May 13 00:30:26.340043 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:30:26.340297 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:30:26.340938 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:30:26.341155 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. May 13 00:30:26.341208 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. May 13 00:30:26.343535 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:30:26.343546 systemd-tmpfiles[1237]: Skipping /boot May 13 00:30:26.350291 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:30:26.350303 systemd-tmpfiles[1237]: Skipping /boot May 13 00:30:26.378864 zram_generator::config[1262]: No configuration found. May 13 00:30:26.461557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:30:26.497870 systemd[1]: Reloading finished in 162 ms. May 13 00:30:26.515896 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:30:26.528388 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 13 00:30:26.535416 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:30:26.537734 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:30:26.539670 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:30:26.543195 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:30:26.548408 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:30:26.554216 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:30:26.557794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:30:26.576470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:30:26.582132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:30:26.582792 systemd-udevd[1311]: Using default interface naming scheme 'v255'. May 13 00:30:26.585652 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:30:26.587149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:30:26.589010 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:30:26.591279 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:30:26.593466 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:30:26.603009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:30:26.603136 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:30:26.604621 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 13 00:30:26.606395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:30:26.606510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:30:26.608143 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:30:26.608251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:30:26.620362 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:30:26.630327 systemd[1]: Finished ensure-sysext.service. May 13 00:30:26.631398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:30:26.636857 augenrules[1350]: No rules May 13 00:30:26.638347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:30:26.646869 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1334) May 13 00:30:26.649127 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:30:26.651994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:30:26.655298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:30:26.656326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:30:26.663008 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:30:26.670312 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:30:26.674150 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:30:26.676787 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 13 00:30:26.677101 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:30:26.680956 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:30:26.682163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:30:26.682291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:30:26.683452 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:30:26.683645 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:30:26.685291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:30:26.685796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:30:26.689366 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:30:26.689501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:30:26.690703 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:30:26.707635 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 00:30:26.711821 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:30:26.717378 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 00:30:26.718303 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:30:26.718364 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:30:26.739201 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:30:26.752303 systemd-resolved[1304]: Positive Trust Anchors: May 13 00:30:26.756881 systemd-resolved[1304]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:30:26.756917 systemd-resolved[1304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:30:26.762049 systemd-networkd[1366]: lo: Link UP May 13 00:30:26.762058 systemd-networkd[1366]: lo: Gained carrier May 13 00:30:26.767551 systemd-networkd[1366]: Enumeration completed May 13 00:30:26.767658 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:30:26.768288 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:30:26.768349 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:30:26.768412 systemd-resolved[1304]: Defaulting to hostname 'linux'. May 13 00:30:26.769071 systemd-networkd[1366]: eth0: Link UP May 13 00:30:26.769152 systemd-networkd[1366]: eth0: Gained carrier May 13 00:30:26.769208 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:30:26.780015 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:30:26.780967 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:30:26.781807 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
May 13 00:30:26.787222 systemd[1]: Reached target network.target - Network. May 13 00:30:26.787910 systemd-networkd[1366]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:30:26.788477 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:30:26.788949 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. May 13 00:30:26.789415 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:30:26.790495 systemd-timesyncd[1367]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:30:26.790543 systemd-timesyncd[1367]: Initial clock synchronization to Tue 2025-05-13 00:30:27.140132 UTC. May 13 00:30:26.791352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:30:26.795691 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 00:30:26.798013 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:30:26.815120 lvm[1391]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:30:26.828920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:30:26.846279 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:30:26.847448 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:30:26.848289 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:30:26.849137 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:30:26.850021 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:30:26.851062 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
May 13 00:30:26.851915 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 00:30:26.852776 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:30:26.853699 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:30:26.853730 systemd[1]: Reached target paths.target - Path Units. May 13 00:30:26.854400 systemd[1]: Reached target timers.target - Timer Units. May 13 00:30:26.855802 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:30:26.858019 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:30:26.869779 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:30:26.871711 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 00:30:26.873029 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:30:26.873913 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:30:26.874672 systemd[1]: Reached target basic.target - Basic System. May 13 00:30:26.875503 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:30:26.875531 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:30:26.876323 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:30:26.877892 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:30:26.880973 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:30:26.881973 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:30:26.891044 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 13 00:30:26.892013 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:30:26.894816 jq[1403]: false May 13 00:30:26.896030 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:30:26.897986 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 00:30:26.899816 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:30:26.902660 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:30:26.907026 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:30:26.910498 extend-filesystems[1404]: Found loop3 May 13 00:30:26.911486 extend-filesystems[1404]: Found loop4 May 13 00:30:26.911486 extend-filesystems[1404]: Found loop5 May 13 00:30:26.911486 extend-filesystems[1404]: Found vda May 13 00:30:26.911486 extend-filesystems[1404]: Found vda1 May 13 00:30:26.911486 extend-filesystems[1404]: Found vda2 May 13 00:30:26.911486 extend-filesystems[1404]: Found vda3 May 13 00:30:26.911486 extend-filesystems[1404]: Found usr May 13 00:30:26.911486 extend-filesystems[1404]: Found vda4 May 13 00:30:26.911486 extend-filesystems[1404]: Found vda6 May 13 00:30:26.911486 extend-filesystems[1404]: Found vda7 May 13 00:30:26.911486 extend-filesystems[1404]: Found vda9 May 13 00:30:26.911486 extend-filesystems[1404]: Checking size of /dev/vda9 May 13 00:30:26.911025 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:30:26.911450 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:30:26.921054 systemd[1]: Starting update-engine.service - Update Engine... 
May 13 00:30:26.926054 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:30:26.927754 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:30:26.929089 dbus-daemon[1402]: [system] SELinux support is enabled May 13 00:30:26.929425 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:30:26.935657 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:30:26.935825 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:30:26.936429 jq[1422]: true May 13 00:30:26.936108 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:30:26.936238 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:30:26.939725 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:30:26.940128 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:30:26.944317 extend-filesystems[1404]: Resized partition /dev/vda9 May 13 00:30:26.956671 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:30:26.961973 extend-filesystems[1430]: resize2fs 1.47.1 (20-May-2024) May 13 00:30:26.956710 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:30:26.957785 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:30:26.957802 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 13 00:30:26.965724 jq[1427]: true May 13 00:30:26.967207 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1331) May 13 00:30:26.967893 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:30:26.971315 update_engine[1418]: I20250513 00:30:26.970402 1418 main.cc:92] Flatcar Update Engine starting May 13 00:30:26.971520 tar[1426]: linux-arm64/helm May 13 00:30:26.974378 systemd[1]: Started update-engine.service - Update Engine. May 13 00:30:26.974495 update_engine[1418]: I20250513 00:30:26.974440 1418 update_check_scheduler.cc:74] Next update check in 8m5s May 13 00:30:26.978031 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:30:26.983093 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:30:27.028654 systemd-logind[1415]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:30:27.031414 systemd-logind[1415]: New seat seat0. May 13 00:30:27.035117 systemd[1]: Started systemd-logind.service - User Login Management. May 13 00:30:27.054898 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:30:27.067005 extend-filesystems[1430]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:30:27.067005 extend-filesystems[1430]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:30:27.067005 extend-filesystems[1430]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:30:27.072962 extend-filesystems[1404]: Resized filesystem in /dev/vda9 May 13 00:30:27.074205 bash[1457]: Updated "/home/core/.ssh/authorized_keys" May 13 00:30:27.069956 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:30:27.070816 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
May 13 00:30:27.074491 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:30:27.078170 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 00:30:27.099257 locksmithd[1443]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:30:27.227274 containerd[1439]: time="2025-05-13T00:30:27.227133084Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 00:30:27.254153 containerd[1439]: time="2025-05-13T00:30:27.254041799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:30:27.255470 containerd[1439]: time="2025-05-13T00:30:27.255430902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:30:27.255470 containerd[1439]: time="2025-05-13T00:30:27.255464718Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:30:27.255566 containerd[1439]: time="2025-05-13T00:30:27.255480917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:30:27.255680 containerd[1439]: time="2025-05-13T00:30:27.255649122Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:30:27.255680 containerd[1439]: time="2025-05-13T00:30:27.255674004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 00:30:27.255753 containerd[1439]: time="2025-05-13T00:30:27.255737253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:30:27.255776 containerd[1439]: time="2025-05-13T00:30:27.255753828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:30:27.255974 containerd[1439]: time="2025-05-13T00:30:27.255942573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:30:27.255974 containerd[1439]: time="2025-05-13T00:30:27.255964074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:30:27.256020 containerd[1439]: time="2025-05-13T00:30:27.255978352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:30:27.256020 containerd[1439]: time="2025-05-13T00:30:27.255989499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:30:27.256081 containerd[1439]: time="2025-05-13T00:30:27.256063978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:30:27.256302 containerd[1439]: time="2025-05-13T00:30:27.256268881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:30:27.256403 containerd[1439]: time="2025-05-13T00:30:27.256384316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:30:27.256432 containerd[1439]: time="2025-05-13T00:30:27.256402184Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:30:27.256490 containerd[1439]: time="2025-05-13T00:30:27.256476371Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:30:27.256533 containerd[1439]: time="2025-05-13T00:30:27.256520834Z" level=info msg="metadata content store policy set" policy=shared May 13 00:30:27.261258 containerd[1439]: time="2025-05-13T00:30:27.261219018Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:30:27.261399 containerd[1439]: time="2025-05-13T00:30:27.261287778Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:30:27.261428 containerd[1439]: time="2025-05-13T00:30:27.261407471Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 00:30:27.261467 containerd[1439]: time="2025-05-13T00:30:27.261428429Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:30:27.261467 containerd[1439]: time="2025-05-13T00:30:27.261445254Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:30:27.261711 containerd[1439]: time="2025-05-13T00:30:27.261679881Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:30:27.262233 containerd[1439]: time="2025-05-13T00:30:27.262200779Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 13 00:30:27.262411 containerd[1439]: time="2025-05-13T00:30:27.262391320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:30:27.262438 containerd[1439]: time="2025-05-13T00:30:27.262414323Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:30:27.262438 containerd[1439]: time="2025-05-13T00:30:27.262428184Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:30:27.262575 containerd[1439]: time="2025-05-13T00:30:27.262545790Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:30:27.262601 containerd[1439]: time="2025-05-13T00:30:27.262588457Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:30:27.262621 containerd[1439]: time="2025-05-13T00:30:27.262605783Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:30:27.262640 containerd[1439]: time="2025-05-13T00:30:27.262621480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:30:27.262659 containerd[1439]: time="2025-05-13T00:30:27.262644901Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:30:27.262678 containerd[1439]: time="2025-05-13T00:30:27.262661851Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:30:27.262701 containerd[1439]: time="2025-05-13T00:30:27.262677966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 13 00:30:27.262701 containerd[1439]: time="2025-05-13T00:30:27.262690825Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:30:27.262737 containerd[1439]: time="2025-05-13T00:30:27.262717920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:30:27.262756 containerd[1439]: time="2025-05-13T00:30:27.262735537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:30:27.262756 containerd[1439]: time="2025-05-13T00:30:27.262749398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:30:27.262800 containerd[1439]: time="2025-05-13T00:30:27.262761964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:30:27.262800 containerd[1439]: time="2025-05-13T00:30:27.262779582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:30:27.262857 containerd[1439]: time="2025-05-13T00:30:27.262802544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:30:27.262857 containerd[1439]: time="2025-05-13T00:30:27.262825589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:30:27.262919 containerd[1439]: time="2025-05-13T00:30:27.262900069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:30:27.262943 containerd[1439]: time="2025-05-13T00:30:27.262931255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:30:27.262962 containerd[1439]: time="2025-05-13T00:30:27.262954968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 May 13 00:30:27.262991 containerd[1439]: time="2025-05-13T00:30:27.262978389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:30:27.263017 containerd[1439]: time="2025-05-13T00:30:27.262997886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:30:27.263038 containerd[1439]: time="2025-05-13T00:30:27.263015420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:30:27.263038 containerd[1439]: time="2025-05-13T00:30:27.263032454Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:30:27.263077 containerd[1439]: time="2025-05-13T00:30:27.263062596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:30:27.263098 containerd[1439]: time="2025-05-13T00:30:27.263077417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:30:27.263098 containerd[1439]: time="2025-05-13T00:30:27.263089441Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:30:27.264150 containerd[1439]: time="2025-05-13T00:30:27.264111698Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:30:27.264187 containerd[1439]: time="2025-05-13T00:30:27.264159876Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:30:27.264187 containerd[1439]: time="2025-05-13T00:30:27.264175365Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 13 00:30:27.264236 containerd[1439]: time="2025-05-13T00:30:27.264190854Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:30:27.264236 containerd[1439]: time="2025-05-13T00:30:27.264202710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:30:27.264236 containerd[1439]: time="2025-05-13T00:30:27.264228761Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 00:30:27.264289 containerd[1439]: time="2025-05-13T00:30:27.264240827Z" level=info msg="NRI interface is disabled by configuration." May 13 00:30:27.264289 containerd[1439]: time="2025-05-13T00:30:27.264253268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:30:27.269436 containerd[1439]: time="2025-05-13T00:30:27.265200378Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:30:27.269436 containerd[1439]: time="2025-05-13T00:30:27.265282497Z" level=info msg="Connect containerd service" May 13 00:30:27.271009 containerd[1439]: time="2025-05-13T00:30:27.270968371Z" level=info msg="using legacy CRI server" May 13 00:30:27.271064 containerd[1439]: time="2025-05-13T00:30:27.271032914Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:30:27.273899 containerd[1439]: 
time="2025-05-13T00:30:27.271188553Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:30:27.273899 containerd[1439]: time="2025-05-13T00:30:27.272351420Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:30:27.273899 containerd[1439]: time="2025-05-13T00:30:27.272425023Z" level=info msg="Start subscribing containerd event" May 13 00:30:27.273899 containerd[1439]: time="2025-05-13T00:30:27.272465477Z" level=info msg="Start recovering state" May 13 00:30:27.273899 containerd[1439]: time="2025-05-13T00:30:27.272537911Z" level=info msg="Start event monitor" May 13 00:30:27.273899 containerd[1439]: time="2025-05-13T00:30:27.272548557Z" level=info msg="Start snapshots syncer" May 13 00:30:27.273899 containerd[1439]: time="2025-05-13T00:30:27.272559829Z" level=info msg="Start cni network conf syncer for default" May 13 00:30:27.273899 containerd[1439]: time="2025-05-13T00:30:27.272850275Z" level=info msg="Start streaming server" May 13 00:30:27.273899 containerd[1439]: time="2025-05-13T00:30:27.273409581Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:30:27.273899 containerd[1439]: time="2025-05-13T00:30:27.273457884Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:30:27.273899 containerd[1439]: time="2025-05-13T00:30:27.273516374Z" level=info msg="containerd successfully booted in 0.047458s" May 13 00:30:27.273615 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:30:27.349766 tar[1426]: linux-arm64/LICENSE May 13 00:30:27.349906 tar[1426]: linux-arm64/README.md May 13 00:30:27.360122 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 13 00:30:27.576742 sshd_keygen[1421]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:30:27.596034 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:30:27.607149 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:30:27.613249 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:30:27.614918 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:30:27.617485 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:30:27.633930 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:30:27.646221 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:30:27.648300 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 00:30:27.649647 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:30:28.146015 systemd-networkd[1366]: eth0: Gained IPv6LL May 13 00:30:28.149082 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:30:28.150677 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:30:28.163100 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:30:28.165562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:30:28.167571 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:30:28.181839 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:30:28.182174 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:30:28.184279 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:30:28.187612 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:30:28.674986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 00:30:28.676550 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 00:30:28.678098 systemd[1]: Startup finished in 553ms (kernel) + 4.626s (initrd) + 3.383s (userspace) = 8.563s. May 13 00:30:28.678920 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:30:29.129208 kubelet[1516]: E0513 00:30:29.129091 1516 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:30:29.131796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:30:29.131969 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:30:33.211569 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:30:33.212693 systemd[1]: Started sshd@0-10.0.0.104:22-10.0.0.1:40582.service - OpenSSH per-connection server daemon (10.0.0.1:40582). May 13 00:30:33.261812 sshd[1529]: Accepted publickey for core from 10.0.0.1 port 40582 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:30:33.263541 sshd[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:33.273703 systemd-logind[1415]: New session 1 of user core. May 13 00:30:33.274739 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:30:33.289196 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:30:33.299073 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:30:33.302259 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 13 00:30:33.308020 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:30:33.384495 systemd[1533]: Queued start job for default target default.target. May 13 00:30:33.396945 systemd[1533]: Created slice app.slice - User Application Slice. May 13 00:30:33.396975 systemd[1533]: Reached target paths.target - Paths. May 13 00:30:33.396987 systemd[1533]: Reached target timers.target - Timers. May 13 00:30:33.398468 systemd[1533]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:30:33.409178 systemd[1533]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:30:33.409291 systemd[1533]: Reached target sockets.target - Sockets. May 13 00:30:33.409304 systemd[1533]: Reached target basic.target - Basic System. May 13 00:30:33.409340 systemd[1533]: Reached target default.target - Main User Target. May 13 00:30:33.409365 systemd[1533]: Startup finished in 95ms. May 13 00:30:33.409561 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:30:33.419081 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:30:33.481497 systemd[1]: Started sshd@1-10.0.0.104:22-10.0.0.1:40588.service - OpenSSH per-connection server daemon (10.0.0.1:40588). May 13 00:30:33.516729 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 40588 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:30:33.518263 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:33.522549 systemd-logind[1415]: New session 2 of user core. May 13 00:30:33.532007 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 00:30:33.585074 sshd[1544]: pam_unix(sshd:session): session closed for user core May 13 00:30:33.594438 systemd[1]: sshd@1-10.0.0.104:22-10.0.0.1:40588.service: Deactivated successfully. May 13 00:30:33.598279 systemd[1]: session-2.scope: Deactivated successfully. 
May 13 00:30:33.600809 systemd-logind[1415]: Session 2 logged out. Waiting for processes to exit. May 13 00:30:33.610136 systemd[1]: Started sshd@2-10.0.0.104:22-10.0.0.1:40600.service - OpenSSH per-connection server daemon (10.0.0.1:40600). May 13 00:30:33.611192 systemd-logind[1415]: Removed session 2. May 13 00:30:33.642545 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 40600 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:30:33.643935 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:33.648029 systemd-logind[1415]: New session 3 of user core. May 13 00:30:33.658030 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:30:33.707341 sshd[1551]: pam_unix(sshd:session): session closed for user core May 13 00:30:33.720372 systemd[1]: sshd@2-10.0.0.104:22-10.0.0.1:40600.service: Deactivated successfully. May 13 00:30:33.721906 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:30:33.723239 systemd-logind[1415]: Session 3 logged out. Waiting for processes to exit. May 13 00:30:33.724599 systemd[1]: Started sshd@3-10.0.0.104:22-10.0.0.1:40610.service - OpenSSH per-connection server daemon (10.0.0.1:40610). May 13 00:30:33.725466 systemd-logind[1415]: Removed session 3. May 13 00:30:33.762440 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 40610 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:30:33.763700 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:33.767790 systemd-logind[1415]: New session 4 of user core. May 13 00:30:33.781068 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:30:33.834391 sshd[1558]: pam_unix(sshd:session): session closed for user core May 13 00:30:33.844459 systemd[1]: sshd@3-10.0.0.104:22-10.0.0.1:40610.service: Deactivated successfully. May 13 00:30:33.846281 systemd[1]: session-4.scope: Deactivated successfully. 
May 13 00:30:33.847725 systemd-logind[1415]: Session 4 logged out. Waiting for processes to exit. May 13 00:30:33.848837 systemd[1]: Started sshd@4-10.0.0.104:22-10.0.0.1:40614.service - OpenSSH per-connection server daemon (10.0.0.1:40614). May 13 00:30:33.849562 systemd-logind[1415]: Removed session 4. May 13 00:30:33.887490 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 40614 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:30:33.888885 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:33.892570 systemd-logind[1415]: New session 5 of user core. May 13 00:30:33.905035 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 00:30:33.965850 sudo[1568]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:30:33.966157 sudo[1568]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:30:34.320112 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 00:30:34.320215 (dockerd)[1586]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 00:30:34.596392 dockerd[1586]: time="2025-05-13T00:30:34.596254770Z" level=info msg="Starting up" May 13 00:30:34.769150 dockerd[1586]: time="2025-05-13T00:30:34.768829182Z" level=info msg="Loading containers: start." May 13 00:30:34.856890 kernel: Initializing XFRM netlink socket May 13 00:30:34.928431 systemd-networkd[1366]: docker0: Link UP May 13 00:30:34.945150 dockerd[1586]: time="2025-05-13T00:30:34.945050384Z" level=info msg="Loading containers: done." 
May 13 00:30:34.964102 dockerd[1586]: time="2025-05-13T00:30:34.964015588Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 00:30:34.964233 dockerd[1586]: time="2025-05-13T00:30:34.964148308Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 13 00:30:34.964292 dockerd[1586]: time="2025-05-13T00:30:34.964262352Z" level=info msg="Daemon has completed initialization"
May 13 00:30:34.991523 dockerd[1586]: time="2025-05-13T00:30:34.991398216Z" level=info msg="API listen on /run/docker.sock"
May 13 00:30:34.991631 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 00:30:35.550537 containerd[1439]: time="2025-05-13T00:30:35.550495924Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 13 00:30:36.270403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673517441.mount: Deactivated successfully.
May 13 00:30:37.725528 containerd[1439]: time="2025-05-13T00:30:37.725476178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:37.726540 containerd[1439]: time="2025-05-13T00:30:37.726474853Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610"
May 13 00:30:37.727274 containerd[1439]: time="2025-05-13T00:30:37.727198764Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:37.730068 containerd[1439]: time="2025-05-13T00:30:37.730030966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:37.731510 containerd[1439]: time="2025-05-13T00:30:37.731285672Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.180746945s"
May 13 00:30:37.731510 containerd[1439]: time="2025-05-13T00:30:37.731323421Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 13 00:30:37.732204 containerd[1439]: time="2025-05-13T00:30:37.732178341Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 13 00:30:39.382415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 00:30:39.391256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:30:39.480457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:30:39.484214 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:30:39.511795 containerd[1439]: time="2025-05-13T00:30:39.511118630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:39.511795 containerd[1439]: time="2025-05-13T00:30:39.511657614Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980"
May 13 00:30:39.513327 containerd[1439]: time="2025-05-13T00:30:39.513289256Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:39.516208 containerd[1439]: time="2025-05-13T00:30:39.516176860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:39.517754 containerd[1439]: time="2025-05-13T00:30:39.517716781Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.785502735s"
May 13 00:30:39.517788 containerd[1439]: time="2025-05-13T00:30:39.517756689Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 13 00:30:39.518829 containerd[1439]: time="2025-05-13T00:30:39.518792934Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 13 00:30:39.524410 kubelet[1801]: E0513 00:30:39.524368 1801 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:30:39.527433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:30:39.527579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:30:40.913091 containerd[1439]: time="2025-05-13T00:30:40.913037004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:40.913586 containerd[1439]: time="2025-05-13T00:30:40.913550934Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815"
May 13 00:30:40.914393 containerd[1439]: time="2025-05-13T00:30:40.914341256Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:40.917082 containerd[1439]: time="2025-05-13T00:30:40.917023040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:40.918542 containerd[1439]: time="2025-05-13T00:30:40.918213865Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.399389872s"
May 13 00:30:40.918542 containerd[1439]: time="2025-05-13T00:30:40.918250787Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 13 00:30:40.918641 containerd[1439]: time="2025-05-13T00:30:40.918611182Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 13 00:30:41.875711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275867013.mount: Deactivated successfully.
May 13 00:30:42.090956 containerd[1439]: time="2025-05-13T00:30:42.090898454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:42.091872 containerd[1439]: time="2025-05-13T00:30:42.091828468Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919"
May 13 00:30:42.092606 containerd[1439]: time="2025-05-13T00:30:42.092573718Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:42.094546 containerd[1439]: time="2025-05-13T00:30:42.094500015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:42.095105 containerd[1439]: time="2025-05-13T00:30:42.095067060Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.176429536s"
May 13 00:30:42.095105 containerd[1439]: time="2025-05-13T00:30:42.095102709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 13 00:30:42.095628 containerd[1439]: time="2025-05-13T00:30:42.095486278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 00:30:42.740383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371853182.mount: Deactivated successfully.
May 13 00:30:43.362230 containerd[1439]: time="2025-05-13T00:30:43.361940973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:43.363066 containerd[1439]: time="2025-05-13T00:30:43.362805533Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 13 00:30:43.363730 containerd[1439]: time="2025-05-13T00:30:43.363699644Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:43.366650 containerd[1439]: time="2025-05-13T00:30:43.366617708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:43.368014 containerd[1439]: time="2025-05-13T00:30:43.367983119Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.272405754s"
May 13 00:30:43.368014 containerd[1439]: time="2025-05-13T00:30:43.368019184Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 13 00:30:43.368657 containerd[1439]: time="2025-05-13T00:30:43.368522248Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 00:30:43.975611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571201938.mount: Deactivated successfully.
May 13 00:30:43.979732 containerd[1439]: time="2025-05-13T00:30:43.979690359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:43.980462 containerd[1439]: time="2025-05-13T00:30:43.980377286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 13 00:30:43.981132 containerd[1439]: time="2025-05-13T00:30:43.981094369Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:43.985075 containerd[1439]: time="2025-05-13T00:30:43.984308995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:43.985075 containerd[1439]: time="2025-05-13T00:30:43.984896854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 616.279397ms"
May 13 00:30:43.985075 containerd[1439]: time="2025-05-13T00:30:43.984939795Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 13 00:30:43.985667 containerd[1439]: time="2025-05-13T00:30:43.985443059Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 13 00:30:44.510362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189698120.mount: Deactivated successfully.
May 13 00:30:46.910864 containerd[1439]: time="2025-05-13T00:30:46.910801190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:46.911582 containerd[1439]: time="2025-05-13T00:30:46.911548966Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
May 13 00:30:46.912271 containerd[1439]: time="2025-05-13T00:30:46.912211569Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:46.915886 containerd[1439]: time="2025-05-13T00:30:46.915850589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:30:46.917680 containerd[1439]: time="2025-05-13T00:30:46.917635299Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.932165961s"
May 13 00:30:46.917680 containerd[1439]: time="2025-05-13T00:30:46.917672226Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 13 00:30:49.777973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 00:30:49.785199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:30:49.880343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:30:49.883967 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:30:49.916079 kubelet[1952]: E0513 00:30:49.916023 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:30:49.918622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:30:49.918772 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:30:52.256798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:30:52.265153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:30:52.285541 systemd[1]: Reloading requested from client PID 1968 ('systemctl') (unit session-5.scope)...
May 13 00:30:52.285559 systemd[1]: Reloading...
May 13 00:30:52.353944 zram_generator::config[2010]: No configuration found.
May 13 00:30:52.468544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:30:52.522356 systemd[1]: Reloading finished in 236 ms.
May 13 00:30:52.565559 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:30:52.568124 systemd[1]: kubelet.service: Deactivated successfully.
May 13 00:30:52.568346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:30:52.569759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:30:52.669035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:30:52.672512 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 00:30:52.707309 kubelet[2054]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:30:52.707309 kubelet[2054]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 00:30:52.707309 kubelet[2054]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:30:52.707597 kubelet[2054]: I0513 00:30:52.707408 2054 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:30:53.175367 kubelet[2054]: I0513 00:30:53.175324 2054 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 00:30:53.175367 kubelet[2054]: I0513 00:30:53.175354 2054 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:30:53.175624 kubelet[2054]: I0513 00:30:53.175600 2054 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 00:30:53.205034 kubelet[2054]: E0513 00:30:53.204993 2054 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 13 00:30:53.207560 kubelet[2054]: I0513 00:30:53.207530 2054 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:30:53.217225 kubelet[2054]: E0513 00:30:53.217052 2054 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 13 00:30:53.217225 kubelet[2054]: I0513 00:30:53.217091 2054 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 13 00:30:53.222213 kubelet[2054]: I0513 00:30:53.220353 2054 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:30:53.226204 kubelet[2054]: I0513 00:30:53.226164 2054 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 00:30:53.226342 kubelet[2054]: I0513 00:30:53.226308 2054 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:30:53.226495 kubelet[2054]: I0513 00:30:53.226332 2054 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 00:30:53.226644 kubelet[2054]: I0513 00:30:53.226621 2054 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:30:53.226644 kubelet[2054]: I0513 00:30:53.226634 2054 container_manager_linux.go:300] "Creating device plugin manager"
May 13 00:30:53.226828 kubelet[2054]: I0513 00:30:53.226806 2054 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:30:53.230329 kubelet[2054]: I0513 00:30:53.230071 2054 kubelet.go:408] "Attempting to sync node with API server"
May 13 00:30:53.230329 kubelet[2054]: I0513 00:30:53.230103 2054 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:30:53.230329 kubelet[2054]: I0513 00:30:53.230193 2054 kubelet.go:314] "Adding apiserver pod source"
May 13 00:30:53.230329 kubelet[2054]: I0513 00:30:53.230204 2054 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:30:53.233033 kubelet[2054]: W0513 00:30:53.232933 2054 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 13 00:30:53.233033 kubelet[2054]: W0513 00:30:53.232966 2054 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 13 00:30:53.233033 kubelet[2054]: E0513 00:30:53.232995 2054 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 13 00:30:53.233033 kubelet[2054]: E0513 00:30:53.233012 2054 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 13 00:30:53.234286 kubelet[2054]: I0513 00:30:53.234240 2054 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 13 00:30:53.236031 kubelet[2054]: I0513 00:30:53.236014 2054 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:30:53.238561 kubelet[2054]: W0513 00:30:53.238528 2054 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 00:30:53.241101 kubelet[2054]: I0513 00:30:53.241078 2054 server.go:1269] "Started kubelet"
May 13 00:30:53.241934 kubelet[2054]: I0513 00:30:53.241612 2054 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:30:53.241934 kubelet[2054]: I0513 00:30:53.241694 2054 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:30:53.241934 kubelet[2054]: I0513 00:30:53.241881 2054 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:30:53.242314 kubelet[2054]: I0513 00:30:53.242295 2054 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:30:53.242547 kubelet[2054]: I0513 00:30:53.242530 2054 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 00:30:53.243657 kubelet[2054]: I0513 00:30:53.243625 2054 server.go:460] "Adding debug handlers to kubelet server"
May 13 00:30:53.244557 kubelet[2054]: E0513 00:30:53.244527 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:30:53.245353 kubelet[2054]: I0513 00:30:53.244631 2054 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 00:30:53.245353 kubelet[2054]: I0513 00:30:53.244799 2054 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 00:30:53.245353 kubelet[2054]: I0513 00:30:53.244895 2054 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:30:53.245353 kubelet[2054]: W0513 00:30:53.245187 2054 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 13 00:30:53.245353 kubelet[2054]: E0513 00:30:53.245226 2054 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 13 00:30:53.245823 kubelet[2054]: E0513 00:30:53.245572 2054 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="200ms"
May 13 00:30:53.245928 kubelet[2054]: I0513 00:30:53.245904 2054 factory.go:221] Registration of the systemd container factory successfully
May 13 00:30:53.245988 kubelet[2054]: I0513 00:30:53.245972 2054 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:30:53.247126 kubelet[2054]: E0513 00:30:53.245434 2054 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.104:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eeec92ad93826 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:30:53.241047078 +0000 UTC m=+0.565634388,LastTimestamp:2025-05-13 00:30:53.241047078 +0000 UTC m=+0.565634388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 00:30:53.247314 kubelet[2054]: E0513 00:30:53.247295 2054 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 00:30:53.249122 kubelet[2054]: I0513 00:30:53.249062 2054 factory.go:221] Registration of the containerd container factory successfully
May 13 00:30:53.255938 kubelet[2054]: I0513 00:30:53.255885 2054 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:30:53.256832 kubelet[2054]: I0513 00:30:53.256812 2054 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:30:53.256930 kubelet[2054]: I0513 00:30:53.256876 2054 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 00:30:53.256930 kubelet[2054]: I0513 00:30:53.256892 2054 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 00:30:53.257866 kubelet[2054]: E0513 00:30:53.257823 2054 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:30:53.258403 kubelet[2054]: W0513 00:30:53.258332 2054 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 13 00:30:53.258403 kubelet[2054]: E0513 00:30:53.258374 2054 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 13 00:30:53.261451 kubelet[2054]: I0513 00:30:53.261432 2054 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 00:30:53.261451 kubelet[2054]: I0513 00:30:53.261448 2054 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 00:30:53.261553 kubelet[2054]: I0513 00:30:53.261464 2054 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:30:53.323424 kubelet[2054]: I0513 00:30:53.323388 2054 policy_none.go:49] "None policy: Start"
May 13 00:30:53.324246 kubelet[2054]: I0513 00:30:53.324186 2054 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 00:30:53.324246 kubelet[2054]: I0513 00:30:53.324225 2054 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:30:53.330377 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 13 00:30:53.341140 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 13 00:30:53.343700 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 13 00:30:53.345389 kubelet[2054]: E0513 00:30:53.345364 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:30:53.351888 kubelet[2054]: I0513 00:30:53.351769 2054 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:30:53.352032 kubelet[2054]: I0513 00:30:53.352015 2054 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 00:30:53.352076 kubelet[2054]: I0513 00:30:53.352032 2054 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:30:53.352263 kubelet[2054]: I0513 00:30:53.352244 2054 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:30:53.353210 kubelet[2054]: E0513 00:30:53.353187 2054 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 13 00:30:53.369290 systemd[1]: Created slice kubepods-burstable-pod0b12612aae3d6ed5fcc556a196752a89.slice - libcontainer container kubepods-burstable-pod0b12612aae3d6ed5fcc556a196752a89.slice.
May 13 00:30:53.386169 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 13 00:30:53.390756 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 13 00:30:53.447895 kubelet[2054]: E0513 00:30:53.446767 2054 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="400ms"
May 13 00:30:53.453778 kubelet[2054]: I0513 00:30:53.453753 2054 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 00:30:53.454154 kubelet[2054]: E0513 00:30:53.454113 2054 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
May 13 00:30:53.545546 kubelet[2054]: I0513 00:30:53.545501 2054 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:30:53.545546 kubelet[2054]: I0513 00:30:53.545536 2054 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:30:53.545652 kubelet[2054]: I0513 00:30:53.545562 2054 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:30:53.545652 kubelet[2054]: I0513
00:30:53.545583 2054 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b12612aae3d6ed5fcc556a196752a89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b12612aae3d6ed5fcc556a196752a89\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:53.545652 kubelet[2054]: I0513 00:30:53.545602 2054 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b12612aae3d6ed5fcc556a196752a89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b12612aae3d6ed5fcc556a196752a89\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:53.545652 kubelet[2054]: I0513 00:30:53.545619 2054 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:53.545652 kubelet[2054]: I0513 00:30:53.545636 2054 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 00:30:53.545756 kubelet[2054]: I0513 00:30:53.545651 2054 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b12612aae3d6ed5fcc556a196752a89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b12612aae3d6ed5fcc556a196752a89\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:53.545756 kubelet[2054]: I0513 00:30:53.545666 2054 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:53.655849 kubelet[2054]: I0513 00:30:53.655815 2054 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:30:53.656164 kubelet[2054]: E0513 00:30:53.656127 2054 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" May 13 00:30:53.688040 kubelet[2054]: E0513 00:30:53.688005 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:53.688090 kubelet[2054]: E0513 00:30:53.688073 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:53.688686 containerd[1439]: time="2025-05-13T00:30:53.688643823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 13 00:30:53.688985 containerd[1439]: time="2025-05-13T00:30:53.688696695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b12612aae3d6ed5fcc556a196752a89,Namespace:kube-system,Attempt:0,}" May 13 00:30:53.693185 kubelet[2054]: E0513 00:30:53.693159 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:53.693528 containerd[1439]: time="2025-05-13T00:30:53.693498807Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 13 00:30:53.847901 kubelet[2054]: E0513 00:30:53.847762 2054 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="800ms" May 13 00:30:54.058148 kubelet[2054]: I0513 00:30:54.058110 2054 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:30:54.058467 kubelet[2054]: E0513 00:30:54.058419 2054 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" May 13 00:30:54.189769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900674516.mount: Deactivated successfully. 
May 13 00:30:54.208280 containerd[1439]: time="2025-05-13T00:30:54.208229846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:30:54.209511 containerd[1439]: time="2025-05-13T00:30:54.209482092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:30:54.210090 containerd[1439]: time="2025-05-13T00:30:54.210034948Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:30:54.210895 containerd[1439]: time="2025-05-13T00:30:54.210867736Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:30:54.211258 containerd[1439]: time="2025-05-13T00:30:54.211234892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 00:30:54.211920 containerd[1439]: time="2025-05-13T00:30:54.211894875Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:30:54.212469 containerd[1439]: time="2025-05-13T00:30:54.212446209Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:30:54.214081 containerd[1439]: time="2025-05-13T00:30:54.214041142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:30:54.217159 
containerd[1439]: time="2025-05-13T00:30:54.217110505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 523.545811ms" May 13 00:30:54.218484 containerd[1439]: time="2025-05-13T00:30:54.218454700Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.692918ms" May 13 00:30:54.225610 containerd[1439]: time="2025-05-13T00:30:54.225563336Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.84285ms" May 13 00:30:54.398970 containerd[1439]: time="2025-05-13T00:30:54.398676410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:54.398970 containerd[1439]: time="2025-05-13T00:30:54.398730114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:54.398970 containerd[1439]: time="2025-05-13T00:30:54.398746534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:54.399552 containerd[1439]: time="2025-05-13T00:30:54.399278645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:54.399552 containerd[1439]: time="2025-05-13T00:30:54.399331828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:54.399552 containerd[1439]: time="2025-05-13T00:30:54.399343322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:54.399552 containerd[1439]: time="2025-05-13T00:30:54.399421374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:54.399552 containerd[1439]: time="2025-05-13T00:30:54.399326101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:54.400088 containerd[1439]: time="2025-05-13T00:30:54.399933782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:54.400088 containerd[1439]: time="2025-05-13T00:30:54.399988247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:54.400088 containerd[1439]: time="2025-05-13T00:30:54.400010914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:54.400446 containerd[1439]: time="2025-05-13T00:30:54.400334698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:54.422036 systemd[1]: Started cri-containerd-260e7b8ade03d7bfb796cb18642db6ca261b165e23e3dd7871b21586644e3962.scope - libcontainer container 260e7b8ade03d7bfb796cb18642db6ca261b165e23e3dd7871b21586644e3962. 
May 13 00:30:54.423460 systemd[1]: Started cri-containerd-b7deab2f2eb46c49bf0cee760ad49abbf94fb6d7753999a9901d3c55d1f217bf.scope - libcontainer container b7deab2f2eb46c49bf0cee760ad49abbf94fb6d7753999a9901d3c55d1f217bf. May 13 00:30:54.424805 systemd[1]: Started cri-containerd-f72bfac7e903045e9bad5c2693d791956b9193026bc8bbcea5695a26d4064e93.scope - libcontainer container f72bfac7e903045e9bad5c2693d791956b9193026bc8bbcea5695a26d4064e93. May 13 00:30:54.454515 containerd[1439]: time="2025-05-13T00:30:54.454395693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"260e7b8ade03d7bfb796cb18642db6ca261b165e23e3dd7871b21586644e3962\"" May 13 00:30:54.456003 containerd[1439]: time="2025-05-13T00:30:54.455701603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7deab2f2eb46c49bf0cee760ad49abbf94fb6d7753999a9901d3c55d1f217bf\"" May 13 00:30:54.456975 kubelet[2054]: E0513 00:30:54.456914 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:54.457050 kubelet[2054]: E0513 00:30:54.456914 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:54.457912 containerd[1439]: time="2025-05-13T00:30:54.457866732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b12612aae3d6ed5fcc556a196752a89,Namespace:kube-system,Attempt:0,} returns sandbox id \"f72bfac7e903045e9bad5c2693d791956b9193026bc8bbcea5695a26d4064e93\"" May 13 00:30:54.458649 kubelet[2054]: E0513 00:30:54.458627 2054 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:54.463510 containerd[1439]: time="2025-05-13T00:30:54.463481315Z" level=info msg="CreateContainer within sandbox \"b7deab2f2eb46c49bf0cee760ad49abbf94fb6d7753999a9901d3c55d1f217bf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:30:54.463935 containerd[1439]: time="2025-05-13T00:30:54.463814550Z" level=info msg="CreateContainer within sandbox \"260e7b8ade03d7bfb796cb18642db6ca261b165e23e3dd7871b21586644e3962\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:30:54.464349 containerd[1439]: time="2025-05-13T00:30:54.464313302Z" level=info msg="CreateContainer within sandbox \"f72bfac7e903045e9bad5c2693d791956b9193026bc8bbcea5695a26d4064e93\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:30:54.478051 containerd[1439]: time="2025-05-13T00:30:54.478016604Z" level=info msg="CreateContainer within sandbox \"260e7b8ade03d7bfb796cb18642db6ca261b165e23e3dd7871b21586644e3962\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a72cde24c49138dad4b3d11e016ed26b490e0ece3fc8b77d30db5561db09a2a3\"" May 13 00:30:54.478783 containerd[1439]: time="2025-05-13T00:30:54.478758885Z" level=info msg="StartContainer for \"a72cde24c49138dad4b3d11e016ed26b490e0ece3fc8b77d30db5561db09a2a3\"" May 13 00:30:54.481467 containerd[1439]: time="2025-05-13T00:30:54.481429975Z" level=info msg="CreateContainer within sandbox \"b7deab2f2eb46c49bf0cee760ad49abbf94fb6d7753999a9901d3c55d1f217bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b63931dc33761c5accea7e89eb07874ed7ffc7048af9f5c39ef23233a1aad3e5\"" May 13 00:30:54.481907 containerd[1439]: time="2025-05-13T00:30:54.481878587Z" level=info msg="StartContainer for \"b63931dc33761c5accea7e89eb07874ed7ffc7048af9f5c39ef23233a1aad3e5\"" May 13 
00:30:54.484903 containerd[1439]: time="2025-05-13T00:30:54.484866853Z" level=info msg="CreateContainer within sandbox \"f72bfac7e903045e9bad5c2693d791956b9193026bc8bbcea5695a26d4064e93\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"98cbae54130192ada198e1bbdffbdb4cf5768bb6786a54521fcb397929c3ce6c\"" May 13 00:30:54.485611 containerd[1439]: time="2025-05-13T00:30:54.485589591Z" level=info msg="StartContainer for \"98cbae54130192ada198e1bbdffbdb4cf5768bb6786a54521fcb397929c3ce6c\"" May 13 00:30:54.501620 kubelet[2054]: W0513 00:30:54.501527 2054 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused May 13 00:30:54.501736 kubelet[2054]: E0513 00:30:54.501641 2054 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" May 13 00:30:54.503011 systemd[1]: Started cri-containerd-a72cde24c49138dad4b3d11e016ed26b490e0ece3fc8b77d30db5561db09a2a3.scope - libcontainer container a72cde24c49138dad4b3d11e016ed26b490e0ece3fc8b77d30db5561db09a2a3. May 13 00:30:54.517084 systemd[1]: Started cri-containerd-98cbae54130192ada198e1bbdffbdb4cf5768bb6786a54521fcb397929c3ce6c.scope - libcontainer container 98cbae54130192ada198e1bbdffbdb4cf5768bb6786a54521fcb397929c3ce6c. May 13 00:30:54.518240 systemd[1]: Started cri-containerd-b63931dc33761c5accea7e89eb07874ed7ffc7048af9f5c39ef23233a1aad3e5.scope - libcontainer container b63931dc33761c5accea7e89eb07874ed7ffc7048af9f5c39ef23233a1aad3e5. 
May 13 00:30:54.549640 containerd[1439]: time="2025-05-13T00:30:54.547864093Z" level=info msg="StartContainer for \"a72cde24c49138dad4b3d11e016ed26b490e0ece3fc8b77d30db5561db09a2a3\" returns successfully" May 13 00:30:54.549640 containerd[1439]: time="2025-05-13T00:30:54.547957203Z" level=info msg="StartContainer for \"b63931dc33761c5accea7e89eb07874ed7ffc7048af9f5c39ef23233a1aad3e5\" returns successfully" May 13 00:30:54.567154 containerd[1439]: time="2025-05-13T00:30:54.562900937Z" level=info msg="StartContainer for \"98cbae54130192ada198e1bbdffbdb4cf5768bb6786a54521fcb397929c3ce6c\" returns successfully" May 13 00:30:54.595277 kubelet[2054]: W0513 00:30:54.594019 2054 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused May 13 00:30:54.595277 kubelet[2054]: E0513 00:30:54.594091 2054 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" May 13 00:30:54.595277 kubelet[2054]: W0513 00:30:54.594354 2054 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused May 13 00:30:54.595277 kubelet[2054]: E0513 00:30:54.594410 2054 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" 
logger="UnhandledError" May 13 00:30:54.642975 kubelet[2054]: W0513 00:30:54.642880 2054 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused May 13 00:30:54.642975 kubelet[2054]: E0513 00:30:54.642937 2054 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" May 13 00:30:54.648485 kubelet[2054]: E0513 00:30:54.648448 2054 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="1.6s" May 13 00:30:54.860147 kubelet[2054]: I0513 00:30:54.859986 2054 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:30:55.266925 kubelet[2054]: E0513 00:30:55.266126 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:55.268327 kubelet[2054]: E0513 00:30:55.268279 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:55.274212 kubelet[2054]: E0513 00:30:55.274192 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:56.064414 kubelet[2054]: I0513 00:30:56.064376 2054 kubelet_node_status.go:75] "Successfully registered 
node" node="localhost" May 13 00:30:56.064414 kubelet[2054]: E0513 00:30:56.064414 2054 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 00:30:56.076784 kubelet[2054]: E0513 00:30:56.076741 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:56.177435 kubelet[2054]: E0513 00:30:56.177383 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:56.275092 kubelet[2054]: E0513 00:30:56.274898 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:56.275092 kubelet[2054]: E0513 00:30:56.275009 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:56.278232 kubelet[2054]: E0513 00:30:56.278214 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:56.379252 kubelet[2054]: E0513 00:30:56.378913 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:56.479364 kubelet[2054]: E0513 00:30:56.479337 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:56.579772 kubelet[2054]: E0513 00:30:56.579748 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:56.680242 kubelet[2054]: E0513 00:30:56.680157 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:56.781211 kubelet[2054]: E0513 00:30:56.781170 2054 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:56.882105 kubelet[2054]: E0513 00:30:56.882070 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:56.983001 kubelet[2054]: E0513 00:30:56.982878 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:57.083675 kubelet[2054]: E0513 00:30:57.083628 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:57.184155 kubelet[2054]: E0513 00:30:57.184116 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:57.285300 kubelet[2054]: E0513 00:30:57.285205 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:57.385942 kubelet[2054]: E0513 00:30:57.385903 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:57.487052 kubelet[2054]: E0513 00:30:57.486989 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:57.587693 kubelet[2054]: E0513 00:30:57.587584 2054 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:57.990497 kubelet[2054]: E0513 00:30:57.990384 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:58.116959 systemd[1]: Reloading requested from client PID 2333 ('systemctl') (unit session-5.scope)... May 13 00:30:58.116976 systemd[1]: Reloading... May 13 00:30:58.180869 zram_generator::config[2372]: No configuration found. 
May 13 00:30:58.235814 kubelet[2054]: I0513 00:30:58.235789 2054 apiserver.go:52] "Watching apiserver" May 13 00:30:58.245690 kubelet[2054]: I0513 00:30:58.245584 2054 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 00:30:58.264179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:30:58.277134 kubelet[2054]: E0513 00:30:58.277113 2054 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:58.330398 systemd[1]: Reloading finished in 213 ms. May 13 00:30:58.361163 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:30:58.371980 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:30:58.372949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:30:58.389053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:30:58.473410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:30:58.477310 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:30:58.513834 kubelet[2414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:30:58.513834 kubelet[2414]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 13 00:30:58.513834 kubelet[2414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:30:58.514124 kubelet[2414]: I0513 00:30:58.513851 2414 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:30:58.519690 kubelet[2414]: I0513 00:30:58.519656 2414 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 00:30:58.520308 kubelet[2414]: I0513 00:30:58.519789 2414 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:30:58.520308 kubelet[2414]: I0513 00:30:58.520029 2414 server.go:929] "Client rotation is on, will bootstrap in background" May 13 00:30:58.521357 kubelet[2414]: I0513 00:30:58.521293 2414 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:30:58.524529 kubelet[2414]: I0513 00:30:58.524509 2414 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:30:58.529478 kubelet[2414]: E0513 00:30:58.529438 2414 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:30:58.529478 kubelet[2414]: I0513 00:30:58.529464 2414 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:30:58.531290 kubelet[2414]: I0513 00:30:58.531262 2414 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:30:58.531385 kubelet[2414]: I0513 00:30:58.531369 2414 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 00:30:58.531549 kubelet[2414]: I0513 00:30:58.531528 2414 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:30:58.531708 kubelet[2414]: I0513 00:30:58.531549 2414 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 13 00:30:58.531787 kubelet[2414]: I0513 00:30:58.531711 2414 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:30:58.531787 kubelet[2414]: I0513 00:30:58.531719 2414 container_manager_linux.go:300] "Creating device plugin manager" May 13 00:30:58.531787 kubelet[2414]: I0513 00:30:58.531750 2414 state_mem.go:36] "Initialized new in-memory state store" May 13 00:30:58.531888 kubelet[2414]: I0513 00:30:58.531861 2414 kubelet.go:408] "Attempting to sync node with API server" May 13 00:30:58.531888 kubelet[2414]: I0513 00:30:58.531873 2414 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:30:58.531930 kubelet[2414]: I0513 00:30:58.531894 2414 kubelet.go:314] "Adding apiserver pod source" May 13 00:30:58.531930 kubelet[2414]: I0513 00:30:58.531903 2414 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:30:58.532831 kubelet[2414]: I0513 00:30:58.532288 2414 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:30:58.532831 kubelet[2414]: I0513 00:30:58.532715 2414 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:30:58.533120 kubelet[2414]: I0513 00:30:58.533097 2414 server.go:1269] "Started kubelet" May 13 00:30:58.535956 kubelet[2414]: I0513 00:30:58.535882 2414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:30:58.536145 kubelet[2414]: I0513 00:30:58.536122 2414 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:30:58.536202 kubelet[2414]: I0513 00:30:58.536176 2414 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:30:58.536883 kubelet[2414]: E0513 00:30:58.536861 2414 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:30:58.536993 kubelet[2414]: I0513 00:30:58.536959 2414 server.go:460] "Adding debug handlers to kubelet server" May 13 00:30:58.537709 kubelet[2414]: I0513 00:30:58.537675 2414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:30:58.537959 kubelet[2414]: I0513 00:30:58.537938 2414 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:30:58.539421 kubelet[2414]: I0513 00:30:58.539400 2414 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 00:30:58.539981 kubelet[2414]: I0513 00:30:58.539966 2414 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 00:30:58.542123 kubelet[2414]: E0513 00:30:58.540796 2414 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:58.542227 kubelet[2414]: I0513 00:30:58.541599 2414 reconciler.go:26] "Reconciler: start to sync state" May 13 00:30:58.551174 kubelet[2414]: I0513 00:30:58.551119 2414 factory.go:221] Registration of the containerd container factory successfully May 13 00:30:58.551174 kubelet[2414]: I0513 00:30:58.551136 2414 factory.go:221] Registration of the systemd container factory successfully May 13 00:30:58.551375 kubelet[2414]: I0513 00:30:58.551207 2414 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:30:58.564032 kubelet[2414]: I0513 00:30:58.563990 2414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:30:58.566265 kubelet[2414]: I0513 00:30:58.566235 2414 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:30:58.566265 kubelet[2414]: I0513 00:30:58.566259 2414 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:30:58.566342 kubelet[2414]: I0513 00:30:58.566275 2414 kubelet.go:2321] "Starting kubelet main sync loop" May 13 00:30:58.566342 kubelet[2414]: E0513 00:30:58.566326 2414 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:30:58.587598 kubelet[2414]: I0513 00:30:58.587573 2414 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:30:58.587598 kubelet[2414]: I0513 00:30:58.587591 2414 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:30:58.587708 kubelet[2414]: I0513 00:30:58.587611 2414 state_mem.go:36] "Initialized new in-memory state store" May 13 00:30:58.587757 kubelet[2414]: I0513 00:30:58.587739 2414 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:30:58.587786 kubelet[2414]: I0513 00:30:58.587756 2414 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:30:58.587786 kubelet[2414]: I0513 00:30:58.587774 2414 policy_none.go:49] "None policy: Start" May 13 00:30:58.588376 kubelet[2414]: I0513 00:30:58.588357 2414 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:30:58.588436 kubelet[2414]: I0513 00:30:58.588384 2414 state_mem.go:35] "Initializing new in-memory state store" May 13 00:30:58.588538 kubelet[2414]: I0513 00:30:58.588521 2414 state_mem.go:75] "Updated machine memory state" May 13 00:30:58.592095 kubelet[2414]: I0513 00:30:58.592072 2414 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:30:58.592250 kubelet[2414]: I0513 00:30:58.592225 2414 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:30:58.592295 kubelet[2414]: I0513 00:30:58.592242 2414 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:30:58.592624 kubelet[2414]: I0513 00:30:58.592401 2414 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:30:58.675211 kubelet[2414]: E0513 00:30:58.675175 2414 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 00:30:58.695809 kubelet[2414]: I0513 00:30:58.695774 2414 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:30:58.704335 kubelet[2414]: I0513 00:30:58.704301 2414 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 13 00:30:58.704413 kubelet[2414]: I0513 00:30:58.704383 2414 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 00:30:58.743116 kubelet[2414]: I0513 00:30:58.743069 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b12612aae3d6ed5fcc556a196752a89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b12612aae3d6ed5fcc556a196752a89\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:58.743116 kubelet[2414]: I0513 00:30:58.743109 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b12612aae3d6ed5fcc556a196752a89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b12612aae3d6ed5fcc556a196752a89\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:58.743267 kubelet[2414]: I0513 00:30:58.743137 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 13 00:30:58.743267 kubelet[2414]: I0513 00:30:58.743178 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:58.743267 kubelet[2414]: I0513 00:30:58.743222 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:58.743267 kubelet[2414]: I0513 00:30:58.743265 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 00:30:58.743359 kubelet[2414]: I0513 00:30:58.743282 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b12612aae3d6ed5fcc556a196752a89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b12612aae3d6ed5fcc556a196752a89\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:58.743359 kubelet[2414]: I0513 00:30:58.743298 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:58.743359 kubelet[2414]: I0513 00:30:58.743327 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:58.974500 kubelet[2414]: E0513 00:30:58.974380 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:58.975483 kubelet[2414]: E0513 00:30:58.975452 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:58.975599 kubelet[2414]: E0513 00:30:58.975575 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:59.532713 kubelet[2414]: I0513 00:30:59.532679 2414 apiserver.go:52] "Watching apiserver" May 13 00:30:59.542937 kubelet[2414]: I0513 00:30:59.542900 2414 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 00:30:59.577942 kubelet[2414]: E0513 00:30:59.577685 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:59.577942 kubelet[2414]: E0513 00:30:59.577829 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:59.584032 kubelet[2414]: E0513 00:30:59.583942 
2414 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:30:59.584571 kubelet[2414]: E0513 00:30:59.584282 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:59.598734 kubelet[2414]: I0513 00:30:59.598679 2414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5986635919999999 podStartE2EDuration="1.598663592s" podCreationTimestamp="2025-05-13 00:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:30:59.598569646 +0000 UTC m=+1.118526477" watchObservedRunningTime="2025-05-13 00:30:59.598663592 +0000 UTC m=+1.118620383" May 13 00:30:59.624080 kubelet[2414]: I0513 00:30:59.623949 2414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.623931207 podStartE2EDuration="1.623931207s" podCreationTimestamp="2025-05-13 00:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:30:59.606946392 +0000 UTC m=+1.126903263" watchObservedRunningTime="2025-05-13 00:30:59.623931207 +0000 UTC m=+1.143888038" May 13 00:30:59.646688 kubelet[2414]: I0513 00:30:59.646375 2414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.646359019 podStartE2EDuration="2.646359019s" podCreationTimestamp="2025-05-13 00:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:30:59.624133429 +0000 UTC 
m=+1.144090260" watchObservedRunningTime="2025-05-13 00:30:59.646359019 +0000 UTC m=+1.166315810" May 13 00:30:59.837988 sudo[1568]: pam_unix(sudo:session): session closed for user root May 13 00:30:59.839956 sshd[1565]: pam_unix(sshd:session): session closed for user core May 13 00:30:59.843497 systemd[1]: sshd@4-10.0.0.104:22-10.0.0.1:40614.service: Deactivated successfully. May 13 00:30:59.845654 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:30:59.846032 systemd[1]: session-5.scope: Consumed 6.539s CPU time, 155.7M memory peak, 0B memory swap peak. May 13 00:30:59.847078 systemd-logind[1415]: Session 5 logged out. Waiting for processes to exit. May 13 00:30:59.848313 systemd-logind[1415]: Removed session 5. May 13 00:31:00.579238 kubelet[2414]: E0513 00:31:00.579209 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:01.580211 kubelet[2414]: E0513 00:31:01.580184 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:02.882029 kubelet[2414]: E0513 00:31:02.881932 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:04.426718 kubelet[2414]: I0513 00:31:04.426678 2414 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:31:04.427163 containerd[1439]: time="2025-05-13T00:31:04.427098660Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 00:31:04.427467 kubelet[2414]: I0513 00:31:04.427266 2414 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 00:31:04.742576 kubelet[2414]: E0513 00:31:04.742464 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:31:05.183957 systemd[1]: Created slice kubepods-besteffort-pod8b94a99c_499a_48b8_a3b4_1b083181eb90.slice - libcontainer container kubepods-besteffort-pod8b94a99c_499a_48b8_a3b4_1b083181eb90.slice.
May 13 00:31:05.185629 kubelet[2414]: I0513 00:31:05.185582 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/ee112e02-6e99-45a3-b70b-f9c67ee61579-cni-plugin\") pod \"kube-flannel-ds-6vllt\" (UID: \"ee112e02-6e99-45a3-b70b-f9c67ee61579\") " pod="kube-flannel/kube-flannel-ds-6vllt"
May 13 00:31:05.185629 kubelet[2414]: I0513 00:31:05.185620 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/ee112e02-6e99-45a3-b70b-f9c67ee61579-flannel-cfg\") pod \"kube-flannel-ds-6vllt\" (UID: \"ee112e02-6e99-45a3-b70b-f9c67ee61579\") " pod="kube-flannel/kube-flannel-ds-6vllt"
May 13 00:31:05.185739 kubelet[2414]: I0513 00:31:05.185644 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnk95\" (UniqueName: \"kubernetes.io/projected/ee112e02-6e99-45a3-b70b-f9c67ee61579-kube-api-access-fnk95\") pod \"kube-flannel-ds-6vllt\" (UID: \"ee112e02-6e99-45a3-b70b-f9c67ee61579\") " pod="kube-flannel/kube-flannel-ds-6vllt"
May 13 00:31:05.185739 kubelet[2414]: I0513 00:31:05.185664 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b94a99c-499a-48b8-a3b4-1b083181eb90-kube-proxy\") pod \"kube-proxy-g6tt2\" (UID: \"8b94a99c-499a-48b8-a3b4-1b083181eb90\") " pod="kube-system/kube-proxy-g6tt2"
May 13 00:31:05.185739 kubelet[2414]: I0513 00:31:05.185679 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7fxd\" (UniqueName: \"kubernetes.io/projected/8b94a99c-499a-48b8-a3b4-1b083181eb90-kube-api-access-s7fxd\") pod \"kube-proxy-g6tt2\" (UID: \"8b94a99c-499a-48b8-a3b4-1b083181eb90\") " pod="kube-system/kube-proxy-g6tt2"
May 13 00:31:05.185739 kubelet[2414]: I0513 00:31:05.185693 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee112e02-6e99-45a3-b70b-f9c67ee61579-xtables-lock\") pod \"kube-flannel-ds-6vllt\" (UID: \"ee112e02-6e99-45a3-b70b-f9c67ee61579\") " pod="kube-flannel/kube-flannel-ds-6vllt"
May 13 00:31:05.185739 kubelet[2414]: I0513 00:31:05.185712 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b94a99c-499a-48b8-a3b4-1b083181eb90-xtables-lock\") pod \"kube-proxy-g6tt2\" (UID: \"8b94a99c-499a-48b8-a3b4-1b083181eb90\") " pod="kube-system/kube-proxy-g6tt2"
May 13 00:31:05.185884 kubelet[2414]: I0513 00:31:05.185726 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ee112e02-6e99-45a3-b70b-f9c67ee61579-run\") pod \"kube-flannel-ds-6vllt\" (UID: \"ee112e02-6e99-45a3-b70b-f9c67ee61579\") " pod="kube-flannel/kube-flannel-ds-6vllt"
May 13 00:31:05.185884 kubelet[2414]: I0513 00:31:05.185742 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/ee112e02-6e99-45a3-b70b-f9c67ee61579-cni\") pod \"kube-flannel-ds-6vllt\" (UID: \"ee112e02-6e99-45a3-b70b-f9c67ee61579\") " pod="kube-flannel/kube-flannel-ds-6vllt"
May 13 00:31:05.185884 kubelet[2414]: I0513 00:31:05.185758 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b94a99c-499a-48b8-a3b4-1b083181eb90-lib-modules\") pod \"kube-proxy-g6tt2\" (UID: \"8b94a99c-499a-48b8-a3b4-1b083181eb90\") " pod="kube-system/kube-proxy-g6tt2"
May 13 00:31:05.198727 systemd[1]: Created slice kubepods-burstable-podee112e02_6e99_45a3_b70b_f9c67ee61579.slice - libcontainer container kubepods-burstable-podee112e02_6e99_45a3_b70b_f9c67ee61579.slice.
May 13 00:31:05.496525 kubelet[2414]: E0513 00:31:05.496393 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:31:05.497112 containerd[1439]: time="2025-05-13T00:31:05.497070766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g6tt2,Uid:8b94a99c-499a-48b8-a3b4-1b083181eb90,Namespace:kube-system,Attempt:0,}"
May 13 00:31:05.501448 kubelet[2414]: E0513 00:31:05.501422 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:31:05.501921 containerd[1439]: time="2025-05-13T00:31:05.501883027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6vllt,Uid:ee112e02-6e99-45a3-b70b-f9c67ee61579,Namespace:kube-flannel,Attempt:0,}"
May 13 00:31:05.517228 containerd[1439]: time="2025-05-13T00:31:05.517114167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:31:05.517228 containerd[1439]: time="2025-05-13T00:31:05.517185294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:31:05.517228 containerd[1439]: time="2025-05-13T00:31:05.517210110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:31:05.517396 containerd[1439]: time="2025-05-13T00:31:05.517300889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:31:05.523771 containerd[1439]: time="2025-05-13T00:31:05.523649593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:31:05.523771 containerd[1439]: time="2025-05-13T00:31:05.523737410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:31:05.523771 containerd[1439]: time="2025-05-13T00:31:05.523754741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:31:05.524002 containerd[1439]: time="2025-05-13T00:31:05.523877021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:31:05.545113 systemd[1]: Started cri-containerd-2c70797e9a2b7da199c0dd241f942276f919ff6ca70b3e8c2e58b339da2d1430.scope - libcontainer container 2c70797e9a2b7da199c0dd241f942276f919ff6ca70b3e8c2e58b339da2d1430.
May 13 00:31:05.546442 systemd[1]: Started cri-containerd-6fd3966c349a486917ae4d03818b2cc857278fad97c7d46a1f1cccc2546cdf80.scope - libcontainer container 6fd3966c349a486917ae4d03818b2cc857278fad97c7d46a1f1cccc2546cdf80.
May 13 00:31:05.568560 containerd[1439]: time="2025-05-13T00:31:05.568494621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g6tt2,Uid:8b94a99c-499a-48b8-a3b4-1b083181eb90,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fd3966c349a486917ae4d03818b2cc857278fad97c7d46a1f1cccc2546cdf80\""
May 13 00:31:05.569464 kubelet[2414]: E0513 00:31:05.569249 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:31:05.572312 containerd[1439]: time="2025-05-13T00:31:05.572275968Z" level=info msg="CreateContainer within sandbox \"6fd3966c349a486917ae4d03818b2cc857278fad97c7d46a1f1cccc2546cdf80\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 00:31:05.577528 containerd[1439]: time="2025-05-13T00:31:05.577478564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6vllt,Uid:ee112e02-6e99-45a3-b70b-f9c67ee61579,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"2c70797e9a2b7da199c0dd241f942276f919ff6ca70b3e8c2e58b339da2d1430\""
May 13 00:31:05.578701 kubelet[2414]: E0513 00:31:05.578680 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:31:05.579648 containerd[1439]: time="2025-05-13T00:31:05.579620802Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
May 13 00:31:05.589017 containerd[1439]: time="2025-05-13T00:31:05.588975067Z" level=info msg="CreateContainer within sandbox \"6fd3966c349a486917ae4d03818b2cc857278fad97c7d46a1f1cccc2546cdf80\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ba1bd8d8e288296d21b8f40604cb76b247b6a3592d6c7c0f185333dff8633a97\""
May 13 00:31:05.589532 containerd[1439]: time="2025-05-13T00:31:05.589416395Z" level=info msg="StartContainer for \"ba1bd8d8e288296d21b8f40604cb76b247b6a3592d6c7c0f185333dff8633a97\""
May 13 00:31:05.591235 kubelet[2414]: E0513 00:31:05.591181 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:31:05.616002 systemd[1]: Started cri-containerd-ba1bd8d8e288296d21b8f40604cb76b247b6a3592d6c7c0f185333dff8633a97.scope - libcontainer container ba1bd8d8e288296d21b8f40604cb76b247b6a3592d6c7c0f185333dff8633a97.
May 13 00:31:05.638497 containerd[1439]: time="2025-05-13T00:31:05.638370905Z" level=info msg="StartContainer for \"ba1bd8d8e288296d21b8f40604cb76b247b6a3592d6c7c0f185333dff8633a97\" returns successfully"
May 13 00:31:06.593697 kubelet[2414]: E0513 00:31:06.593645 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:31:06.605327 kubelet[2414]: I0513 00:31:06.602817 2414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g6tt2" podStartSLOduration=1.602791285 podStartE2EDuration="1.602791285s" podCreationTimestamp="2025-05-13 00:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:31:06.60208813 +0000 UTC m=+8.122044921" watchObservedRunningTime="2025-05-13 00:31:06.602791285 +0000 UTC m=+8.122748116"
May 13 00:31:06.834155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598099378.mount: Deactivated successfully.
May 13 00:31:06.859373 containerd[1439]: time="2025-05-13T00:31:06.859254029Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:31:06.860207 containerd[1439]: time="2025-05-13T00:31:06.859973353Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532"
May 13 00:31:06.860905 containerd[1439]: time="2025-05-13T00:31:06.860831283Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:31:06.863124 containerd[1439]: time="2025-05-13T00:31:06.863071107Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:31:06.864062 containerd[1439]: time="2025-05-13T00:31:06.864027498Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.284374075s"
May 13 00:31:06.864062 containerd[1439]: time="2025-05-13T00:31:06.864061960Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
May 13 00:31:06.866451 containerd[1439]: time="2025-05-13T00:31:06.866419576Z" level=info msg="CreateContainer within sandbox \"2c70797e9a2b7da199c0dd241f942276f919ff6ca70b3e8c2e58b339da2d1430\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
May 13 00:31:06.878789 containerd[1439]: time="2025-05-13T00:31:06.878736667Z" level=info msg="CreateContainer within sandbox \"2c70797e9a2b7da199c0dd241f942276f919ff6ca70b3e8c2e58b339da2d1430\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"57c563beb2c07b9a34abe60f530d68d2b69b851cc5793cc9c99da0ae4513270a\""
May 13 00:31:06.879313 containerd[1439]: time="2025-05-13T00:31:06.879173056Z" level=info msg="StartContainer for \"57c563beb2c07b9a34abe60f530d68d2b69b851cc5793cc9c99da0ae4513270a\""
May 13 00:31:06.902017 systemd[1]: Started cri-containerd-57c563beb2c07b9a34abe60f530d68d2b69b851cc5793cc9c99da0ae4513270a.scope - libcontainer container 57c563beb2c07b9a34abe60f530d68d2b69b851cc5793cc9c99da0ae4513270a.
May 13 00:31:06.928253 containerd[1439]: time="2025-05-13T00:31:06.928216640Z" level=info msg="StartContainer for \"57c563beb2c07b9a34abe60f530d68d2b69b851cc5793cc9c99da0ae4513270a\" returns successfully"
May 13 00:31:06.934075 systemd[1]: cri-containerd-57c563beb2c07b9a34abe60f530d68d2b69b851cc5793cc9c99da0ae4513270a.scope: Deactivated successfully.
May 13 00:31:06.968689 containerd[1439]: time="2025-05-13T00:31:06.968635174Z" level=info msg="shim disconnected" id=57c563beb2c07b9a34abe60f530d68d2b69b851cc5793cc9c99da0ae4513270a namespace=k8s.io
May 13 00:31:06.968689 containerd[1439]: time="2025-05-13T00:31:06.968683203Z" level=warning msg="cleaning up after shim disconnected" id=57c563beb2c07b9a34abe60f530d68d2b69b851cc5793cc9c99da0ae4513270a namespace=k8s.io
May 13 00:31:06.968689 containerd[1439]: time="2025-05-13T00:31:06.968691528Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:31:07.596390 kubelet[2414]: E0513 00:31:07.596137 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:31:07.596390 kubelet[2414]: E0513 00:31:07.596215 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:31:07.597346 containerd[1439]: time="2025-05-13T00:31:07.597226394Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
May 13 00:31:08.764553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185019677.mount: Deactivated successfully.
May 13 00:31:09.356603 containerd[1439]: time="2025-05-13T00:31:09.356347930Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:31:09.357534 containerd[1439]: time="2025-05-13T00:31:09.357211144Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
May 13 00:31:09.358314 containerd[1439]: time="2025-05-13T00:31:09.358278386Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:31:09.361275 containerd[1439]: time="2025-05-13T00:31:09.361235421Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:31:09.362542 containerd[1439]: time="2025-05-13T00:31:09.362455263Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.765189687s"
May 13 00:31:09.362542 containerd[1439]: time="2025-05-13T00:31:09.362490402Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
May 13 00:31:09.364932 containerd[1439]: time="2025-05-13T00:31:09.364894346Z" level=info msg="CreateContainer within sandbox \"2c70797e9a2b7da199c0dd241f942276f919ff6ca70b3e8c2e58b339da2d1430\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 13 00:31:09.378445 containerd[1439]: time="2025-05-13T00:31:09.378398931Z" level=info msg="CreateContainer within sandbox \"2c70797e9a2b7da199c0dd241f942276f919ff6ca70b3e8c2e58b339da2d1430\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f8463bdd6f1158c7eb67cef88ee4bb6c49a4a42e397af69ea066c770923b99d6\""
May 13 00:31:09.378827 containerd[1439]: time="2025-05-13T00:31:09.378798141Z" level=info msg="StartContainer for \"f8463bdd6f1158c7eb67cef88ee4bb6c49a4a42e397af69ea066c770923b99d6\""
May 13 00:31:09.405037 systemd[1]: Started cri-containerd-f8463bdd6f1158c7eb67cef88ee4bb6c49a4a42e397af69ea066c770923b99d6.scope - libcontainer container f8463bdd6f1158c7eb67cef88ee4bb6c49a4a42e397af69ea066c770923b99d6.
May 13 00:31:09.444555 systemd[1]: cri-containerd-f8463bdd6f1158c7eb67cef88ee4bb6c49a4a42e397af69ea066c770923b99d6.scope: Deactivated successfully.
May 13 00:31:09.487954 containerd[1439]: time="2025-05-13T00:31:09.487818734Z" level=info msg="StartContainer for \"f8463bdd6f1158c7eb67cef88ee4bb6c49a4a42e397af69ea066c770923b99d6\" returns successfully"
May 13 00:31:09.516552 containerd[1439]: time="2025-05-13T00:31:09.516480413Z" level=info msg="shim disconnected" id=f8463bdd6f1158c7eb67cef88ee4bb6c49a4a42e397af69ea066c770923b99d6 namespace=k8s.io
May 13 00:31:09.516916 containerd[1439]: time="2025-05-13T00:31:09.516735347Z" level=warning msg="cleaning up after shim disconnected" id=f8463bdd6f1158c7eb67cef88ee4bb6c49a4a42e397af69ea066c770923b99d6 namespace=k8s.io
May 13 00:31:09.516916 containerd[1439]: time="2025-05-13T00:31:09.516750875Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:31:09.532607 kubelet[2414]: I0513 00:31:09.532530 2414 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 13 00:31:09.557073 systemd[1]: Created slice kubepods-burstable-pod82bb55cf_fcf8_4be5_9394_37d759a44e39.slice - libcontainer container kubepods-burstable-pod82bb55cf_fcf8_4be5_9394_37d759a44e39.slice.
May 13 00:31:09.561535 systemd[1]: Created slice kubepods-burstable-pod4020d366_67ac_4fb9_8c36_e05727ce41a8.slice - libcontainer container kubepods-burstable-pod4020d366_67ac_4fb9_8c36_e05727ce41a8.slice. May 13 00:31:09.604771 kubelet[2414]: E0513 00:31:09.604218 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:09.606816 containerd[1439]: time="2025-05-13T00:31:09.606644526Z" level=info msg="CreateContainer within sandbox \"2c70797e9a2b7da199c0dd241f942276f919ff6ca70b3e8c2e58b339da2d1430\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 13 00:31:09.614063 kubelet[2414]: I0513 00:31:09.614012 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grdj8\" (UniqueName: \"kubernetes.io/projected/82bb55cf-fcf8-4be5-9394-37d759a44e39-kube-api-access-grdj8\") pod \"coredns-6f6b679f8f-hvrfx\" (UID: \"82bb55cf-fcf8-4be5-9394-37d759a44e39\") " pod="kube-system/coredns-6f6b679f8f-hvrfx" May 13 00:31:09.614063 kubelet[2414]: I0513 00:31:09.614052 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbfrs\" (UniqueName: \"kubernetes.io/projected/4020d366-67ac-4fb9-8c36-e05727ce41a8-kube-api-access-rbfrs\") pod \"coredns-6f6b679f8f-bbptk\" (UID: \"4020d366-67ac-4fb9-8c36-e05727ce41a8\") " pod="kube-system/coredns-6f6b679f8f-bbptk" May 13 00:31:09.614179 kubelet[2414]: I0513 00:31:09.614095 2414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4020d366-67ac-4fb9-8c36-e05727ce41a8-config-volume\") pod \"coredns-6f6b679f8f-bbptk\" (UID: \"4020d366-67ac-4fb9-8c36-e05727ce41a8\") " pod="kube-system/coredns-6f6b679f8f-bbptk" May 13 00:31:09.614179 kubelet[2414]: I0513 00:31:09.614115 2414 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82bb55cf-fcf8-4be5-9394-37d759a44e39-config-volume\") pod \"coredns-6f6b679f8f-hvrfx\" (UID: \"82bb55cf-fcf8-4be5-9394-37d759a44e39\") " pod="kube-system/coredns-6f6b679f8f-hvrfx" May 13 00:31:09.621098 containerd[1439]: time="2025-05-13T00:31:09.621059029Z" level=info msg="CreateContainer within sandbox \"2c70797e9a2b7da199c0dd241f942276f919ff6ca70b3e8c2e58b339da2d1430\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"05d3ee944d104f364de1914504e2625c5735bea95fce7345ab84149ba5a1598a\"" May 13 00:31:09.621505 containerd[1439]: time="2025-05-13T00:31:09.621480331Z" level=info msg="StartContainer for \"05d3ee944d104f364de1914504e2625c5735bea95fce7345ab84149ba5a1598a\"" May 13 00:31:09.648252 systemd[1]: Started cri-containerd-05d3ee944d104f364de1914504e2625c5735bea95fce7345ab84149ba5a1598a.scope - libcontainer container 05d3ee944d104f364de1914504e2625c5735bea95fce7345ab84149ba5a1598a. 
May 13 00:31:09.670397 containerd[1439]: time="2025-05-13T00:31:09.668795262Z" level=info msg="StartContainer for \"05d3ee944d104f364de1914504e2625c5735bea95fce7345ab84149ba5a1598a\" returns successfully" May 13 00:31:09.860871 kubelet[2414]: E0513 00:31:09.860485 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:09.861233 containerd[1439]: time="2025-05-13T00:31:09.860994614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hvrfx,Uid:82bb55cf-fcf8-4be5-9394-37d759a44e39,Namespace:kube-system,Attempt:0,}" May 13 00:31:09.865629 kubelet[2414]: E0513 00:31:09.865565 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:09.866074 containerd[1439]: time="2025-05-13T00:31:09.866037347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbptk,Uid:4020d366-67ac-4fb9-8c36-e05727ce41a8,Namespace:kube-system,Attempt:0,}" May 13 00:31:09.906987 containerd[1439]: time="2025-05-13T00:31:09.906927419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hvrfx,Uid:82bb55cf-fcf8-4be5-9394-37d759a44e39,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e57d45305d2647d939e4fabcb8300dbf0633899ffd51e6a3f6422158cf5d4bfc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:31:09.907390 kubelet[2414]: E0513 00:31:09.907242 2414 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e57d45305d2647d939e4fabcb8300dbf0633899ffd51e6a3f6422158cf5d4bfc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" May 13 00:31:09.907390 kubelet[2414]: E0513 00:31:09.907307 2414 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e57d45305d2647d939e4fabcb8300dbf0633899ffd51e6a3f6422158cf5d4bfc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-hvrfx" May 13 00:31:09.910618 kubelet[2414]: E0513 00:31:09.910311 2414 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e57d45305d2647d939e4fabcb8300dbf0633899ffd51e6a3f6422158cf5d4bfc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-hvrfx" May 13 00:31:09.910618 kubelet[2414]: E0513 00:31:09.910387 2414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hvrfx_kube-system(82bb55cf-fcf8-4be5-9394-37d759a44e39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hvrfx_kube-system(82bb55cf-fcf8-4be5-9394-37d759a44e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e57d45305d2647d939e4fabcb8300dbf0633899ffd51e6a3f6422158cf5d4bfc\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-hvrfx" podUID="82bb55cf-fcf8-4be5-9394-37d759a44e39" May 13 00:31:09.914828 containerd[1439]: time="2025-05-13T00:31:09.914774947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbptk,Uid:4020d366-67ac-4fb9-8c36-e05727ce41a8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"83d0fa55f3194ce566d935fa7f5bb47cfb6826fabe1076e3e6f423f5c65e0b15\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:31:09.915016 kubelet[2414]: E0513 00:31:09.914989 2414 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d0fa55f3194ce566d935fa7f5bb47cfb6826fabe1076e3e6f423f5c65e0b15\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:31:09.915050 kubelet[2414]: E0513 00:31:09.915036 2414 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d0fa55f3194ce566d935fa7f5bb47cfb6826fabe1076e3e6f423f5c65e0b15\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-bbptk" May 13 00:31:09.915087 kubelet[2414]: E0513 00:31:09.915055 2414 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d0fa55f3194ce566d935fa7f5bb47cfb6826fabe1076e3e6f423f5c65e0b15\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-bbptk" May 13 00:31:09.915112 kubelet[2414]: E0513 00:31:09.915096 2414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bbptk_kube-system(4020d366-67ac-4fb9-8c36-e05727ce41a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bbptk_kube-system(4020d366-67ac-4fb9-8c36-e05727ce41a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83d0fa55f3194ce566d935fa7f5bb47cfb6826fabe1076e3e6f423f5c65e0b15\\\": plugin 
type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-bbptk" podUID="4020d366-67ac-4fb9-8c36-e05727ce41a8" May 13 00:31:09.954330 kubelet[2414]: E0513 00:31:09.954266 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:10.607592 kubelet[2414]: E0513 00:31:10.607560 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:10.618057 kubelet[2414]: I0513 00:31:10.617984 2414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-6vllt" podStartSLOduration=1.833838465 podStartE2EDuration="5.617968852s" podCreationTimestamp="2025-05-13 00:31:05 +0000 UTC" firstStartedPulling="2025-05-13 00:31:05.579271654 +0000 UTC m=+7.099228485" lastFinishedPulling="2025-05-13 00:31:09.363402081 +0000 UTC m=+10.883358872" observedRunningTime="2025-05-13 00:31:10.617690353 +0000 UTC m=+12.137647184" watchObservedRunningTime="2025-05-13 00:31:10.617968852 +0000 UTC m=+12.137925683" May 13 00:31:10.695999 systemd[1]: run-netns-cni\x2dfa02d98e\x2d304d\x2d9030\x2dc084\x2df6378c1dc319.mount: Deactivated successfully. May 13 00:31:10.696093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e57d45305d2647d939e4fabcb8300dbf0633899ffd51e6a3f6422158cf5d4bfc-shm.mount: Deactivated successfully. 
May 13 00:31:10.791625 systemd-networkd[1366]: flannel.1: Link UP May 13 00:31:10.791631 systemd-networkd[1366]: flannel.1: Gained carrier May 13 00:31:11.609655 kubelet[2414]: E0513 00:31:11.609577 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:12.050075 systemd-networkd[1366]: flannel.1: Gained IPv6LL May 13 00:31:12.118214 update_engine[1418]: I20250513 00:31:12.118141 1418 update_attempter.cc:509] Updating boot flags... May 13 00:31:12.135861 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3065) May 13 00:31:12.163905 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3065) May 13 00:31:12.195875 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3065) May 13 00:31:12.892139 kubelet[2414]: E0513 00:31:12.891801 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:22.567728 kubelet[2414]: E0513 00:31:22.566755 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:22.567728 kubelet[2414]: E0513 00:31:22.567558 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:22.568929 containerd[1439]: time="2025-05-13T00:31:22.568238651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbptk,Uid:4020d366-67ac-4fb9-8c36-e05727ce41a8,Namespace:kube-system,Attempt:0,}" May 13 00:31:22.568929 containerd[1439]: time="2025-05-13T00:31:22.568249294Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hvrfx,Uid:82bb55cf-fcf8-4be5-9394-37d759a44e39,Namespace:kube-system,Attempt:0,}" May 13 00:31:22.707981 systemd-networkd[1366]: cni0: Link UP May 13 00:31:22.707989 systemd-networkd[1366]: cni0: Gained carrier May 13 00:31:22.713589 systemd-networkd[1366]: cni0: Lost carrier May 13 00:31:22.725021 systemd-networkd[1366]: veth8892fc52: Link UP May 13 00:31:22.725171 systemd-networkd[1366]: vethe81e3c48: Link UP May 13 00:31:22.729894 kernel: cni0: port 1(vethe81e3c48) entered blocking state May 13 00:31:22.729976 kernel: cni0: port 1(vethe81e3c48) entered disabled state May 13 00:31:22.729993 kernel: vethe81e3c48: entered allmulticast mode May 13 00:31:22.730009 kernel: vethe81e3c48: entered promiscuous mode May 13 00:31:22.730024 kernel: cni0: port 1(vethe81e3c48) entered blocking state May 13 00:31:22.730945 kernel: cni0: port 1(vethe81e3c48) entered forwarding state May 13 00:31:22.731888 kernel: cni0: port 2(veth8892fc52) entered blocking state May 13 00:31:22.731958 kernel: cni0: port 2(veth8892fc52) entered disabled state May 13 00:31:22.733101 kernel: veth8892fc52: entered allmulticast mode May 13 00:31:22.733158 kernel: veth8892fc52: entered promiscuous mode May 13 00:31:22.734324 kernel: cni0: port 2(veth8892fc52) entered blocking state May 13 00:31:22.734413 kernel: cni0: port 2(veth8892fc52) entered forwarding state May 13 00:31:22.736875 kernel: cni0: port 2(veth8892fc52) entered disabled state May 13 00:31:22.737861 kernel: cni0: port 1(vethe81e3c48) entered disabled state May 13 00:31:22.743972 kernel: cni0: port 1(vethe81e3c48) entered blocking state May 13 00:31:22.744032 kernel: cni0: port 1(vethe81e3c48) entered forwarding state May 13 00:31:22.743992 systemd-networkd[1366]: vethe81e3c48: Gained carrier May 13 00:31:22.744390 systemd-networkd[1366]: cni0: Gained carrier May 13 00:31:22.750595 kernel: cni0: port 2(veth8892fc52) entered blocking state May 13 00:31:22.750656 kernel: 
cni0: port 2(veth8892fc52) entered forwarding state May 13 00:31:22.747527 systemd-networkd[1366]: veth8892fc52: Gained carrier May 13 00:31:22.751798 containerd[1439]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} May 13 00:31:22.751798 containerd[1439]: delegateAdd: netconf sent to delegate plugin: May 13 00:31:22.753314 containerd[1439]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} May 13 00:31:22.753314 containerd[1439]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} May 13 00:31:22.753314 containerd[1439]: delegateAdd: netconf sent to delegate plugin: May 13 00:31:22.770251 containerd[1439]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T00:31:22.769036190Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:31:22.770251 containerd[1439]: time="2025-05-13T00:31:22.770058358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:31:22.770251 containerd[1439]: time="2025-05-13T00:31:22.770074523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:31:22.770251 containerd[1439]: time="2025-05-13T00:31:22.770170790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:31:22.782426 containerd[1439]: time="2025-05-13T00:31:22.782129762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:31:22.782426 containerd[1439]: time="2025-05-13T00:31:22.782253757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:31:22.782426 containerd[1439]: time="2025-05-13T00:31:22.782272843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:31:22.782426 containerd[1439]: time="2025-05-13T00:31:22.782397238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:31:22.793209 systemd[1]: Started cri-containerd-be2ed87e34aa6f48879471b1b5cde500030fc1d67b3c258a26d21ad9b8179e0d.scope - libcontainer container be2ed87e34aa6f48879471b1b5cde500030fc1d67b3c258a26d21ad9b8179e0d. 
May 13 00:31:22.801445 systemd[1]: Started cri-containerd-dc1b92186b70e9cb29ab7c3f0ae79717d77adb432318dd35c334965a6687cffe.scope - libcontainer container dc1b92186b70e9cb29ab7c3f0ae79717d77adb432318dd35c334965a6687cffe. May 13 00:31:22.807977 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:31:22.815331 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:31:22.828641 containerd[1439]: time="2025-05-13T00:31:22.827658040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hvrfx,Uid:82bb55cf-fcf8-4be5-9394-37d759a44e39,Namespace:kube-system,Attempt:0,} returns sandbox id \"be2ed87e34aa6f48879471b1b5cde500030fc1d67b3c258a26d21ad9b8179e0d\"" May 13 00:31:22.829612 kubelet[2414]: E0513 00:31:22.829427 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:22.831244 containerd[1439]: time="2025-05-13T00:31:22.831131819Z" level=info msg="CreateContainer within sandbox \"be2ed87e34aa6f48879471b1b5cde500030fc1d67b3c258a26d21ad9b8179e0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:31:22.835877 containerd[1439]: time="2025-05-13T00:31:22.834977184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbptk,Uid:4020d366-67ac-4fb9-8c36-e05727ce41a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc1b92186b70e9cb29ab7c3f0ae79717d77adb432318dd35c334965a6687cffe\"" May 13 00:31:22.836809 kubelet[2414]: E0513 00:31:22.836782 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:22.838711 containerd[1439]: time="2025-05-13T00:31:22.838678227Z" level=info 
msg="CreateContainer within sandbox \"dc1b92186b70e9cb29ab7c3f0ae79717d77adb432318dd35c334965a6687cffe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:31:22.843171 containerd[1439]: time="2025-05-13T00:31:22.843125841Z" level=info msg="CreateContainer within sandbox \"be2ed87e34aa6f48879471b1b5cde500030fc1d67b3c258a26d21ad9b8179e0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ca337ac2a8263fd99836454a3209531ef660b90f1b727315dcbd3ade65cae71\"" May 13 00:31:22.844238 containerd[1439]: time="2025-05-13T00:31:22.844203905Z" level=info msg="StartContainer for \"9ca337ac2a8263fd99836454a3209531ef660b90f1b727315dcbd3ade65cae71\"" May 13 00:31:22.851805 containerd[1439]: time="2025-05-13T00:31:22.851755155Z" level=info msg="CreateContainer within sandbox \"dc1b92186b70e9cb29ab7c3f0ae79717d77adb432318dd35c334965a6687cffe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c64ed983d5715964d32462ec143dece9b488ffc82e078d3fd4a40ff83f9d9f4\"" May 13 00:31:22.852465 containerd[1439]: time="2025-05-13T00:31:22.852430905Z" level=info msg="StartContainer for \"2c64ed983d5715964d32462ec143dece9b488ffc82e078d3fd4a40ff83f9d9f4\"" May 13 00:31:22.870048 systemd[1]: Started cri-containerd-9ca337ac2a8263fd99836454a3209531ef660b90f1b727315dcbd3ade65cae71.scope - libcontainer container 9ca337ac2a8263fd99836454a3209531ef660b90f1b727315dcbd3ade65cae71. May 13 00:31:22.874401 systemd[1]: Started cri-containerd-2c64ed983d5715964d32462ec143dece9b488ffc82e078d3fd4a40ff83f9d9f4.scope - libcontainer container 2c64ed983d5715964d32462ec143dece9b488ffc82e078d3fd4a40ff83f9d9f4. 
May 13 00:31:22.906925 containerd[1439]: time="2025-05-13T00:31:22.905769945Z" level=info msg="StartContainer for \"9ca337ac2a8263fd99836454a3209531ef660b90f1b727315dcbd3ade65cae71\" returns successfully" May 13 00:31:22.918512 containerd[1439]: time="2025-05-13T00:31:22.918455602Z" level=info msg="StartContainer for \"2c64ed983d5715964d32462ec143dece9b488ffc82e078d3fd4a40ff83f9d9f4\" returns successfully" May 13 00:31:23.629796 kubelet[2414]: E0513 00:31:23.629759 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:23.631798 kubelet[2414]: E0513 00:31:23.631748 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:23.682061 systemd[1]: run-containerd-runc-k8s.io-be2ed87e34aa6f48879471b1b5cde500030fc1d67b3c258a26d21ad9b8179e0d-runc.1HW3xc.mount: Deactivated successfully. 
May 13 00:31:23.805895 kubelet[2414]: I0513 00:31:23.805275 2414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bbptk" podStartSLOduration=18.805256719 podStartE2EDuration="18.805256719s" podCreationTimestamp="2025-05-13 00:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:31:23.805189621 +0000 UTC m=+25.325146452" watchObservedRunningTime="2025-05-13 00:31:23.805256719 +0000 UTC m=+25.325213550" May 13 00:31:23.806111 kubelet[2414]: I0513 00:31:23.806076 2414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hvrfx" podStartSLOduration=18.806057816 podStartE2EDuration="18.806057816s" podCreationTimestamp="2025-05-13 00:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:31:23.767357993 +0000 UTC m=+25.287314864" watchObservedRunningTime="2025-05-13 00:31:23.806057816 +0000 UTC m=+25.326014607" May 13 00:31:23.954046 systemd-networkd[1366]: cni0: Gained IPv6LL May 13 00:31:23.989436 systemd[1]: Started sshd@5-10.0.0.104:22-10.0.0.1:46408.service - OpenSSH per-connection server daemon (10.0.0.1:46408). May 13 00:31:24.028965 sshd[3351]: Accepted publickey for core from 10.0.0.1 port 46408 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:31:24.030689 sshd[3351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:31:24.034798 systemd-logind[1415]: New session 6 of user core. May 13 00:31:24.050056 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 13 00:31:24.082079 systemd-networkd[1366]: vethe81e3c48: Gained IPv6LL May 13 00:31:24.082776 systemd-networkd[1366]: veth8892fc52: Gained IPv6LL May 13 00:31:24.170581 sshd[3351]: pam_unix(sshd:session): session closed for user core May 13 00:31:24.174040 systemd[1]: sshd@5-10.0.0.104:22-10.0.0.1:46408.service: Deactivated successfully. May 13 00:31:24.175702 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:31:24.177229 systemd-logind[1415]: Session 6 logged out. Waiting for processes to exit. May 13 00:31:24.178092 systemd-logind[1415]: Removed session 6. May 13 00:31:24.633351 kubelet[2414]: E0513 00:31:24.633240 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:24.633351 kubelet[2414]: E0513 00:31:24.633248 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:25.635467 kubelet[2414]: E0513 00:31:25.635134 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:25.635467 kubelet[2414]: E0513 00:31:25.635266 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:29.194320 systemd[1]: Started sshd@6-10.0.0.104:22-10.0.0.1:46410.service - OpenSSH per-connection server daemon (10.0.0.1:46410). 
May 13 00:31:29.238779 sshd[3397]: Accepted publickey for core from 10.0.0.1 port 46410 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:31:29.240140 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:31:29.243744 systemd-logind[1415]: New session 7 of user core. May 13 00:31:29.251090 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 00:31:29.360649 sshd[3397]: pam_unix(sshd:session): session closed for user core May 13 00:31:29.364602 systemd[1]: sshd@6-10.0.0.104:22-10.0.0.1:46410.service: Deactivated successfully. May 13 00:31:29.366313 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:31:29.366889 systemd-logind[1415]: Session 7 logged out. Waiting for processes to exit. May 13 00:31:29.367646 systemd-logind[1415]: Removed session 7. May 13 00:31:34.372986 systemd[1]: Started sshd@7-10.0.0.104:22-10.0.0.1:55728.service - OpenSSH per-connection server daemon (10.0.0.1:55728). May 13 00:31:34.415302 sshd[3433]: Accepted publickey for core from 10.0.0.1 port 55728 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:31:34.416672 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:31:34.420621 systemd-logind[1415]: New session 8 of user core. May 13 00:31:34.445056 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:31:34.553405 sshd[3433]: pam_unix(sshd:session): session closed for user core May 13 00:31:34.563731 systemd[1]: sshd@7-10.0.0.104:22-10.0.0.1:55728.service: Deactivated successfully. May 13 00:31:34.566115 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:31:34.568300 systemd-logind[1415]: Session 8 logged out. Waiting for processes to exit. May 13 00:31:34.569656 systemd[1]: Started sshd@8-10.0.0.104:22-10.0.0.1:55730.service - OpenSSH per-connection server daemon (10.0.0.1:55730). May 13 00:31:34.571000 systemd-logind[1415]: Removed session 8. 
May 13 00:31:34.606716 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 55730 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:31:34.608121 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:31:34.611879 systemd-logind[1415]: New session 9 of user core. May 13 00:31:34.628007 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 00:31:34.759200 sshd[3448]: pam_unix(sshd:session): session closed for user core May 13 00:31:34.769135 systemd[1]: sshd@8-10.0.0.104:22-10.0.0.1:55730.service: Deactivated successfully. May 13 00:31:34.771492 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:31:34.774594 systemd-logind[1415]: Session 9 logged out. Waiting for processes to exit. May 13 00:31:34.784213 systemd[1]: Started sshd@9-10.0.0.104:22-10.0.0.1:55744.service - OpenSSH per-connection server daemon (10.0.0.1:55744). May 13 00:31:34.785696 systemd-logind[1415]: Removed session 9. May 13 00:31:34.816679 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 55744 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:31:34.817938 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:31:34.822207 systemd-logind[1415]: New session 10 of user core. May 13 00:31:34.828082 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:31:34.932079 sshd[3461]: pam_unix(sshd:session): session closed for user core May 13 00:31:34.935917 systemd[1]: sshd@9-10.0.0.104:22-10.0.0.1:55744.service: Deactivated successfully. May 13 00:31:34.937588 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:31:34.938231 systemd-logind[1415]: Session 10 logged out. Waiting for processes to exit. May 13 00:31:34.939065 systemd-logind[1415]: Removed session 10. 
May 13 00:31:39.943722 systemd[1]: Started sshd@10-10.0.0.104:22-10.0.0.1:55760.service - OpenSSH per-connection server daemon (10.0.0.1:55760).
May 13 00:31:39.979701 sshd[3499]: Accepted publickey for core from 10.0.0.1 port 55760 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:31:39.981000 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:31:39.985174 systemd-logind[1415]: New session 11 of user core.
May 13 00:31:39.994996 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 00:31:40.122233 sshd[3499]: pam_unix(sshd:session): session closed for user core
May 13 00:31:40.129151 systemd[1]: sshd@10-10.0.0.104:22-10.0.0.1:55760.service: Deactivated successfully.
May 13 00:31:40.131336 systemd[1]: session-11.scope: Deactivated successfully.
May 13 00:31:40.132650 systemd-logind[1415]: Session 11 logged out. Waiting for processes to exit.
May 13 00:31:40.149718 systemd[1]: Started sshd@11-10.0.0.104:22-10.0.0.1:55774.service - OpenSSH per-connection server daemon (10.0.0.1:55774).
May 13 00:31:40.151180 systemd-logind[1415]: Removed session 11.
May 13 00:31:40.181561 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 55774 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:31:40.183196 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:31:40.187593 systemd-logind[1415]: New session 12 of user core.
May 13 00:31:40.194008 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 00:31:40.421280 sshd[3513]: pam_unix(sshd:session): session closed for user core
May 13 00:31:40.431538 systemd[1]: sshd@11-10.0.0.104:22-10.0.0.1:55774.service: Deactivated successfully.
May 13 00:31:40.433201 systemd[1]: session-12.scope: Deactivated successfully.
May 13 00:31:40.435154 systemd-logind[1415]: Session 12 logged out. Waiting for processes to exit.
May 13 00:31:40.436971 systemd[1]: Started sshd@12-10.0.0.104:22-10.0.0.1:55790.service - OpenSSH per-connection server daemon (10.0.0.1:55790).
May 13 00:31:40.437932 systemd-logind[1415]: Removed session 12.
May 13 00:31:40.473661 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 55790 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:31:40.475098 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:31:40.479759 systemd-logind[1415]: New session 13 of user core.
May 13 00:31:40.487029 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 00:31:41.733011 sshd[3525]: pam_unix(sshd:session): session closed for user core
May 13 00:31:41.744740 systemd[1]: sshd@12-10.0.0.104:22-10.0.0.1:55790.service: Deactivated successfully.
May 13 00:31:41.748765 systemd[1]: session-13.scope: Deactivated successfully.
May 13 00:31:41.750597 systemd-logind[1415]: Session 13 logged out. Waiting for processes to exit.
May 13 00:31:41.751695 systemd-logind[1415]: Removed session 13.
May 13 00:31:41.757089 systemd[1]: Started sshd@13-10.0.0.104:22-10.0.0.1:55794.service - OpenSSH per-connection server daemon (10.0.0.1:55794).
May 13 00:31:41.793513 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 55794 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:31:41.794815 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:31:41.798450 systemd-logind[1415]: New session 14 of user core.
May 13 00:31:41.809154 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 00:31:42.021471 sshd[3565]: pam_unix(sshd:session): session closed for user core
May 13 00:31:42.030130 systemd[1]: sshd@13-10.0.0.104:22-10.0.0.1:55794.service: Deactivated successfully.
May 13 00:31:42.032110 systemd[1]: session-14.scope: Deactivated successfully.
May 13 00:31:42.034017 systemd-logind[1415]: Session 14 logged out. Waiting for processes to exit.
May 13 00:31:42.045157 systemd[1]: Started sshd@14-10.0.0.104:22-10.0.0.1:55806.service - OpenSSH per-connection server daemon (10.0.0.1:55806).
May 13 00:31:42.046294 systemd-logind[1415]: Removed session 14.
May 13 00:31:42.086459 sshd[3578]: Accepted publickey for core from 10.0.0.1 port 55806 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:31:42.088020 sshd[3578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:31:42.092016 systemd-logind[1415]: New session 15 of user core.
May 13 00:31:42.103051 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 00:31:42.214020 sshd[3578]: pam_unix(sshd:session): session closed for user core
May 13 00:31:42.217469 systemd[1]: sshd@14-10.0.0.104:22-10.0.0.1:55806.service: Deactivated successfully.
May 13 00:31:42.219089 systemd[1]: session-15.scope: Deactivated successfully.
May 13 00:31:42.220374 systemd-logind[1415]: Session 15 logged out. Waiting for processes to exit.
May 13 00:31:42.221316 systemd-logind[1415]: Removed session 15.
May 13 00:31:47.237089 systemd[1]: Started sshd@15-10.0.0.104:22-10.0.0.1:37162.service - OpenSSH per-connection server daemon (10.0.0.1:37162).
May 13 00:31:47.273816 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 37162 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:31:47.275434 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:31:47.280921 systemd-logind[1415]: New session 16 of user core.
May 13 00:31:47.291007 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 00:31:47.403764 sshd[3617]: pam_unix(sshd:session): session closed for user core
May 13 00:31:47.407009 systemd[1]: sshd@15-10.0.0.104:22-10.0.0.1:37162.service: Deactivated successfully.
May 13 00:31:47.408716 systemd[1]: session-16.scope: Deactivated successfully.
May 13 00:31:47.409918 systemd-logind[1415]: Session 16 logged out. Waiting for processes to exit.
May 13 00:31:47.410782 systemd-logind[1415]: Removed session 16.
May 13 00:31:52.415427 systemd[1]: Started sshd@16-10.0.0.104:22-10.0.0.1:37170.service - OpenSSH per-connection server daemon (10.0.0.1:37170).
May 13 00:31:52.454860 sshd[3653]: Accepted publickey for core from 10.0.0.1 port 37170 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:31:52.456070 sshd[3653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:31:52.460583 systemd-logind[1415]: New session 17 of user core.
May 13 00:31:52.470020 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 00:31:52.588184 sshd[3653]: pam_unix(sshd:session): session closed for user core
May 13 00:31:52.592212 systemd[1]: sshd@16-10.0.0.104:22-10.0.0.1:37170.service: Deactivated successfully.
May 13 00:31:52.595857 systemd[1]: session-17.scope: Deactivated successfully.
May 13 00:31:52.598637 systemd-logind[1415]: Session 17 logged out. Waiting for processes to exit.
May 13 00:31:52.599602 systemd-logind[1415]: Removed session 17.
May 13 00:31:57.600561 systemd[1]: Started sshd@17-10.0.0.104:22-10.0.0.1:51592.service - OpenSSH per-connection server daemon (10.0.0.1:51592).
May 13 00:31:57.648579 sshd[3689]: Accepted publickey for core from 10.0.0.1 port 51592 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:31:57.650015 sshd[3689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:31:57.653550 systemd-logind[1415]: New session 18 of user core.
May 13 00:31:57.664009 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 00:31:57.783937 sshd[3689]: pam_unix(sshd:session): session closed for user core
May 13 00:31:57.786719 systemd[1]: sshd@17-10.0.0.104:22-10.0.0.1:51592.service: Deactivated successfully.
May 13 00:31:57.788290 systemd[1]: session-18.scope: Deactivated successfully.
May 13 00:31:57.791276 systemd-logind[1415]: Session 18 logged out. Waiting for processes to exit.
May 13 00:31:57.792477 systemd-logind[1415]: Removed session 18.