May 9 00:06:48.914933 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 9 00:06:48.914955 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 8 22:24:27 -00 2025 May 9 00:06:48.914966 kernel: KASLR enabled May 9 00:06:48.914972 kernel: efi: EFI v2.7 by EDK II May 9 00:06:48.914977 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 May 9 00:06:48.914983 kernel: random: crng init done May 9 00:06:48.914990 kernel: secureboot: Secure boot disabled May 9 00:06:48.914996 kernel: ACPI: Early table checksum verification disabled May 9 00:06:48.915002 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) May 9 00:06:48.915010 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 9 00:06:48.915016 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:06:48.915022 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:06:48.915028 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:06:48.915034 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:06:48.915042 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:06:48.915049 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:06:48.915056 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:06:48.915063 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:06:48.915069 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:06:48.915075 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 9 00:06:48.915082 kernel: NUMA: Failed to initialise from firmware May 9 00:06:48.915088 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 9 00:06:48.915094 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] May 9 00:06:48.915101 kernel: Zone ranges: May 9 00:06:48.915107 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 9 00:06:48.915114 kernel: DMA32 empty May 9 00:06:48.915120 kernel: Normal empty May 9 00:06:48.915127 kernel: Movable zone start for each node May 9 00:06:48.915133 kernel: Early memory node ranges May 9 00:06:48.915139 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 9 00:06:48.915145 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 9 00:06:48.915152 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 9 00:06:48.915158 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 9 00:06:48.915165 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 9 00:06:48.915171 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 9 00:06:48.915177 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 9 00:06:48.915184 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 9 00:06:48.915192 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 9 00:06:48.915198 kernel: psci: probing for conduit method from ACPI. May 9 00:06:48.915204 kernel: psci: PSCIv1.1 detected in firmware. 
May 9 00:06:48.915214 kernel: psci: Using standard PSCI v0.2 function IDs May 9 00:06:48.915220 kernel: psci: Trusted OS migration not required May 9 00:06:48.915227 kernel: psci: SMC Calling Convention v1.1 May 9 00:06:48.915235 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 9 00:06:48.915242 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 9 00:06:48.915249 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 9 00:06:48.915256 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 9 00:06:48.915262 kernel: Detected PIPT I-cache on CPU0 May 9 00:06:48.915269 kernel: CPU features: detected: GIC system register CPU interface May 9 00:06:48.915276 kernel: CPU features: detected: Hardware dirty bit management May 9 00:06:48.915282 kernel: CPU features: detected: Spectre-v4 May 9 00:06:48.915289 kernel: CPU features: detected: Spectre-BHB May 9 00:06:48.915295 kernel: CPU features: kernel page table isolation forced ON by KASLR May 9 00:06:48.915303 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 9 00:06:48.915310 kernel: CPU features: detected: ARM erratum 1418040 May 9 00:06:48.915316 kernel: CPU features: detected: SSBS not fully self-synchronizing May 9 00:06:48.915323 kernel: alternatives: applying boot alternatives May 9 00:06:48.915330 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98 May 9 00:06:48.915337 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 00:06:48.915344 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 00:06:48.915351 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 00:06:48.915357 kernel: Fallback order for Node 0: 0 May 9 00:06:48.915364 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 9 00:06:48.915370 kernel: Policy zone: DMA May 9 00:06:48.915378 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 00:06:48.915385 kernel: software IO TLB: area num 4. May 9 00:06:48.915392 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 9 00:06:48.915399 kernel: Memory: 2386256K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186032K reserved, 0K cma-reserved) May 9 00:06:48.915406 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 9 00:06:48.915412 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 00:06:48.915420 kernel: rcu: RCU event tracing is enabled. May 9 00:06:48.915426 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 9 00:06:48.915433 kernel: Trampoline variant of Tasks RCU enabled. May 9 00:06:48.915440 kernel: Tracing variant of Tasks RCU enabled. May 9 00:06:48.915446 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 9 00:06:48.915453 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 9 00:06:48.915461 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 9 00:06:48.915467 kernel: GICv3: 256 SPIs implemented May 9 00:06:48.915474 kernel: GICv3: 0 Extended SPIs implemented May 9 00:06:48.915480 kernel: Root IRQ handler: gic_handle_irq May 9 00:06:48.915487 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 9 00:06:48.915493 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 9 00:06:48.915500 kernel: ITS [mem 0x08080000-0x0809ffff] May 9 00:06:48.915506 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 9 00:06:48.915513 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 9 00:06:48.915520 kernel: GICv3: using LPI property table @0x00000000400f0000 May 9 00:06:48.915526 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 9 00:06:48.915535 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 00:06:48.915541 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 00:06:48.915548 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 9 00:06:48.915554 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 9 00:06:48.915561 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 9 00:06:48.915568 kernel: arm-pv: using stolen time PV May 9 00:06:48.915575 kernel: Console: colour dummy device 80x25 May 9 00:06:48.915581 kernel: ACPI: Core revision 20230628 May 9 00:06:48.915588 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 9 00:06:48.915613 kernel: pid_max: default: 32768 minimum: 301 May 9 00:06:48.915624 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 00:06:48.915630 kernel: landlock: Up and running. May 9 00:06:48.915637 kernel: SELinux: Initializing. May 9 00:06:48.915644 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:06:48.915651 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:06:48.915658 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 9 00:06:48.915665 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:06:48.915672 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:06:48.915679 kernel: rcu: Hierarchical SRCU implementation. May 9 00:06:48.915687 kernel: rcu: Max phase no-delay instances is 400. May 9 00:06:48.915694 kernel: Platform MSI: ITS@0x8080000 domain created May 9 00:06:48.915701 kernel: PCI/MSI: ITS@0x8080000 domain created May 9 00:06:48.915707 kernel: Remapping and enabling EFI services. May 9 00:06:48.915714 kernel: smp: Bringing up secondary CPUs ... 
May 9 00:06:48.915721 kernel: Detected PIPT I-cache on CPU1 May 9 00:06:48.915728 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 9 00:06:48.915735 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 9 00:06:48.915742 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 00:06:48.915748 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 9 00:06:48.915756 kernel: Detected PIPT I-cache on CPU2 May 9 00:06:48.915763 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 9 00:06:48.915774 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 9 00:06:48.915783 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 00:06:48.915790 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 9 00:06:48.915797 kernel: Detected PIPT I-cache on CPU3 May 9 00:06:48.915804 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 9 00:06:48.915811 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 9 00:06:48.915818 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 00:06:48.915825 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 9 00:06:48.915834 kernel: smp: Brought up 1 node, 4 CPUs May 9 00:06:48.915841 kernel: SMP: Total of 4 processors activated. May 9 00:06:48.915848 kernel: CPU features: detected: 32-bit EL0 Support May 9 00:06:48.915855 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 9 00:06:48.915863 kernel: CPU features: detected: Common not Private translations May 9 00:06:48.915870 kernel: CPU features: detected: CRC32 instructions May 9 00:06:48.915883 kernel: CPU features: detected: Enhanced Virtualization Traps May 9 00:06:48.915892 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 9 00:06:48.915899 kernel: CPU features: detected: LSE atomic instructions May 9 00:06:48.915906 kernel: CPU features: detected: Privileged Access Never May 9 00:06:48.915913 kernel: CPU features: detected: RAS Extension Support May 9 00:06:48.915921 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 9 00:06:48.915928 kernel: CPU: All CPU(s) started at EL1 May 9 00:06:48.915935 kernel: alternatives: applying system-wide alternatives May 9 00:06:48.915942 kernel: devtmpfs: initialized May 9 00:06:48.915949 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 00:06:48.915958 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 9 00:06:48.915965 kernel: pinctrl core: initialized pinctrl subsystem May 9 00:06:48.915972 kernel: SMBIOS 3.0.0 present. 
May 9 00:06:48.915979 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 9 00:06:48.915987 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 00:06:48.915994 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 9 00:06:48.916001 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 9 00:06:48.916008 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 9 00:06:48.916015 kernel: audit: initializing netlink subsys (disabled) May 9 00:06:48.916023 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 May 9 00:06:48.916031 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 00:06:48.916038 kernel: cpuidle: using governor menu May 9 00:06:48.916045 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 9 00:06:48.916052 kernel: ASID allocator initialised with 32768 entries May 9 00:06:48.916059 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 00:06:48.916066 kernel: Serial: AMBA PL011 UART driver May 9 00:06:48.916073 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 9 00:06:48.916080 kernel: Modules: 0 pages in range for non-PLT usage May 9 00:06:48.916089 kernel: Modules: 508944 pages in range for PLT usage May 9 00:06:48.916096 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 00:06:48.916103 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 9 00:06:48.916110 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 9 00:06:48.916117 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 9 00:06:48.916124 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 00:06:48.916131 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 9 00:06:48.916138 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 9 00:06:48.916146 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 9 00:06:48.916154 kernel: ACPI: Added _OSI(Module Device) May 9 00:06:48.916161 kernel: ACPI: Added _OSI(Processor Device) May 9 00:06:48.916168 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 00:06:48.916175 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 00:06:48.916182 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 9 00:06:48.916189 kernel: ACPI: Interpreter enabled May 9 00:06:48.916196 kernel: ACPI: Using GIC for interrupt routing May 9 00:06:48.916203 kernel: ACPI: MCFG table detected, 1 entries May 9 00:06:48.916210 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 9 00:06:48.916219 kernel: printk: console [ttyAMA0] enabled May 9 00:06:48.916226 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 9 00:06:48.916361 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 9 00:06:48.916432 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 9 00:06:48.916497 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 9 00:06:48.916559 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 9 00:06:48.916714 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 9 00:06:48.916731 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 9 00:06:48.916739 kernel: PCI host bridge to bus 0000:00 May 9 
00:06:48.916811 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 9 00:06:48.916868 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 9 00:06:48.916941 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 9 00:06:48.917001 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 9 00:06:48.917080 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 9 00:06:48.917157 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 9 00:06:48.917222 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 9 00:06:48.917286 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 9 00:06:48.917350 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 9 00:06:48.917413 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 9 00:06:48.917475 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 9 00:06:48.917537 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 9 00:06:48.917606 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 9 00:06:48.917669 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 9 00:06:48.917726 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 9 00:06:48.917736 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 9 00:06:48.917743 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 9 00:06:48.917750 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 9 00:06:48.917757 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 9 00:06:48.917764 kernel: iommu: Default domain type: Translated May 9 00:06:48.917774 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 9 00:06:48.917781 kernel: efivars: Registered efivars operations May 9 00:06:48.917788 kernel: vgaarb: loaded May 9 00:06:48.917795 kernel: clocksource: Switched to clocksource arch_sys_counter May 9 00:06:48.917802 kernel: VFS: Disk quotas dquot_6.6.0 May 9 00:06:48.917809 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 00:06:48.917817 kernel: pnp: PnP ACPI init May 9 00:06:48.917904 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 9 00:06:48.917918 kernel: pnp: PnP ACPI: found 1 devices May 9 00:06:48.917925 kernel: NET: Registered PF_INET protocol family May 9 00:06:48.917932 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 9 00:06:48.917939 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 9 00:06:48.917946 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 9 00:06:48.917954 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 9 00:06:48.917961 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 9 00:06:48.917968 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 9 00:06:48.917975 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:06:48.917984 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:06:48.917991 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 9 00:06:48.917998 kernel: PCI: CLS 0 bytes, default 64 May 9 00:06:48.918006 kernel: kvm [1]: HYP mode not available May 9 00:06:48.918013 kernel: Initialise system trusted keyrings May 9 00:06:48.918020 
kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 9 00:06:48.918028 kernel: Key type asymmetric registered May 9 00:06:48.918035 kernel: Asymmetric key parser 'x509' registered May 9 00:06:48.918042 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 9 00:06:48.918051 kernel: io scheduler mq-deadline registered May 9 00:06:48.918058 kernel: io scheduler kyber registered May 9 00:06:48.918065 kernel: io scheduler bfq registered May 9 00:06:48.918073 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 9 00:06:48.918080 kernel: ACPI: button: Power Button [PWRB] May 9 00:06:48.918088 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 9 00:06:48.918161 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 9 00:06:48.918171 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 00:06:48.918179 kernel: thunder_xcv, ver 1.0 May 9 00:06:48.918188 kernel: thunder_bgx, ver 1.0 May 9 00:06:48.918195 kernel: nicpf, ver 1.0 May 9 00:06:48.918205 kernel: nicvf, ver 1.0 May 9 00:06:48.918288 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 9 00:06:48.918364 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T00:06:48 UTC (1746749208) May 9 00:06:48.918374 kernel: hid: raw HID events driver (C) Jiri Kosina May 9 00:06:48.918381 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 9 00:06:48.918389 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 9 00:06:48.918398 kernel: watchdog: Hard watchdog permanently disabled May 9 00:06:48.918405 kernel: NET: Registered PF_INET6 protocol family May 9 00:06:48.918412 kernel: Segment Routing with IPv6 May 9 00:06:48.918419 kernel: In-situ OAM (IOAM) with IPv6 May 9 00:06:48.918426 kernel: NET: Registered PF_PACKET protocol family May 9 00:06:48.918434 kernel: Key type dns_resolver registered May 9 00:06:48.918441 kernel: registered taskstats version 1 May 9 00:06:48.918448 kernel: Loading compiled-in X.509 certificates May 9 00:06:48.918455 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: c12e278d643ef0ddd9117a97de150d7afa727d1b' May 9 00:06:48.918463 kernel: Key type .fscrypt registered May 9 00:06:48.918470 kernel: Key type fscrypt-provisioning registered May 9 00:06:48.918477 kernel: ima: No TPM chip found, activating TPM-bypass! May 9 00:06:48.918485 kernel: ima: Allocated hash algorithm: sha1 May 9 00:06:48.918492 kernel: ima: No architecture policies found May 9 00:06:48.918499 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 9 00:06:48.918506 kernel: clk: Disabling unused clocks May 9 00:06:48.918513 kernel: Freeing unused kernel memory: 39744K May 9 00:06:48.918520 kernel: Run /init as init process May 9 00:06:48.918529 kernel: with arguments: May 9 00:06:48.918535 kernel: /init May 9 00:06:48.918543 kernel: with environment: May 9 00:06:48.918549 kernel: HOME=/ May 9 00:06:48.918556 kernel: TERM=linux May 9 00:06:48.918563 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 00:06:48.918572 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:06:48.918582 systemd[1]: Detected virtualization kvm. 
May 9 00:06:48.918591 systemd[1]: Detected architecture arm64. May 9 00:06:48.918607 systemd[1]: Running in initrd. May 9 00:06:48.918614 systemd[1]: No hostname configured, using default hostname. May 9 00:06:48.918622 systemd[1]: Hostname set to . May 9 00:06:48.918630 systemd[1]: Initializing machine ID from VM UUID. May 9 00:06:48.918637 systemd[1]: Queued start job for default target initrd.target. May 9 00:06:48.918645 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:06:48.918652 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:06:48.918662 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 9 00:06:48.918670 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 00:06:48.918678 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 00:06:48.918686 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 00:06:48.918695 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 00:06:48.918702 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 00:06:48.918712 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:06:48.918719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:06:48.918727 systemd[1]: Reached target paths.target - Path Units. May 9 00:06:48.918734 systemd[1]: Reached target slices.target - Slice Units. May 9 00:06:48.918742 systemd[1]: Reached target swap.target - Swaps. May 9 00:06:48.918750 systemd[1]: Reached target timers.target - Timer Units. May 9 00:06:48.918757 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:06:48.918765 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:06:48.918773 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 00:06:48.918782 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 9 00:06:48.918789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:06:48.918797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:06:48.918805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:06:48.918812 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:06:48.918820 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 9 00:06:48.918827 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:06:48.918835 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 00:06:48.918842 systemd[1]: Starting systemd-fsck-usr.service... May 9 00:06:48.918851 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:06:48.918859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:06:48.918867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:06:48.918880 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 00:06:48.918888 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 9 00:06:48.918896 systemd[1]: Finished systemd-fsck-usr.service. May 9 00:06:48.918906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:06:48.918931 systemd-journald[239]: Collecting audit messages is disabled. May 9 00:06:48.918952 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 00:06:48.918961 systemd-journald[239]: Journal started May 9 00:06:48.918979 systemd-journald[239]: Runtime Journal (/run/log/journal/93c29b3e34244a61b424bb1eff684252) is 5.9M, max 47.3M, 41.4M free. May 9 00:06:48.928677 kernel: Bridge firewalling registered May 9 00:06:48.907794 systemd-modules-load[240]: Inserted module 'overlay' May 9 00:06:48.930205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:06:48.921563 systemd-modules-load[240]: Inserted module 'br_netfilter' May 9 00:06:48.933247 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:06:48.933680 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:06:48.934951 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:06:48.949798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:06:48.954740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:06:48.956061 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:06:48.957650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 00:06:48.964313 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:06:48.967403 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:06:48.969591 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 00:06:48.970723 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:06:48.973548 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:06:48.976396 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:06:48.984882 dracut-cmdline[276]: dracut-dracut-053 May 9 00:06:48.991696 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98 May 9 00:06:49.015670 systemd-resolved[280]: Positive Trust Anchors: May 9 00:06:49.015738 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:06:49.015769 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:06:49.024503 systemd-resolved[280]: Defaulting to hostname 'linux'. May 9 00:06:49.025483 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:06:49.027214 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:06:49.053623 kernel: SCSI subsystem initialized May 9 00:06:49.057611 kernel: Loading iSCSI transport class v2.0-870. May 9 00:06:49.064623 kernel: iscsi: registered transport (tcp) May 9 00:06:49.076616 kernel: iscsi: registered transport (qla4xxx) May 9 00:06:49.076630 kernel: QLogic iSCSI HBA Driver May 9 00:06:49.118209 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 00:06:49.131813 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 00:06:49.150274 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 9 00:06:49.150350 kernel: device-mapper: uevent: version 1.0.3 May 9 00:06:49.151138 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 00:06:49.196642 kernel: raid6: neonx8 gen() 15760 MB/s May 9 00:06:49.213629 kernel: raid6: neonx4 gen() 15641 MB/s May 9 00:06:49.230611 kernel: raid6: neonx2 gen() 13227 MB/s May 9 00:06:49.247613 kernel: raid6: neonx1 gen() 10500 MB/s May 9 00:06:49.264622 kernel: raid6: int64x8 gen() 6209 MB/s May 9 00:06:49.281622 kernel: raid6: int64x4 gen() 7357 MB/s May 9 00:06:49.298613 kernel: raid6: int64x2 gen() 6130 MB/s May 9 00:06:49.315617 kernel: raid6: int64x1 gen() 5059 MB/s May 9 00:06:49.315641 kernel: raid6: using algorithm neonx8 gen() 15760 MB/s May 9 00:06:49.332616 kernel: raid6: .... xor() 11933 MB/s, rmw enabled May 9 00:06:49.332629 kernel: raid6: using neon recovery algorithm May 9 00:06:49.339735 kernel: xor: measuring software checksum speed May 9 00:06:49.339761 kernel: 8regs : 19802 MB/sec May 9 00:06:49.339778 kernel: 32regs : 19669 MB/sec May 9 00:06:49.340649 kernel: arm64_neon : 27034 MB/sec May 9 00:06:49.340665 kernel: xor: using function: arm64_neon (27034 MB/sec) May 9 00:06:49.389640 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 00:06:49.400026 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 00:06:49.412768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:06:49.423934 systemd-udevd[463]: Using default interface naming scheme 'v255'. May 9 00:06:49.426984 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:06:49.429272 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 9 00:06:49.443437 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation May 9 00:06:49.469654 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 9 00:06:49.480737 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:06:49.520630 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:06:49.525983 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 00:06:49.539686 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 00:06:49.541031 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:06:49.542626 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:06:49.544766 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:06:49.549736 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 00:06:49.560632 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 00:06:49.569320 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 9 00:06:49.569454 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 9 00:06:49.570859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:06:49.576023 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 00:06:49.576042 kernel: GPT:9289727 != 19775487 May 9 00:06:49.576051 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 00:06:49.570985 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:06:49.585128 kernel: GPT:9289727 != 19775487 May 9 00:06:49.585154 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 00:06:49.585164 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:06:49.585167 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:06:49.586312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:06:49.586447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:06:49.594494 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513) May 9 00:06:49.590137 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:06:49.598612 kernel: BTRFS: device fsid 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 devid 1 transid 43 /dev/vda3 scanned by (udev-worker) (527) May 9 00:06:49.603809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:06:49.614682 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 9 00:06:49.615743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:06:49.626048 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 9 00:06:49.630188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:06:49.633763 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 9 00:06:49.634625 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 9 00:06:49.648747 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 00:06:49.650320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:06:49.655586 disk-uuid[553]: Primary Header is updated. 
May 9 00:06:49.655586 disk-uuid[553]: Secondary Entries is updated. May 9 00:06:49.655586 disk-uuid[553]: Secondary Header is updated. May 9 00:06:49.658616 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:06:49.673462 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:06:50.668630 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:06:50.669080 disk-uuid[554]: The operation has completed successfully. May 9 00:06:50.691178 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 00:06:50.691273 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 00:06:50.706785 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 00:06:50.709585 sh[575]: Success May 9 00:06:50.718625 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 9 00:06:50.745350 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 00:06:50.759988 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 00:06:50.761410 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 9 00:06:50.771185 kernel: BTRFS info (device dm-0): first mount of filesystem 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 May 9 00:06:50.771218 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 9 00:06:50.771228 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 00:06:50.771963 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 00:06:50.772977 kernel: BTRFS info (device dm-0): using free space tree May 9 00:06:50.776191 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 00:06:50.777496 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 00:06:50.778252 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 00:06:50.780303 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 00:06:50.790219 kernel: BTRFS info (device vda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 9 00:06:50.790261 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 00:06:50.790271 kernel: BTRFS info (device vda6): using free space tree May 9 00:06:50.792626 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:06:50.799064 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 00:06:50.800652 kernel: BTRFS info (device vda6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 9 00:06:50.806399 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 00:06:50.814828 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 00:06:50.872039 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:06:50.880837 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:06:50.904186 systemd-networkd[759]: lo: Link UP May 9 00:06:50.904195 systemd-networkd[759]: lo: Gained carrier May 9 00:06:50.904944 systemd-networkd[759]: Enumeration completed May 9 00:06:50.905214 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:06:50.906378 systemd[1]: Reached target network.target - Network. 
May 9 00:06:50.906670 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:06:50.906674 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:06:50.911386 ignition[669]: Ignition 2.20.0 May 9 00:06:50.907484 systemd-networkd[759]: eth0: Link UP May 9 00:06:50.911392 ignition[669]: Stage: fetch-offline May 9 00:06:50.907487 systemd-networkd[759]: eth0: Gained carrier May 9 00:06:50.911428 ignition[669]: no configs at "/usr/lib/ignition/base.d" May 9 00:06:50.907494 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:06:50.911436 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:06:50.911583 ignition[669]: parsed url from cmdline: "" May 9 00:06:50.911586 ignition[669]: no config URL provided May 9 00:06:50.911591 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" May 9 00:06:50.911612 ignition[669]: no config at "/usr/lib/ignition/user.ign" May 9 00:06:50.911638 ignition[669]: op(1): [started] loading QEMU firmware config module May 9 00:06:50.920646 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:06:50.911642 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" May 9 00:06:50.919162 ignition[669]: op(1): [finished] loading QEMU firmware config module May 9 00:06:50.942186 ignition[669]: parsing config with SHA512: 9f5e7e8e381c09baaa35f50bbd1910b01d0b6d62041bf98792744e614ab6fe64507854948dd42c72883ca95a0cd5c27946c751e1fa64021b779bda36704a96d7 May 9 00:06:50.946881 unknown[669]: fetched base config from "system" May 9 00:06:50.946894 unknown[669]: fetched user config from "qemu" May 9 00:06:50.947393 ignition[669]: fetch-offline: fetch-offline passed May 9 00:06:50.947480 ignition[669]: Ignition finished successfully May 9 00:06:50.948858 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:06:50.950550 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 9 00:06:50.961739 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 9 00:06:50.972271 ignition[771]: Ignition 2.20.0 May 9 00:06:50.972281 ignition[771]: Stage: kargs May 9 00:06:50.972429 ignition[771]: no configs at "/usr/lib/ignition/base.d" May 9 00:06:50.972438 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:06:50.973292 ignition[771]: kargs: kargs passed May 9 00:06:50.973334 ignition[771]: Ignition finished successfully May 9 00:06:50.975862 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 00:06:50.986769 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 00:06:50.997836 ignition[780]: Ignition 2.20.0 May 9 00:06:50.997846 ignition[780]: Stage: disks May 9 00:06:50.998020 ignition[780]: no configs at "/usr/lib/ignition/base.d" May 9 00:06:50.998030 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:06:50.998901 ignition[780]: disks: disks passed May 9 00:06:50.998951 ignition[780]: Ignition finished successfully May 9 00:06:51.000547 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 00:06:51.001809 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
May 9 00:06:51.002622 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 00:06:51.004054 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:06:51.005364 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:06:51.006870 systemd[1]: Reached target basic.target - Basic System. May 9 00:06:51.021740 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 00:06:51.031358 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 00:06:51.034491 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 00:06:51.036262 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 00:06:51.081617 kernel: EXT4-fs (vda9): mounted filesystem ad4e3afa-b242-4ca7-a808-1f37a4d41793 r/w with ordered data mode. Quota mode: none. May 9 00:06:51.082318 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 00:06:51.083382 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 00:06:51.094670 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:06:51.096151 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 00:06:51.097324 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 00:06:51.097363 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 00:06:51.103690 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799) May 9 00:06:51.103712 kernel: BTRFS info (device vda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 9 00:06:51.097385 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:06:51.107159 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 00:06:51.107179 kernel: BTRFS info (device vda6): using free space tree May 9 00:06:51.101588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 00:06:51.106916 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 00:06:51.111646 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:06:51.112784 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:06:51.150347 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory May 9 00:06:51.154522 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory May 9 00:06:51.158623 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory May 9 00:06:51.162544 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory May 9 00:06:51.229792 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 00:06:51.239719 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 00:06:51.241039 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 00:06:51.246626 kernel: BTRFS info (device vda6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 9 00:06:51.260530 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 9 00:06:51.263106 ignition[914]: INFO : Ignition 2.20.0 May 9 00:06:51.263106 ignition[914]: INFO : Stage: mount May 9 00:06:51.264286 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:06:51.264286 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:06:51.264286 ignition[914]: INFO : mount: mount passed May 9 00:06:51.264286 ignition[914]: INFO : Ignition finished successfully May 9 00:06:51.265288 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 00:06:51.275742 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 00:06:51.770435 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 00:06:51.788765 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:06:51.794828 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926) May 9 00:06:51.794859 kernel: BTRFS info (device vda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 9 00:06:51.794874 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 00:06:51.796038 kernel: BTRFS info (device vda6): using free space tree May 9 00:06:51.799613 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:06:51.800797 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:06:51.828982 ignition[943]: INFO : Ignition 2.20.0 May 9 00:06:51.828982 ignition[943]: INFO : Stage: files May 9 00:06:51.830403 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:06:51.830403 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:06:51.830403 ignition[943]: DEBUG : files: compiled without relabeling support, skipping May 9 00:06:51.840428 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 00:06:51.840428 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 00:06:51.842663 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 00:06:51.842663 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 00:06:51.844680 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 00:06:51.844680 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 9 00:06:51.844680 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 9 00:06:51.843016 unknown[943]: wrote ssh authorized keys file for user: core May 9 00:06:51.889745 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 9 00:06:52.080689 systemd-networkd[759]: eth0: Gained IPv6LL May 9 00:06:52.085461 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 9 00:06:52.086961 
ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 9 00:06:52.086961 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 9 00:06:52.382214 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 9 00:06:52.715233 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 9 00:06:52.715233 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 9 00:06:52.717991 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 00:06:52.717991 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 00:06:52.717991 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 9 00:06:52.717991 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 9 00:06:52.717991 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:06:52.717991 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:06:52.717991 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 9 00:06:52.717991 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 9 00:06:52.747203 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 9 
00:06:52.751319 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:06:52.753635 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 9 00:06:52.753635 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 9 00:06:52.753635 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 9 00:06:52.753635 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 00:06:52.753635 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 00:06:52.753635 ignition[943]: INFO : files: files passed May 9 00:06:52.753635 ignition[943]: INFO : Ignition finished successfully May 9 00:06:52.754143 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 00:06:52.765860 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 00:06:52.768919 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 00:06:52.771185 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 00:06:52.771295 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 9 00:06:52.777430 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory May 9 00:06:52.779634 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:06:52.781133 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 00:06:52.782699 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:06:52.785659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:06:52.786840 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 00:06:52.800748 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 00:06:52.819942 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 00:06:52.820052 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 00:06:52.821907 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 00:06:52.825395 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 00:06:52.826999 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 00:06:52.827775 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 00:06:52.844862 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:06:52.855755 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 00:06:52.864689 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 00:06:52.865645 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:06:52.867526 systemd[1]: Stopped target timers.target - Timer Units. May 9 00:06:52.869117 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 9 00:06:52.869243 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:06:52.871456 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 00:06:52.872332 systemd[1]: Stopped target basic.target - Basic System. May 9 00:06:52.873990 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 00:06:52.875569 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:06:52.877132 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 00:06:52.878880 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 00:06:52.880478 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:06:52.882261 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 00:06:52.883761 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 00:06:52.885461 systemd[1]: Stopped target swap.target - Swaps. May 9 00:06:52.886866 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 00:06:52.886996 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 00:06:52.889138 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 00:06:52.890758 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:06:52.892359 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 00:06:52.896630 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:06:52.897670 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 00:06:52.897792 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 00:06:52.900399 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 00:06:52.900509 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:06:52.902228 systemd[1]: Stopped target paths.target - Path Units. May 9 00:06:52.903580 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 00:06:52.907655 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:06:52.908668 systemd[1]: Stopped target slices.target - Slice Units. May 9 00:06:52.910655 systemd[1]: Stopped target sockets.target - Socket Units. May 9 00:06:52.912167 systemd[1]: iscsid.socket: Deactivated successfully. May 9 00:06:52.912259 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:06:52.913726 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 00:06:52.913803 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:06:52.915190 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 00:06:52.915298 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:06:52.916854 systemd[1]: ignition-files.service: Deactivated successfully. May 9 00:06:52.916966 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 00:06:52.925820 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 00:06:52.926535 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 00:06:52.926676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:06:52.928349 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
May 9 00:06:52.929516 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 00:06:52.929653 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:06:52.931283 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 00:06:52.931369 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:06:52.936314 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 00:06:52.944136 ignition[998]: INFO : Ignition 2.20.0 May 9 00:06:52.944136 ignition[998]: INFO : Stage: umount May 9 00:06:52.944136 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:06:52.944136 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:06:52.944136 ignition[998]: INFO : umount: umount passed May 9 00:06:52.944136 ignition[998]: INFO : Ignition finished successfully May 9 00:06:52.937680 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 00:06:52.941062 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 00:06:52.941157 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 00:06:52.942962 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 00:06:52.944062 systemd[1]: Stopped target network.target - Network. May 9 00:06:52.944869 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 00:06:52.944941 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 00:06:52.946360 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 00:06:52.946406 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 00:06:52.947560 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 00:06:52.947622 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 00:06:52.949084 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 00:06:52.949122 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 00:06:52.951040 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 00:06:52.954396 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 00:06:52.963549 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 00:06:52.963681 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 00:06:52.965398 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 00:06:52.965461 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:06:52.966666 systemd-networkd[759]: eth0: DHCPv6 lease lost May 9 00:06:52.968136 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 00:06:52.968251 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 00:06:52.969355 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 00:06:52.969388 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 00:06:52.976702 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 00:06:52.977769 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 00:06:52.977825 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:06:52.979274 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:06:52.979317 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 9 00:06:52.980683 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 00:06:52.980720 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 00:06:52.982140 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:06:52.990128 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 00:06:52.990274 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:06:52.992029 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 00:06:52.992067 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 00:06:52.993376 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 00:06:52.993405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:06:52.994822 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 00:06:52.994865 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 00:06:52.996945 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 00:06:52.996991 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 00:06:52.999149 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:06:52.999197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:06:53.008762 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 00:06:53.009522 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 00:06:53.009570 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:06:53.011197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:06:53.011238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:06:53.012395 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 00:06:53.012494 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 00:06:53.014033 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 00:06:53.014120 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 00:06:53.016347 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 00:06:53.016430 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 00:06:53.018785 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 00:06:53.020047 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 00:06:53.020102 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 00:06:53.022140 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 00:06:53.031567 systemd[1]: Switching root. May 9 00:06:53.053460 systemd-journald[239]: Journal stopped May 9 00:06:53.737450 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
May 9 00:06:53.737507 kernel: SELinux: policy capability network_peer_controls=1 May 9 00:06:53.737519 kernel: SELinux: policy capability open_perms=1 May 9 00:06:53.737532 kernel: SELinux: policy capability extended_socket_class=1 May 9 00:06:53.737542 kernel: SELinux: policy capability always_check_network=0 May 9 00:06:53.737552 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 00:06:53.737562 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 00:06:53.737573 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 00:06:53.737583 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 00:06:53.737592 kernel: audit: type=1403 audit(1746749213.200:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 00:06:53.737619 systemd[1]: Successfully loaded SELinux policy in 36.847ms. May 9 00:06:53.737641 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.598ms. May 9 00:06:53.737652 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:06:53.737663 systemd[1]: Detected virtualization kvm. May 9 00:06:53.737674 systemd[1]: Detected architecture arm64. May 9 00:06:53.737685 systemd[1]: Detected first boot. May 9 00:06:53.737697 systemd[1]: Initializing machine ID from VM UUID. May 9 00:06:53.737707 zram_generator::config[1043]: No configuration found. May 9 00:06:53.737718 systemd[1]: Populated /etc with preset unit settings. May 9 00:06:53.737732 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 00:06:53.737743 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 00:06:53.737753 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 00:06:53.737764 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 00:06:53.737775 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 00:06:53.737787 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 00:06:53.737797 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 00:06:53.737807 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 00:06:53.737818 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 00:06:53.737828 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 00:06:53.737838 systemd[1]: Created slice user.slice - User and Session Slice. May 9 00:06:53.737848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:06:53.737858 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:06:53.737869 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 00:06:53.737889 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 00:06:53.737902 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 00:06:53.737914 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
May 9 00:06:53.737925 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 9 00:06:53.737935 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:06:53.737945 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 00:06:53.737955 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 00:06:53.737965 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 00:06:53.737978 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 00:06:53.737988 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:06:53.737998 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:06:53.738008 systemd[1]: Reached target slices.target - Slice Units. May 9 00:06:53.738019 systemd[1]: Reached target swap.target - Swaps. May 9 00:06:53.738029 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 00:06:53.738039 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 00:06:53.738049 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:06:53.738059 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:06:53.738070 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:06:53.738081 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 00:06:53.738092 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 00:06:53.738102 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 00:06:53.738112 systemd[1]: Mounting media.mount - External Media Directory... May 9 00:06:53.738122 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 00:06:53.738136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 00:06:53.738147 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 00:06:53.738157 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 00:06:53.738169 systemd[1]: Reached target machines.target - Containers. May 9 00:06:53.738179 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 00:06:53.738190 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:06:53.738200 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:06:53.738210 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 00:06:53.738220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:06:53.738231 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:06:53.738242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:06:53.738253 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 00:06:53.738263 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:06:53.738275 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 9 00:06:53.738286 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 00:06:53.738296 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 00:06:53.738306 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 00:06:53.738316 systemd[1]: Stopped systemd-fsck-usr.service. May 9 00:06:53.738326 kernel: fuse: init (API version 7.39) May 9 00:06:53.738335 kernel: loop: module loaded May 9 00:06:53.738346 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:06:53.738357 kernel: ACPI: bus type drm_connector registered May 9 00:06:53.738367 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:06:53.738377 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 00:06:53.738400 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 00:06:53.738427 systemd-journald[1110]: Collecting audit messages is disabled. May 9 00:06:53.738450 systemd-journald[1110]: Journal started May 9 00:06:53.738472 systemd-journald[1110]: Runtime Journal (/run/log/journal/93c29b3e34244a61b424bb1eff684252) is 5.9M, max 47.3M, 41.4M free. May 9 00:06:53.553116 systemd[1]: Queued start job for default target multi-user.target. May 9 00:06:53.568180 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 00:06:53.568554 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 00:06:53.740646 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:06:53.740680 systemd[1]: verity-setup.service: Deactivated successfully. May 9 00:06:53.741869 systemd[1]: Stopped verity-setup.service. May 9 00:06:53.745612 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:06:53.746122 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 00:06:53.747027 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 00:06:53.747930 systemd[1]: Mounted media.mount - External Media Directory. May 9 00:06:53.748911 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 00:06:53.749851 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 00:06:53.750847 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 00:06:53.753639 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 00:06:53.754766 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:06:53.756019 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 00:06:53.756164 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 00:06:53.757357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:06:53.757498 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:06:53.758725 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:06:53.758871 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:06:53.761043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:06:53.761207 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:06:53.762438 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 00:06:53.762574 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
May 9 00:06:53.763642 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:06:53.763814 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:06:53.764900 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:06:53.766164 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 00:06:53.767394 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 00:06:53.779367 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 00:06:53.790734 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 00:06:53.792679 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 00:06:53.793510 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 00:06:53.793550 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:06:53.795348 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 00:06:53.797341 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 00:06:53.799652 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 00:06:53.800502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:06:53.801816 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 00:06:53.804520 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 00:06:53.805526 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:06:53.807425 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 00:06:53.808389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:06:53.811532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:06:53.813239 systemd-journald[1110]: Time spent on flushing to /var/log/journal/93c29b3e34244a61b424bb1eff684252 is 20.481ms for 854 entries. May 9 00:06:53.813239 systemd-journald[1110]: System Journal (/var/log/journal/93c29b3e34244a61b424bb1eff684252) is 8.0M, max 195.6M, 187.6M free. May 9 00:06:53.845031 systemd-journald[1110]: Received client request to flush runtime journal. May 9 00:06:53.845071 kernel: loop0: detected capacity change from 0 to 116808 May 9 00:06:53.816829 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 00:06:53.819924 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 00:06:53.825438 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:06:53.826861 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 00:06:53.828616 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 00:06:53.830034 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 00:06:53.831273 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
May 9 00:06:53.836120 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 00:06:53.838812 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 00:06:53.841728 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 00:06:53.847024 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 00:06:53.859889 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 9 00:06:53.869634 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 00:06:53.874162 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:06:53.877721 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 00:06:53.878336 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 00:06:53.883464 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 00:06:53.892830 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:06:53.895646 kernel: loop1: detected capacity change from 0 to 201592 May 9 00:06:53.913532 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 9 00:06:53.913550 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 9 00:06:53.919819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:06:53.928870 kernel: loop2: detected capacity change from 0 to 113536 May 9 00:06:53.958627 kernel: loop3: detected capacity change from 0 to 116808 May 9 00:06:53.964616 kernel: loop4: detected capacity change from 0 to 201592 May 9 00:06:53.970640 kernel: loop5: detected capacity change from 0 to 113536 May 9 00:06:53.973656 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 00:06:53.974076 (sd-merge)[1181]: Merged extensions into '/usr'. May 9 00:06:53.979902 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... May 9 00:06:53.979918 systemd[1]: Reloading... May 9 00:06:54.049633 zram_generator::config[1207]: No configuration found. May 9 00:06:54.086123 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 00:06:54.146485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:06:54.183525 systemd[1]: Reloading finished in 203 ms. May 9 00:06:54.224651 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 00:06:54.226544 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 00:06:54.247814 systemd[1]: Starting ensure-sysext.service... May 9 00:06:54.257700 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 00:06:54.266787 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... May 9 00:06:54.266801 systemd[1]: Reloading... May 9 00:06:54.275295 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
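The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr; the kubernetes image is the one Ignition downloaded to /opt/extensions and linked from /etc/extensions earlier in the log. The following is a small, inspection-only Python sketch, assuming the Flatcar layout seen in this log, that lists the images systemd-sysext would pick up from /etc/extensions; it is not part of the boot path itself.

```python
from pathlib import Path

# List sysext images visible in /etc/extensions (the directory the Ignition
# link above populated) and show where each symlink points. Inspection only;
# systemd-sysext performs the actual merge into /usr.
ext_dir = Path("/etc/extensions")
if ext_dir.is_dir():
    for entry in sorted(ext_dir.glob("*.raw")):
        target = entry.resolve() if entry.is_symlink() else entry
        print(f"{entry.name} -> {target}")
else:
    print("no /etc/extensions directory on this host")
```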
May 9 00:06:54.275545 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 00:06:54.276260 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 00:06:54.277703 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. May 9 00:06:54.277762 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. May 9 00:06:54.281973 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:06:54.281983 systemd-tmpfiles[1242]: Skipping /boot May 9 00:06:54.289987 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:06:54.290001 systemd-tmpfiles[1242]: Skipping /boot May 9 00:06:54.309704 zram_generator::config[1266]: No configuration found. May 9 00:06:54.400824 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:06:54.436607 systemd[1]: Reloading finished in 169 ms. May 9 00:06:54.456068 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 00:06:54.472071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:06:54.479484 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 00:06:54.481560 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 00:06:54.483733 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 00:06:54.488862 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:06:54.494937 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:06:54.499935 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 00:06:54.503241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:06:54.504291 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:06:54.509550 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:06:54.511649 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:06:54.512714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:06:54.516177 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 00:06:54.519890 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 00:06:54.521698 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:06:54.521971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:06:54.524107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:06:54.524363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:06:54.525876 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:06:54.526095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:06:54.529472 systemd-udevd[1313]: Using default interface naming scheme 'v255'. 
May 9 00:06:54.539759 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 00:06:54.541481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:06:54.555045 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:06:54.558506 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:06:54.562907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:06:54.568841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:06:54.569675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:06:54.572751 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 00:06:54.574133 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:06:54.576237 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 00:06:54.577053 augenrules[1361]: No rules May 9 00:06:54.578020 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:06:54.578177 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 00:06:54.581520 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 00:06:54.583014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:06:54.583132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:06:54.585310 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:06:54.585428 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:06:54.586703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:06:54.586824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:06:54.590063 systemd[1]: Finished ensure-sysext.service. May 9 00:06:54.591686 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:06:54.591812 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:06:54.598695 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 00:06:54.605162 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 9 00:06:54.616895 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:06:54.617660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:06:54.617724 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:06:54.619968 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 00:06:54.623676 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 00:06:54.632694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1339) May 9 00:06:54.645778 systemd-resolved[1308]: Positive Trust Anchors: May 9 00:06:54.650285 systemd-resolved[1308]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:06:54.650324 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:06:54.656330 systemd-resolved[1308]: Defaulting to hostname 'linux'. May 9 00:06:54.663276 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:06:54.664571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:06:54.671694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:06:54.678780 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 00:06:54.711459 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 00:06:54.712908 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 00:06:54.714709 systemd[1]: Reached target time-set.target - System Time Set. May 9 00:06:54.716599 systemd-networkd[1381]: lo: Link UP May 9 00:06:54.716607 systemd-networkd[1381]: lo: Gained carrier May 9 00:06:54.719187 systemd-networkd[1381]: Enumeration completed May 9 00:06:54.719353 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:06:54.720421 systemd[1]: Reached target network.target - Network. May 9 00:06:54.723735 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:06:54.723743 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:06:54.727779 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 00:06:54.728413 systemd-networkd[1381]: eth0: Link UP May 9 00:06:54.728416 systemd-networkd[1381]: eth0: Gained carrier May 9 00:06:54.728430 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:06:54.756670 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:06:54.758351 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. May 9 00:06:54.758932 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 00:06:54.758974 systemd-timesyncd[1382]: Initial clock synchronization to Fri 2025-05-09 00:06:55.045467 UTC. May 9 00:06:54.764957 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:06:54.775022 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 00:06:54.778137 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 00:06:54.797632 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:06:54.800568 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 9 00:06:54.840062 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 00:06:54.841214 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:06:54.842055 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:06:54.842897 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 00:06:54.843810 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 00:06:54.844854 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 00:06:54.845683 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 00:06:54.846548 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 00:06:54.847468 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 00:06:54.847503 systemd[1]: Reached target paths.target - Path Units. May 9 00:06:54.848222 systemd[1]: Reached target timers.target - Timer Units. May 9 00:06:54.849777 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 00:06:54.852094 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 00:06:54.868447 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 00:06:54.870621 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 00:06:54.872174 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 00:06:54.873324 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:06:54.874319 systemd[1]: Reached target basic.target - Basic System. May 9 00:06:54.875263 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 00:06:54.875297 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 00:06:54.876194 systemd[1]: Starting containerd.service - containerd container runtime... May 9 00:06:54.880627 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:06:54.878115 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 00:06:54.882719 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 00:06:54.884941 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 00:06:54.886012 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 00:06:54.886894 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 00:06:54.891124 jq[1411]: false May 9 00:06:54.891725 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 00:06:54.893761 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 00:06:54.896811 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 00:06:54.905730 extend-filesystems[1412]: Found loop3 May 9 00:06:54.905730 extend-filesystems[1412]: Found loop4 May 9 00:06:54.905730 extend-filesystems[1412]: Found loop5 May 9 00:06:54.905730 extend-filesystems[1412]: Found vda May 9 00:06:54.906180 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 9 00:06:54.913469 extend-filesystems[1412]: Found vda1 May 9 00:06:54.913469 extend-filesystems[1412]: Found vda2 May 9 00:06:54.913469 extend-filesystems[1412]: Found vda3 May 9 00:06:54.913469 extend-filesystems[1412]: Found usr May 9 00:06:54.913469 extend-filesystems[1412]: Found vda4 May 9 00:06:54.913469 extend-filesystems[1412]: Found vda6 May 9 00:06:54.913469 extend-filesystems[1412]: Found vda7 May 9 00:06:54.913469 extend-filesystems[1412]: Found vda9 May 9 00:06:54.913469 extend-filesystems[1412]: Checking size of /dev/vda9 May 9 00:06:54.908308 dbus-daemon[1410]: [system] SELinux support is enabled May 9 00:06:54.907793 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 00:06:54.908173 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 00:06:54.910492 systemd[1]: Starting update-engine.service - Update Engine... May 9 00:06:54.914422 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 00:06:54.916368 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 00:06:54.922164 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 00:06:54.925785 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 00:06:54.928744 jq[1429]: true May 9 00:06:54.926003 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 00:06:54.926255 systemd[1]: motdgen.service: Deactivated successfully. May 9 00:06:54.928760 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 00:06:54.934145 extend-filesystems[1412]: Resized partition /dev/vda9 May 9 00:06:54.942642 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1369) May 9 00:06:54.943043 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 00:06:54.943220 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 00:06:54.948049 update_engine[1426]: I20250509 00:06:54.947898 1426 main.cc:92] Flatcar Update Engine starting May 9 00:06:54.952500 extend-filesystems[1435]: resize2fs 1.47.1 (20-May-2024) May 9 00:06:54.957614 update_engine[1426]: I20250509 00:06:54.957144 1426 update_check_scheduler.cc:74] Next update check in 9m4s May 9 00:06:54.960637 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 00:06:54.963591 jq[1437]: true May 9 00:06:54.963242 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 00:06:54.965653 systemd[1]: Started update-engine.service - Update Engine. May 9 00:06:54.967951 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 00:06:54.967978 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 00:06:54.969402 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 9 00:06:54.969430 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 00:06:54.980935 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 00:06:54.983613 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 00:06:54.986273 tar[1434]: linux-arm64/LICENSE May 9 00:06:55.000442 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) May 9 00:06:55.000868 extend-filesystems[1435]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 00:06:55.000868 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 00:06:55.000868 extend-filesystems[1435]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 00:06:55.017425 extend-filesystems[1412]: Resized filesystem in /dev/vda9 May 9 00:06:55.021176 tar[1434]: linux-arm64/helm May 9 00:06:55.002956 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 00:06:55.004697 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 00:06:55.006793 systemd-logind[1423]: New seat seat0. May 9 00:06:55.013513 systemd[1]: Started systemd-logind.service - User Login Management. May 9 00:06:55.040588 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 00:06:55.063209 bash[1476]: Updated "/home/core/.ssh/authorized_keys" May 9 00:06:55.066675 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 00:06:55.068852 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 00:06:55.159070 containerd[1445]: time="2025-05-09T00:06:55.158983028Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 9 00:06:55.189302 containerd[1445]: time="2025-05-09T00:06:55.189152011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 00:06:55.190950 containerd[1445]: time="2025-05-09T00:06:55.190468509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 00:06:55.190950 containerd[1445]: time="2025-05-09T00:06:55.190500827Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 00:06:55.190950 containerd[1445]: time="2025-05-09T00:06:55.190517483Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 00:06:55.190950 containerd[1445]: time="2025-05-09T00:06:55.190677581Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 00:06:55.190950 containerd[1445]: time="2025-05-09T00:06:55.190695065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 00:06:55.190950 containerd[1445]: time="2025-05-09T00:06:55.190752326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:06:55.190950 containerd[1445]: time="2025-05-09T00:06:55.190764052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 9 00:06:55.191117 containerd[1445]: time="2025-05-09T00:06:55.191055990Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:06:55.191117 containerd[1445]: time="2025-05-09T00:06:55.191075132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 00:06:55.191117 containerd[1445]: time="2025-05-09T00:06:55.191088432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:06:55.191117 containerd[1445]: time="2025-05-09T00:06:55.191097506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 00:06:55.191213 containerd[1445]: time="2025-05-09T00:06:55.191192636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 00:06:55.191406 containerd[1445]: time="2025-05-09T00:06:55.191386751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 00:06:55.191567 containerd[1445]: time="2025-05-09T00:06:55.191485237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:06:55.191596 containerd[1445]: time="2025-05-09T00:06:55.191569222Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 00:06:55.191684 containerd[1445]: time="2025-05-09T00:06:55.191666922Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 00:06:55.191771 containerd[1445]: time="2025-05-09T00:06:55.191714570Z" level=info msg="metadata content store policy set" policy=shared May 9 00:06:55.196148 containerd[1445]: time="2025-05-09T00:06:55.196081709Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 00:06:55.196148 containerd[1445]: time="2025-05-09T00:06:55.196129523Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 00:06:55.196148 containerd[1445]: time="2025-05-09T00:06:55.196143859Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 00:06:55.196257 containerd[1445]: time="2025-05-09T00:06:55.196159521Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 00:06:55.196257 containerd[1445]: time="2025-05-09T00:06:55.196180983Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 00:06:55.196355 containerd[1445]: time="2025-05-09T00:06:55.196318085Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 00:06:55.196571 containerd[1445]: time="2025-05-09T00:06:55.196541244Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 9 00:06:55.196665 containerd[1445]: time="2025-05-09T00:06:55.196644372Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 00:06:55.196698 containerd[1445]: time="2025-05-09T00:06:55.196664674Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 00:06:55.196698 containerd[1445]: time="2025-05-09T00:06:55.196690487Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 00:06:55.196732 containerd[1445]: time="2025-05-09T00:06:55.196704035Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 00:06:55.196732 containerd[1445]: time="2025-05-09T00:06:55.196717004Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 00:06:55.196732 containerd[1445]: time="2025-05-09T00:06:55.196729434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 00:06:55.196792 containerd[1445]: time="2025-05-09T00:06:55.196742237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 00:06:55.196792 containerd[1445]: time="2025-05-09T00:06:55.196756241Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 00:06:55.196792 containerd[1445]: time="2025-05-09T00:06:55.196769748Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 00:06:55.196792 containerd[1445]: time="2025-05-09T00:06:55.196781847Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 00:06:55.196859 containerd[1445]: time="2025-05-09T00:06:55.196818101Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:06:55.196859 containerd[1445]: time="2025-05-09T00:06:55.196838610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196859 containerd[1445]: time="2025-05-09T00:06:55.196851289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196913 containerd[1445]: time="2025-05-09T00:06:55.196862517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196913 containerd[1445]: time="2025-05-09T00:06:55.196875320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196913 containerd[1445]: time="2025-05-09T00:06:55.196886797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196913 containerd[1445]: time="2025-05-09T00:06:55.196899973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196913 containerd[1445]: time="2025-05-09T00:06:55.196910828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196996 containerd[1445]: time="2025-05-09T00:06:55.196922761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 9 00:06:55.196996 containerd[1445]: time="2025-05-09T00:06:55.196935440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196996 containerd[1445]: time="2025-05-09T00:06:55.196949154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196996 containerd[1445]: time="2025-05-09T00:06:55.196962703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196996 containerd[1445]: time="2025-05-09T00:06:55.196974677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:06:55.196996 containerd[1445]: time="2025-05-09T00:06:55.196986485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:06:55.197096 containerd[1445]: time="2025-05-09T00:06:55.196999868Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:06:55.197096 containerd[1445]: time="2025-05-09T00:06:55.197019093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 00:06:55.197096 containerd[1445]: time="2025-05-09T00:06:55.197031565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:06:55.197096 containerd[1445]: time="2025-05-09T00:06:55.197041053Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:06:55.197897 containerd[1445]: time="2025-05-09T00:06:55.197870089Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:06:55.197978 containerd[1445]: time="2025-05-09T00:06:55.197906095Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:06:55.198003 containerd[1445]: time="2025-05-09T00:06:55.197979473Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:06:55.198003 containerd[1445]: time="2025-05-09T00:06:55.197993187Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:06:55.198055 containerd[1445]: time="2025-05-09T00:06:55.198002012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:06:55.198055 containerd[1445]: time="2025-05-09T00:06:55.198015561Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:06:55.198055 containerd[1445]: time="2025-05-09T00:06:55.198025422Z" level=info msg="NRI interface is disabled by configuration." May 9 00:06:55.198055 containerd[1445]: time="2025-05-09T00:06:55.198037728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 9 00:06:55.198410 containerd[1445]: time="2025-05-09T00:06:55.198366707Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:06:55.198528 containerd[1445]: time="2025-05-09T00:06:55.198416800Z" level=info msg="Connect containerd service" May 9 00:06:55.198528 containerd[1445]: time="2025-05-09T00:06:55.198447875Z" level=info msg="using legacy CRI server" May 9 00:06:55.198528 containerd[1445]: time="2025-05-09T00:06:55.198455125Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:06:55.198714 containerd[1445]: time="2025-05-09T00:06:55.198697178Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:06:55.199469 containerd[1445]: time="2025-05-09T00:06:55.199410740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:06:55.199641 
containerd[1445]: time="2025-05-09T00:06:55.199596982Z" level=info msg="Start subscribing containerd event" May 9 00:06:55.199669 containerd[1445]: time="2025-05-09T00:06:55.199650224Z" level=info msg="Start recovering state" May 9 00:06:55.200113 containerd[1445]: time="2025-05-09T00:06:55.199707029Z" level=info msg="Start event monitor" May 9 00:06:55.200113 containerd[1445]: time="2025-05-09T00:06:55.199719997Z" level=info msg="Start snapshots syncer" May 9 00:06:55.200113 containerd[1445]: time="2025-05-09T00:06:55.199728657Z" level=info msg="Start cni network conf syncer for default" May 9 00:06:55.200113 containerd[1445]: time="2025-05-09T00:06:55.199736073Z" level=info msg="Start streaming server" May 9 00:06:55.200574 containerd[1445]: time="2025-05-09T00:06:55.200554710Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:06:55.200624 containerd[1445]: time="2025-05-09T00:06:55.200611556Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:06:55.200743 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:06:55.203809 containerd[1445]: time="2025-05-09T00:06:55.203694060Z" level=info msg="containerd successfully booted in 0.047049s" May 9 00:06:55.383290 tar[1434]: linux-arm64/README.md May 9 00:06:55.400401 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 00:06:55.788571 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 00:06:55.807492 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:06:55.824909 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:06:55.830235 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:06:55.830434 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:06:55.832787 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:06:55.844157 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 00:06:55.846570 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:06:55.848480 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 00:06:55.849556 systemd[1]: Reached target getty.target - Login Prompts. May 9 00:06:56.757648 systemd-networkd[1381]: eth0: Gained IPv6LL May 9 00:06:56.760269 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:06:56.762014 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:06:56.774929 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:06:56.777127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:06:56.779012 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:06:56.799024 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:06:56.800993 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:06:56.801215 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:06:56.803512 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:06:57.323245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:06:57.324967 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:06:57.325893 systemd[1]: Startup finished in 518ms (kernel) + 4.500s (initrd) + 4.162s (userspace) = 9.181s. 
May 9 00:06:57.326688 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:06:57.748818 kubelet[1522]: E0509 00:06:57.748704 1522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:06:57.751471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:06:57.751617 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:07:01.859239 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:07:01.860341 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:53338.service - OpenSSH per-connection server daemon (10.0.0.1:53338). May 9 00:07:01.919033 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 53338 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:07:01.920597 sshd-session[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:07:01.928780 systemd-logind[1423]: New session 1 of user core. May 9 00:07:01.929745 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:07:01.942952 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:07:01.954088 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:07:01.956933 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:07:01.963226 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:07:02.044531 systemd[1540]: Queued start job for default target default.target. May 9 00:07:02.056548 systemd[1540]: Created slice app.slice - User Application Slice. May 9 00:07:02.056579 systemd[1540]: Reached target paths.target - Paths. May 9 00:07:02.056592 systemd[1540]: Reached target timers.target - Timers. May 9 00:07:02.057837 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:07:02.067285 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:07:02.067349 systemd[1540]: Reached target sockets.target - Sockets. May 9 00:07:02.067361 systemd[1540]: Reached target basic.target - Basic System. May 9 00:07:02.067396 systemd[1540]: Reached target default.target - Main User Target. May 9 00:07:02.067421 systemd[1540]: Startup finished in 98ms. May 9 00:07:02.067730 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:07:02.069003 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:07:02.130715 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:53342.service - OpenSSH per-connection server daemon (10.0.0.1:53342). May 9 00:07:02.174972 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 53342 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:07:02.176202 sshd-session[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:07:02.180047 systemd-logind[1423]: New session 2 of user core. May 9 00:07:02.189773 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 9 00:07:02.241108 sshd[1553]: Connection closed by 10.0.0.1 port 53342 May 9 00:07:02.241518 sshd-session[1551]: pam_unix(sshd:session): session closed for user core May 9 00:07:02.251038 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:53342.service: Deactivated successfully. May 9 00:07:02.252417 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:07:02.253695 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. May 9 00:07:02.254756 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:53354.service - OpenSSH per-connection server daemon (10.0.0.1:53354). May 9 00:07:02.255976 systemd-logind[1423]: Removed session 2. May 9 00:07:02.298519 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 53354 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:07:02.299886 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:07:02.304108 systemd-logind[1423]: New session 3 of user core. May 9 00:07:02.313766 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:07:02.362379 sshd[1560]: Connection closed by 10.0.0.1 port 53354 May 9 00:07:02.362797 sshd-session[1558]: pam_unix(sshd:session): session closed for user core May 9 00:07:02.376116 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:53354.service: Deactivated successfully. May 9 00:07:02.377512 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:07:02.379796 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. May 9 00:07:02.380944 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:52858.service - OpenSSH per-connection server daemon (10.0.0.1:52858). May 9 00:07:02.381659 systemd-logind[1423]: Removed session 3. May 9 00:07:02.425005 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 52858 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:07:02.426167 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:07:02.430401 systemd-logind[1423]: New session 4 of user core. May 9 00:07:02.440765 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:07:02.493606 sshd[1567]: Connection closed by 10.0.0.1 port 52858 May 9 00:07:02.494322 sshd-session[1565]: pam_unix(sshd:session): session closed for user core May 9 00:07:02.502259 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:52858.service: Deactivated successfully. May 9 00:07:02.505168 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:07:02.507143 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. May 9 00:07:02.508491 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:52872.service - OpenSSH per-connection server daemon (10.0.0.1:52872). May 9 00:07:02.511874 systemd-logind[1423]: Removed session 4. May 9 00:07:02.555059 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 52872 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:07:02.556346 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:07:02.560685 systemd-logind[1423]: New session 5 of user core. May 9 00:07:02.566778 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:07:02.632273 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:07:02.632575 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:07:02.963840 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 9 00:07:02.963988 (dockerd)[1596]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:07:03.238388 dockerd[1596]: time="2025-05-09T00:07:03.238254221Z" level=info msg="Starting up" May 9 00:07:03.421383 dockerd[1596]: time="2025-05-09T00:07:03.421109947Z" level=info msg="Loading containers: start." May 9 00:07:03.588495 kernel: Initializing XFRM netlink socket May 9 00:07:03.672932 systemd-networkd[1381]: docker0: Link UP May 9 00:07:03.718959 dockerd[1596]: time="2025-05-09T00:07:03.718892115Z" level=info msg="Loading containers: done." May 9 00:07:03.731181 dockerd[1596]: time="2025-05-09T00:07:03.731116827Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:07:03.731329 dockerd[1596]: time="2025-05-09T00:07:03.731229234Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 9 00:07:03.731355 dockerd[1596]: time="2025-05-09T00:07:03.731330424Z" level=info msg="Daemon has completed initialization" May 9 00:07:03.762714 dockerd[1596]: time="2025-05-09T00:07:03.762543544Z" level=info msg="API listen on /run/docker.sock" May 9 00:07:03.762770 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:07:04.417887 containerd[1445]: time="2025-05-09T00:07:04.417669475Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 9 00:07:05.062772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766126192.mount: Deactivated successfully. May 9 00:07:06.561773 containerd[1445]: time="2025-05-09T00:07:06.560651407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:06.562247 containerd[1445]: time="2025-05-09T00:07:06.562221652Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 9 00:07:06.563078 containerd[1445]: time="2025-05-09T00:07:06.563055069Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:06.566093 containerd[1445]: time="2025-05-09T00:07:06.566063237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:06.570690 containerd[1445]: time="2025-05-09T00:07:06.570653626Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.152939987s" May 9 00:07:06.570814 containerd[1445]: time="2025-05-09T00:07:06.570797241Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 9 00:07:06.571826 containerd[1445]: time="2025-05-09T00:07:06.571783791Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 9 00:07:08.002063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:07:08.007866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:07:08.115318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:07:08.121342 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:07:08.163279 kubelet[1860]: E0509 00:07:08.163207 1860 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:07:08.168357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:07:08.168525 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:07:08.361682 containerd[1445]: time="2025-05-09T00:07:08.361557628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:08.363017 containerd[1445]: time="2025-05-09T00:07:08.362280564Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 9 00:07:08.363159 containerd[1445]: time="2025-05-09T00:07:08.363133999Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:08.366121 containerd[1445]: time="2025-05-09T00:07:08.366067648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:08.367745 containerd[1445]: time="2025-05-09T00:07:08.367655934Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.795839452s" May 9 00:07:08.367745 containerd[1445]: time="2025-05-09T00:07:08.367691276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 9 00:07:08.368291 containerd[1445]: time="2025-05-09T00:07:08.368261252Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 9 00:07:09.818030 containerd[1445]: time="2025-05-09T00:07:09.817582874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:09.819820 containerd[1445]: time="2025-05-09T00:07:09.819772948Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 9 00:07:09.821366 containerd[1445]: time="2025-05-09T00:07:09.821315866Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:09.824425 containerd[1445]: time="2025-05-09T00:07:09.824374955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:09.826351 containerd[1445]: time="2025-05-09T00:07:09.826277248Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.457981868s" May 9 00:07:09.826351 containerd[1445]: time="2025-05-09T00:07:09.826308862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 9 00:07:09.826923 containerd[1445]: time="2025-05-09T00:07:09.826771444Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 9 00:07:11.014371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610448428.mount: Deactivated successfully. May 9 00:07:11.224952 containerd[1445]: time="2025-05-09T00:07:11.224902045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:11.225395 containerd[1445]: time="2025-05-09T00:07:11.225335351Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 9 00:07:11.226230 containerd[1445]: time="2025-05-09T00:07:11.226184929Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:11.227964 containerd[1445]: time="2025-05-09T00:07:11.227935142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:11.228770 containerd[1445]: time="2025-05-09T00:07:11.228733224Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.401933958s" May 9 00:07:11.228770 containerd[1445]: time="2025-05-09T00:07:11.228770059Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 9 00:07:11.229359 containerd[1445]: time="2025-05-09T00:07:11.229292460Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 9 00:07:11.755139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987820807.mount: Deactivated successfully. 
May 9 00:07:12.911368 containerd[1445]: time="2025-05-09T00:07:12.911314507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:12.911953 containerd[1445]: time="2025-05-09T00:07:12.911909542Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 9 00:07:12.912735 containerd[1445]: time="2025-05-09T00:07:12.912706842Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:12.915840 containerd[1445]: time="2025-05-09T00:07:12.915803382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:12.917608 containerd[1445]: time="2025-05-09T00:07:12.917567246Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.688240045s" May 9 00:07:12.917653 containerd[1445]: time="2025-05-09T00:07:12.917622289Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 9 00:07:12.918658 containerd[1445]: time="2025-05-09T00:07:12.918624787Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 00:07:13.394327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043348163.mount: Deactivated successfully. 
May 9 00:07:13.401523 containerd[1445]: time="2025-05-09T00:07:13.401456244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:13.402552 containerd[1445]: time="2025-05-09T00:07:13.402492590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 9 00:07:13.403359 containerd[1445]: time="2025-05-09T00:07:13.403330174Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:13.405390 containerd[1445]: time="2025-05-09T00:07:13.405352743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:13.406353 containerd[1445]: time="2025-05-09T00:07:13.406311559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 487.65004ms" May 9 00:07:13.406353 containerd[1445]: time="2025-05-09T00:07:13.406345187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 9 00:07:13.406975 containerd[1445]: time="2025-05-09T00:07:13.406941111Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 9 00:07:13.968196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408972914.mount: Deactivated successfully. May 9 00:07:17.004483 containerd[1445]: time="2025-05-09T00:07:17.004409760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:17.004909 containerd[1445]: time="2025-05-09T00:07:17.004756978Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 9 00:07:17.005968 containerd[1445]: time="2025-05-09T00:07:17.005927155Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:17.009831 containerd[1445]: time="2025-05-09T00:07:17.009780858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:17.010742 containerd[1445]: time="2025-05-09T00:07:17.010696833Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.603721584s" May 9 00:07:17.010783 containerd[1445]: time="2025-05-09T00:07:17.010741197Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 9 00:07:18.200420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
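[Editorial aside, not part of the log.] The sequence of pulls above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd) arrives at containerd through its CRI image service on the same socket. A hedged sketch of an equivalent pull issued directly through containerd's Go client — illustrative only, not the code path that produced these log lines — could be:

```go
// Sketch: pull one of the images seen in the log via the containerd Go client.
// Illustrative only; the pulls in the log are driven through the CRI image
// service, not through this client. Assumes github.com/containerd/containerd.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference as in the log; WithPullUnpack also unpacks the layers
	// into the snapshotter so the image is ready to run.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}

	size, err := img.Size(ctx)
	if err != nil {
		log.Fatalf("size: %v", err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```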
May 9 00:07:18.209859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:07:18.306664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:07:18.310863 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:07:18.347479 kubelet[2022]: E0509 00:07:18.347427 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:07:18.350308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:07:18.350471 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:07:22.230424 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:07:22.240975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:07:22.270900 systemd[1]: Reloading requested from client PID 2037 ('systemctl') (unit session-5.scope)... May 9 00:07:22.270918 systemd[1]: Reloading... May 9 00:07:22.337987 zram_generator::config[2076]: No configuration found. May 9 00:07:22.484720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:07:22.537964 systemd[1]: Reloading finished in 266 ms. May 9 00:07:22.587032 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:07:22.589372 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:07:22.589618 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:07:22.591109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:07:22.695984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:07:22.700090 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:07:22.738133 kubelet[2123]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:07:22.738133 kubelet[2123]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 00:07:22.738133 kubelet[2123]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
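[Editorial aside, not part of the log.] The deprecation notices above point at the kubelet config file — the same /var/lib/kubelet/config.yaml whose absence caused the earlier kubelet exits; on a kubeadm-style setup it is typically written during init/join. A hedged sketch of emitting a minimal file of that shape in Go is below; the struct is a hand-rolled illustrative subset, not the upstream k8s.io/kubelet API type, and the field values are assumptions rather than values read from this machine:

```go
// Sketch: emit a minimal kubelet config file of the shape the log refers to
// (kubelet.config.k8s.io/v1beta1 KubeletConfiguration). The struct is a
// hand-rolled illustrative subset, not the upstream API type, and the values
// below are assumptions, not taken from this host.
package main

import (
	"fmt"
	"log"

	"sigs.k8s.io/yaml" // marshals via JSON tags into YAML
)

type kubeletConfig struct {
	APIVersion    string   `json:"apiVersion"`
	Kind          string   `json:"kind"`
	StaticPodPath string   `json:"staticPodPath,omitempty"`
	CgroupDriver  string   `json:"cgroupDriver,omitempty"`
	ClusterDomain string   `json:"clusterDomain,omitempty"`
	ClusterDNS    []string `json:"clusterDNS,omitempty"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:    "kubelet.config.k8s.io/v1beta1",
		Kind:          "KubeletConfiguration",
		StaticPodPath: "/etc/kubernetes/manifests", // matches the static pod path logged further down
		CgroupDriver:  "systemd",                   // matches SystemdCgroup:true in the CRI config above
		ClusterDomain: "cluster.local",             // assumed kubeadm-style default
		ClusterDNS:    []string{"10.96.0.10"},      // assumed kubeadm-style default
	}

	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// A real setup would write this to /var/lib/kubelet/config.yaml.
	fmt.Print(string(out))
}
```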
May 9 00:07:22.738133 kubelet[2123]: I0509 00:07:22.735855 2123 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:07:23.285783 kubelet[2123]: I0509 00:07:23.285721 2123 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 00:07:23.285967 kubelet[2123]: I0509 00:07:23.285941 2123 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:07:23.286312 kubelet[2123]: I0509 00:07:23.286296 2123 server.go:954] "Client rotation is on, will bootstrap in background" May 9 00:07:23.336581 kubelet[2123]: E0509 00:07:23.336525 2123 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 9 00:07:23.337972 kubelet[2123]: I0509 00:07:23.337942 2123 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:07:23.346178 kubelet[2123]: E0509 00:07:23.346137 2123 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:07:23.346178 kubelet[2123]: I0509 00:07:23.346175 2123 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:07:23.348925 kubelet[2123]: I0509 00:07:23.348898 2123 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:07:23.350318 kubelet[2123]: I0509 00:07:23.350262 2123 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:07:23.350513 kubelet[2123]: I0509 00:07:23.350323 2123 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:07:23.350611 kubelet[2123]: I0509 00:07:23.350587 2123 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:07:23.350641 kubelet[2123]: I0509 00:07:23.350614 2123 container_manager_linux.go:304] "Creating device plugin manager" May 9 00:07:23.350851 kubelet[2123]: I0509 00:07:23.350834 2123 state_mem.go:36] "Initialized new in-memory state store" May 9 00:07:23.355085 kubelet[2123]: I0509 00:07:23.355047 2123 kubelet.go:446] "Attempting to sync node with API server" May 9 00:07:23.355085 kubelet[2123]: I0509 00:07:23.355081 2123 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:07:23.355177 kubelet[2123]: I0509 00:07:23.355107 2123 kubelet.go:352] "Adding apiserver pod source" May 9 00:07:23.355177 kubelet[2123]: I0509 00:07:23.355125 2123 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:07:23.362269 kubelet[2123]: I0509 00:07:23.360681 2123 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 00:07:23.362269 kubelet[2123]: W0509 00:07:23.360918 2123 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 9 00:07:23.362269 kubelet[2123]: E0509 00:07:23.360980 2123 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 9 00:07:23.362269 kubelet[2123]: W0509 00:07:23.361058 2123 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 9 00:07:23.362269 kubelet[2123]: E0509 00:07:23.361083 2123 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 9 00:07:23.362269 kubelet[2123]: I0509 00:07:23.361494 2123 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:07:23.362269 kubelet[2123]: W0509 00:07:23.361646 2123 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:07:23.364187 kubelet[2123]: I0509 00:07:23.363885 2123 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 00:07:23.364187 kubelet[2123]: I0509 00:07:23.363948 2123 server.go:1287] "Started kubelet" May 9 00:07:23.364891 kubelet[2123]: I0509 00:07:23.364546 2123 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:07:23.366279 kubelet[2123]: I0509 00:07:23.366173 2123 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:07:23.366625 kubelet[2123]: I0509 00:07:23.366588 2123 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:07:23.368778 kubelet[2123]: I0509 00:07:23.368733 2123 server.go:490] "Adding debug handlers to kubelet server" May 9 00:07:23.368944 kubelet[2123]: I0509 00:07:23.368909 2123 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:07:23.369951 kubelet[2123]: I0509 00:07:23.368742 2123 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:07:23.371514 kubelet[2123]: E0509 00:07:23.371484 2123 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:07:23.371514 kubelet[2123]: I0509 00:07:23.371521 2123 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 00:07:23.371959 kubelet[2123]: I0509 00:07:23.371731 2123 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:07:23.372082 kubelet[2123]: I0509 00:07:23.372033 2123 reconciler.go:26] "Reconciler: start to sync state" May 9 00:07:23.372503 kubelet[2123]: W0509 00:07:23.372455 2123 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 9 00:07:23.372566 kubelet[2123]: E0509 00:07:23.372510 2123 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 9 00:07:23.372566 kubelet[2123]: E0509 00:07:23.372530 2123 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:07:23.373009 kubelet[2123]: E0509 00:07:23.372719 2123 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db32ea26363b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:07:23.36391058 +0000 UTC m=+0.660745327,LastTimestamp:2025-05-09 00:07:23.36391058 +0000 UTC m=+0.660745327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:07:23.373716 kubelet[2123]: E0509 00:07:23.373130 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" May 9 00:07:23.373716 kubelet[2123]: I0509 00:07:23.373267 2123 factory.go:221] Registration of the systemd container factory successfully May 9 00:07:23.373716 kubelet[2123]: I0509 00:07:23.373404 2123 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:07:23.374642 kubelet[2123]: I0509 00:07:23.374612 2123 factory.go:221] Registration of the containerd container factory successfully May 9 00:07:23.387974 kubelet[2123]: I0509 00:07:23.387903 2123 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:07:23.389260 kubelet[2123]: I0509 00:07:23.389228 2123 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:07:23.389260 kubelet[2123]: I0509 00:07:23.389256 2123 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 00:07:23.389401 kubelet[2123]: I0509 00:07:23.389283 2123 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 9 00:07:23.389401 kubelet[2123]: I0509 00:07:23.389290 2123 kubelet.go:2388] "Starting kubelet main sync loop" May 9 00:07:23.389401 kubelet[2123]: E0509 00:07:23.389342 2123 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:07:23.390001 kubelet[2123]: W0509 00:07:23.389930 2123 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 9 00:07:23.390001 kubelet[2123]: E0509 00:07:23.389985 2123 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 9 00:07:23.390922 kubelet[2123]: I0509 00:07:23.390899 2123 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 00:07:23.391354 kubelet[2123]: I0509 00:07:23.391065 2123 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 00:07:23.391354 kubelet[2123]: I0509 00:07:23.391091 2123 state_mem.go:36] "Initialized new in-memory state store" May 9 00:07:23.472203 kubelet[2123]: E0509 00:07:23.472161 2123 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:07:23.484772 kubelet[2123]: I0509 00:07:23.484708 2123 policy_none.go:49] "None policy: Start" May 9 00:07:23.484772 kubelet[2123]: I0509 00:07:23.484769 2123 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 00:07:23.484896 kubelet[2123]: I0509 00:07:23.484791 2123 state_mem.go:35] "Initializing new in-memory state store" May 9 00:07:23.489642 kubelet[2123]: E0509 00:07:23.489588 2123 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:07:23.489967 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:07:23.503750 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 00:07:23.506737 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 00:07:23.521645 kubelet[2123]: I0509 00:07:23.521470 2123 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:07:23.521760 kubelet[2123]: I0509 00:07:23.521726 2123 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:07:23.521788 kubelet[2123]: I0509 00:07:23.521738 2123 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:07:23.522453 kubelet[2123]: I0509 00:07:23.522424 2123 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:07:23.524121 kubelet[2123]: E0509 00:07:23.524080 2123 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 9 00:07:23.524235 kubelet[2123]: E0509 00:07:23.524139 2123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 00:07:23.574905 kubelet[2123]: E0509 00:07:23.574011 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" May 9 00:07:23.624185 kubelet[2123]: I0509 00:07:23.624139 2123 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:07:23.624693 kubelet[2123]: E0509 00:07:23.624660 2123 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" May 9 00:07:23.700392 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 9 00:07:23.714442 kubelet[2123]: E0509 00:07:23.714388 2123 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:07:23.717821 systemd[1]: Created slice kubepods-burstable-poda8e0bd089d2186a0640a50b9c05a6a41.slice - libcontainer container kubepods-burstable-poda8e0bd089d2186a0640a50b9c05a6a41.slice. May 9 00:07:23.727916 kubelet[2123]: E0509 00:07:23.727870 2123 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:07:23.730499 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. 
May 9 00:07:23.732518 kubelet[2123]: E0509 00:07:23.732465 2123 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:07:23.774006 kubelet[2123]: I0509 00:07:23.773956 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:07:23.774006 kubelet[2123]: I0509 00:07:23.773997 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 9 00:07:23.774388 kubelet[2123]: I0509 00:07:23.774021 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 9 00:07:23.774388 kubelet[2123]: I0509 00:07:23.774082 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:07:23.774388 kubelet[2123]: I0509 00:07:23.774143 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:07:23.774388 kubelet[2123]: I0509 00:07:23.774164 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:07:23.774388 kubelet[2123]: I0509 00:07:23.774181 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 9 00:07:23.774501 kubelet[2123]: I0509 00:07:23.774196 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 9 00:07:23.774501 kubelet[2123]: I0509 00:07:23.774239 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:07:23.826080 kubelet[2123]: I0509 00:07:23.825983 2123 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:07:23.826385 kubelet[2123]: E0509 00:07:23.826351 2123 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" May 9 00:07:23.975159 kubelet[2123]: E0509 00:07:23.975098 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" May 9 00:07:24.015488 kubelet[2123]: E0509 00:07:24.015450 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:24.018042 containerd[1445]: time="2025-05-09T00:07:24.017927840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 9 00:07:24.028880 kubelet[2123]: E0509 00:07:24.028836 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:24.029392 containerd[1445]: time="2025-05-09T00:07:24.029355673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8e0bd089d2186a0640a50b9c05a6a41,Namespace:kube-system,Attempt:0,}" May 9 00:07:24.033897 kubelet[2123]: E0509 00:07:24.033679 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:24.034235 containerd[1445]: time="2025-05-09T00:07:24.034184070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 9 00:07:24.228092 kubelet[2123]: I0509 00:07:24.227931 2123 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:07:24.228453 kubelet[2123]: E0509 00:07:24.228398 2123 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" May 9 00:07:24.302417 kubelet[2123]: W0509 00:07:24.302371 2123 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 9 00:07:24.304428 kubelet[2123]: E0509 00:07:24.302419 2123 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 9 00:07:24.312064 kubelet[2123]: W0509 
00:07:24.312000 2123 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 9 00:07:24.312212 kubelet[2123]: E0509 00:07:24.312070 2123 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 9 00:07:24.541019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312332486.mount: Deactivated successfully. May 9 00:07:24.546266 containerd[1445]: time="2025-05-09T00:07:24.546211603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:07:24.548726 containerd[1445]: time="2025-05-09T00:07:24.548666072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 9 00:07:24.549511 containerd[1445]: time="2025-05-09T00:07:24.549479038Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:07:24.550512 containerd[1445]: time="2025-05-09T00:07:24.550479503Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:07:24.551153 containerd[1445]: time="2025-05-09T00:07:24.551106170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:07:24.551964 containerd[1445]: time="2025-05-09T00:07:24.551926341Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:07:24.552799 containerd[1445]: time="2025-05-09T00:07:24.552609410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:07:24.555019 containerd[1445]: time="2025-05-09T00:07:24.554976773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:07:24.555917 containerd[1445]: time="2025-05-09T00:07:24.555884329Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.87659ms" May 9 00:07:24.561031 containerd[1445]: time="2025-05-09T00:07:24.560833136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"268403\" in 531.393681ms" May 9 00:07:24.564208 containerd[1445]: time="2025-05-09T00:07:24.564156572Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.882916ms" May 9 00:07:24.693174 containerd[1445]: time="2025-05-09T00:07:24.693061925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:07:24.693174 containerd[1445]: time="2025-05-09T00:07:24.693132818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:07:24.693174 containerd[1445]: time="2025-05-09T00:07:24.693145147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:24.693767 containerd[1445]: time="2025-05-09T00:07:24.693519866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:24.695993 containerd[1445]: time="2025-05-09T00:07:24.695508387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:07:24.695993 containerd[1445]: time="2025-05-09T00:07:24.695956681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:07:24.695993 containerd[1445]: time="2025-05-09T00:07:24.695970051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:24.696116 containerd[1445]: time="2025-05-09T00:07:24.696051792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:24.698116 containerd[1445]: time="2025-05-09T00:07:24.697645620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:07:24.698116 containerd[1445]: time="2025-05-09T00:07:24.698071657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:07:24.698116 containerd[1445]: time="2025-05-09T00:07:24.698086188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:24.698491 containerd[1445]: time="2025-05-09T00:07:24.698425320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:24.723875 systemd[1]: Started cri-containerd-3258b1b375b4ad97acb96b3fa99486c4e16f47a05c0ef3082e2ad6f1fe64a1f3.scope - libcontainer container 3258b1b375b4ad97acb96b3fa99486c4e16f47a05c0ef3082e2ad6f1fe64a1f3. May 9 00:07:24.725314 systemd[1]: Started cri-containerd-ce577f7ddb643080dee34cdd5ca6ecec8f713a7227410e96f7dab9c5ba82a940.scope - libcontainer container ce577f7ddb643080dee34cdd5ca6ecec8f713a7227410e96f7dab9c5ba82a940. 
May 9 00:07:24.729070 systemd[1]: Started cri-containerd-8b4943a5e1f2b8c4563a392abd2170fea8e0d4237d18ead19697640b588a1e59.scope - libcontainer container 8b4943a5e1f2b8c4563a392abd2170fea8e0d4237d18ead19697640b588a1e59. May 9 00:07:24.762718 containerd[1445]: time="2025-05-09T00:07:24.762656211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"3258b1b375b4ad97acb96b3fa99486c4e16f47a05c0ef3082e2ad6f1fe64a1f3\"" May 9 00:07:24.764078 kubelet[2123]: E0509 00:07:24.763974 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:24.769120 containerd[1445]: time="2025-05-09T00:07:24.767230019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8e0bd089d2186a0640a50b9c05a6a41,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b4943a5e1f2b8c4563a392abd2170fea8e0d4237d18ead19697640b588a1e59\"" May 9 00:07:24.769120 containerd[1445]: time="2025-05-09T00:07:24.768277439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce577f7ddb643080dee34cdd5ca6ecec8f713a7227410e96f7dab9c5ba82a940\"" May 9 00:07:24.769274 kubelet[2123]: E0509 00:07:24.769019 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:24.769801 kubelet[2123]: E0509 00:07:24.769760 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:24.770323 containerd[1445]: time="2025-05-09T00:07:24.770176734Z" level=info msg="CreateContainer within sandbox \"3258b1b375b4ad97acb96b3fa99486c4e16f47a05c0ef3082e2ad6f1fe64a1f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:07:24.772318 containerd[1445]: time="2025-05-09T00:07:24.772242393Z" level=info msg="CreateContainer within sandbox \"ce577f7ddb643080dee34cdd5ca6ecec8f713a7227410e96f7dab9c5ba82a940\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:07:24.772540 containerd[1445]: time="2025-05-09T00:07:24.772244114Z" level=info msg="CreateContainer within sandbox \"8b4943a5e1f2b8c4563a392abd2170fea8e0d4237d18ead19697640b588a1e59\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:07:24.776032 kubelet[2123]: E0509 00:07:24.775989 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s" May 9 00:07:24.808079 containerd[1445]: time="2025-05-09T00:07:24.807838752Z" level=info msg="CreateContainer within sandbox \"3258b1b375b4ad97acb96b3fa99486c4e16f47a05c0ef3082e2ad6f1fe64a1f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fdd7aae2b9f33075611bb3e8d40fb0fd3c15d52bc197dfb4960e7c54f059dff7\"" May 9 00:07:24.808707 containerd[1445]: time="2025-05-09T00:07:24.808677056Z" level=info msg="CreateContainer within sandbox \"8b4943a5e1f2b8c4563a392abd2170fea8e0d4237d18ead19697640b588a1e59\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"946dda674ee9e5b831ccc3142e102938e039f4ea6c1e6a0405fb4e8846dcfd3d\"" May 9 00:07:24.810022 containerd[1445]: time="2025-05-09T00:07:24.808749670Z" level=info msg="StartContainer for \"fdd7aae2b9f33075611bb3e8d40fb0fd3c15d52bc197dfb4960e7c54f059dff7\"" May 9 00:07:24.810022 containerd[1445]: time="2025-05-09T00:07:24.809759943Z" level=info msg="CreateContainer within sandbox \"ce577f7ddb643080dee34cdd5ca6ecec8f713a7227410e96f7dab9c5ba82a940\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"29cd22aa9d07a597c36cd68b3889fd046bfc3722712297ee71faa8f99b981a71\"" May 9 00:07:24.810273 containerd[1445]: time="2025-05-09T00:07:24.810249788Z" level=info msg="StartContainer for \"29cd22aa9d07a597c36cd68b3889fd046bfc3722712297ee71faa8f99b981a71\"" May 9 00:07:24.810484 containerd[1445]: time="2025-05-09T00:07:24.810447495Z" level=info msg="StartContainer for \"946dda674ee9e5b831ccc3142e102938e039f4ea6c1e6a0405fb4e8846dcfd3d\"" May 9 00:07:24.831036 kubelet[2123]: W0509 00:07:24.830975 2123 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 9 00:07:24.831173 kubelet[2123]: E0509 00:07:24.831046 2123 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 9 00:07:24.845867 systemd[1]: Started cri-containerd-946dda674ee9e5b831ccc3142e102938e039f4ea6c1e6a0405fb4e8846dcfd3d.scope - libcontainer container 946dda674ee9e5b831ccc3142e102938e039f4ea6c1e6a0405fb4e8846dcfd3d. May 9 00:07:24.847132 systemd[1]: Started cri-containerd-fdd7aae2b9f33075611bb3e8d40fb0fd3c15d52bc197dfb4960e7c54f059dff7.scope - libcontainer container fdd7aae2b9f33075611bb3e8d40fb0fd3c15d52bc197dfb4960e7c54f059dff7. May 9 00:07:24.852712 systemd[1]: Started cri-containerd-29cd22aa9d07a597c36cd68b3889fd046bfc3722712297ee71faa8f99b981a71.scope - libcontainer container 29cd22aa9d07a597c36cd68b3889fd046bfc3722712297ee71faa8f99b981a71. 
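[Editor's note] The reflector, lease, and node-registration errors above all reduce to one symptom: nothing is listening on 10.0.0.117:6443 yet, because the kube-apiserver static pod container is only now being started. A minimal Go sketch (not part of the log; the endpoint is copied from the messages above) that reproduces the same "connection refused" probe:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the kubelet errors above; this is a stand-alone check,
	// not something the kubelet itself runs.
	conn, err := net.DialTimeout("tcp", "10.0.0.117:6443", 2*time.Second)
	if err != nil {
		// While the kube-apiserver container is still starting this prints
		// "dial tcp 10.0.0.117:6443: connect: connection refused".
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
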
May 9 00:07:24.859287 kubelet[2123]: W0509 00:07:24.857479 2123 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 9 00:07:24.859479 kubelet[2123]: E0509 00:07:24.859329 2123 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 9 00:07:24.884942 containerd[1445]: time="2025-05-09T00:07:24.884872101Z" level=info msg="StartContainer for \"fdd7aae2b9f33075611bb3e8d40fb0fd3c15d52bc197dfb4960e7c54f059dff7\" returns successfully" May 9 00:07:24.902330 containerd[1445]: time="2025-05-09T00:07:24.901425433Z" level=info msg="StartContainer for \"946dda674ee9e5b831ccc3142e102938e039f4ea6c1e6a0405fb4e8846dcfd3d\" returns successfully" May 9 00:07:24.938236 containerd[1445]: time="2025-05-09T00:07:24.929478372Z" level=info msg="StartContainer for \"29cd22aa9d07a597c36cd68b3889fd046bfc3722712297ee71faa8f99b981a71\" returns successfully" May 9 00:07:25.031466 kubelet[2123]: I0509 00:07:25.031015 2123 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:07:25.031466 kubelet[2123]: E0509 00:07:25.031392 2123 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" May 9 00:07:25.402629 kubelet[2123]: E0509 00:07:25.401199 2123 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:07:25.402629 kubelet[2123]: E0509 00:07:25.401334 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:25.405822 kubelet[2123]: E0509 00:07:25.405792 2123 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:07:25.406132 kubelet[2123]: E0509 00:07:25.406109 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:25.409490 kubelet[2123]: E0509 00:07:25.409458 2123 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:07:25.409974 kubelet[2123]: E0509 00:07:25.409957 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:26.412443 kubelet[2123]: E0509 00:07:26.412397 2123 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:07:26.412824 kubelet[2123]: E0509 00:07:26.412565 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:26.413515 kubelet[2123]: E0509 00:07:26.413472 
2123 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:07:26.413668 kubelet[2123]: E0509 00:07:26.413633 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:26.633576 kubelet[2123]: I0509 00:07:26.632857 2123 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:07:26.795691 kubelet[2123]: E0509 00:07:26.794941 2123 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 00:07:26.849922 kubelet[2123]: I0509 00:07:26.849869 2123 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 9 00:07:26.873424 kubelet[2123]: I0509 00:07:26.873379 2123 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 9 00:07:26.929896 kubelet[2123]: E0509 00:07:26.929856 2123 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 9 00:07:26.929896 kubelet[2123]: I0509 00:07:26.929892 2123 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 9 00:07:26.932641 kubelet[2123]: E0509 00:07:26.932604 2123 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 9 00:07:26.932641 kubelet[2123]: I0509 00:07:26.932642 2123 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 9 00:07:26.934762 kubelet[2123]: E0509 00:07:26.934735 2123 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 9 00:07:27.362905 kubelet[2123]: I0509 00:07:27.362863 2123 apiserver.go:52] "Watching apiserver" May 9 00:07:27.372928 kubelet[2123]: I0509 00:07:27.372891 2123 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:07:28.318806 kubelet[2123]: I0509 00:07:28.318757 2123 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 9 00:07:28.327589 kubelet[2123]: E0509 00:07:28.327539 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:28.367613 kubelet[2123]: I0509 00:07:28.367566 2123 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 9 00:07:28.373888 kubelet[2123]: E0509 00:07:28.373845 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:28.414947 kubelet[2123]: E0509 00:07:28.414909 2123 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:28.415089 kubelet[2123]: E0509 00:07:28.415007 2123 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:28.669301 systemd[1]: Reloading requested from client PID 2402 ('systemctl') (unit session-5.scope)... May 9 00:07:28.669321 systemd[1]: Reloading... May 9 00:07:28.745634 zram_generator::config[2444]: No configuration found. May 9 00:07:28.833447 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:07:28.900642 systemd[1]: Reloading finished in 230 ms. May 9 00:07:28.934141 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:07:28.947863 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:07:28.948124 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:07:28.948211 systemd[1]: kubelet.service: Consumed 1.084s CPU time, 126.0M memory peak, 0B memory swap peak. May 9 00:07:28.955993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:07:29.061086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:07:29.069053 (kubelet)[2483]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:07:29.119199 kubelet[2483]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:07:29.119199 kubelet[2483]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 00:07:29.119199 kubelet[2483]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:07:29.119559 kubelet[2483]: I0509 00:07:29.119324 2483 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:07:29.125512 kubelet[2483]: I0509 00:07:29.125470 2483 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 00:07:29.125512 kubelet[2483]: I0509 00:07:29.125505 2483 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:07:29.125846 kubelet[2483]: I0509 00:07:29.125817 2483 server.go:954] "Client rotation is on, will bootstrap in background" May 9 00:07:29.127291 kubelet[2483]: I0509 00:07:29.127264 2483 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
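[Editor's note] The restarted kubelet (PID 2483) reports "Client rotation is on, will bootstrap in background" and loads its rotated client credentials from /var/lib/kubelet/pki/kubelet-client-current.pem. A minimal sketch, not part of the log, for inspecting that certificate's subject and expiry; it assumes the leaf certificate is the first PEM block in the file.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the certificate_store.go log entry above.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Assumption: the rotated file stores the client certificate before the key.
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("not after:", cert.NotAfter)
}
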
May 9 00:07:29.130004 kubelet[2483]: I0509 00:07:29.129969 2483 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:07:29.137063 kubelet[2483]: E0509 00:07:29.137003 2483 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:07:29.137063 kubelet[2483]: I0509 00:07:29.137041 2483 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:07:29.139628 kubelet[2483]: I0509 00:07:29.139593 2483 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 00:07:29.139834 kubelet[2483]: I0509 00:07:29.139804 2483 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:07:29.140277 kubelet[2483]: I0509 00:07:29.139836 2483 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:07:29.140419 kubelet[2483]: I0509 00:07:29.140282 2483 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:07:29.140419 kubelet[2483]: I0509 00:07:29.140294 2483 container_manager_linux.go:304] "Creating device plugin manager" May 9 00:07:29.140419 kubelet[2483]: I0509 00:07:29.140352 2483 state_mem.go:36] "Initialized new in-memory state store" May 9 00:07:29.140515 kubelet[2483]: I0509 00:07:29.140498 2483 kubelet.go:446] "Attempting to sync node with API server" May 9 00:07:29.140545 kubelet[2483]: I0509 00:07:29.140515 2483 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:07:29.140545 kubelet[2483]: I0509 00:07:29.140535 2483 kubelet.go:352] "Adding apiserver pod source" May 9 00:07:29.140545 kubelet[2483]: I0509 00:07:29.140544 2483 apiserver.go:42] "Waiting for node sync before watching apiserver 
pods" May 9 00:07:29.142021 kubelet[2483]: I0509 00:07:29.141996 2483 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 00:07:29.142473 kubelet[2483]: I0509 00:07:29.142453 2483 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:07:29.143939 kubelet[2483]: I0509 00:07:29.143900 2483 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 00:07:29.144057 kubelet[2483]: I0509 00:07:29.143956 2483 server.go:1287] "Started kubelet" May 9 00:07:29.145494 kubelet[2483]: I0509 00:07:29.145461 2483 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:07:29.145799 kubelet[2483]: I0509 00:07:29.145764 2483 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:07:29.147115 kubelet[2483]: I0509 00:07:29.147077 2483 server.go:490] "Adding debug handlers to kubelet server" May 9 00:07:29.149038 kubelet[2483]: I0509 00:07:29.148228 2483 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:07:29.151510 kubelet[2483]: I0509 00:07:29.151486 2483 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:07:29.151677 kubelet[2483]: I0509 00:07:29.151064 2483 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:07:29.154252 kubelet[2483]: E0509 00:07:29.152316 2483 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:07:29.154434 kubelet[2483]: I0509 00:07:29.154414 2483 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 00:07:29.154796 kubelet[2483]: I0509 00:07:29.154771 2483 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:07:29.155022 kubelet[2483]: I0509 00:07:29.155007 2483 reconciler.go:26] "Reconciler: start to sync state" May 9 00:07:29.159642 kubelet[2483]: I0509 00:07:29.159560 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:07:29.161274 kubelet[2483]: I0509 00:07:29.161225 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:07:29.161274 kubelet[2483]: I0509 00:07:29.161258 2483 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 00:07:29.161274 kubelet[2483]: I0509 00:07:29.161279 2483 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 9 00:07:29.161274 kubelet[2483]: I0509 00:07:29.161286 2483 kubelet.go:2388] "Starting kubelet main sync loop" May 9 00:07:29.161966 kubelet[2483]: E0509 00:07:29.161342 2483 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:07:29.166668 kubelet[2483]: I0509 00:07:29.166637 2483 factory.go:221] Registration of the containerd container factory successfully May 9 00:07:29.166668 kubelet[2483]: I0509 00:07:29.166663 2483 factory.go:221] Registration of the systemd container factory successfully May 9 00:07:29.168068 kubelet[2483]: I0509 00:07:29.168012 2483 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:07:29.217763 kubelet[2483]: I0509 00:07:29.215935 2483 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 00:07:29.217763 kubelet[2483]: I0509 00:07:29.215958 2483 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 00:07:29.217763 kubelet[2483]: I0509 00:07:29.215983 2483 state_mem.go:36] "Initialized new in-memory state store" May 9 00:07:29.217763 kubelet[2483]: I0509 00:07:29.216178 2483 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:07:29.217763 kubelet[2483]: I0509 00:07:29.216191 2483 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:07:29.217763 kubelet[2483]: I0509 00:07:29.216211 2483 policy_none.go:49] "None policy: Start" May 9 00:07:29.217763 kubelet[2483]: I0509 00:07:29.216220 2483 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 00:07:29.217763 kubelet[2483]: I0509 00:07:29.216234 2483 state_mem.go:35] "Initializing new in-memory state store" May 9 00:07:29.217763 kubelet[2483]: I0509 00:07:29.216402 2483 state_mem.go:75] "Updated machine memory state" May 9 00:07:29.221907 kubelet[2483]: I0509 00:07:29.221877 2483 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:07:29.222308 kubelet[2483]: I0509 00:07:29.222101 2483 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:07:29.222308 kubelet[2483]: I0509 00:07:29.222122 2483 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:07:29.222415 kubelet[2483]: I0509 00:07:29.222375 2483 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:07:29.224966 kubelet[2483]: E0509 00:07:29.224934 2483 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 9 00:07:29.262513 kubelet[2483]: I0509 00:07:29.262280 2483 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 9 00:07:29.262513 kubelet[2483]: I0509 00:07:29.262361 2483 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 9 00:07:29.262513 kubelet[2483]: I0509 00:07:29.262396 2483 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 9 00:07:29.268728 kubelet[2483]: E0509 00:07:29.268681 2483 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:07:29.268998 kubelet[2483]: E0509 00:07:29.268964 2483 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 9 00:07:29.327865 kubelet[2483]: I0509 00:07:29.327827 2483 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:07:29.334461 kubelet[2483]: I0509 00:07:29.334142 2483 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 9 00:07:29.334461 kubelet[2483]: I0509 00:07:29.334231 2483 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 9 00:07:29.356017 kubelet[2483]: I0509 00:07:29.355925 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 9 00:07:29.356354 kubelet[2483]: I0509 00:07:29.356224 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 9 00:07:29.356354 kubelet[2483]: I0509 00:07:29.356256 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:07:29.356354 kubelet[2483]: I0509 00:07:29.356312 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:07:29.356354 kubelet[2483]: I0509 00:07:29.356334 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:07:29.356672 kubelet[2483]: I0509 00:07:29.356516 2483 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 9 00:07:29.356672 kubelet[2483]: I0509 00:07:29.356543 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 9 00:07:29.356672 kubelet[2483]: I0509 00:07:29.356615 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:07:29.356672 kubelet[2483]: I0509 00:07:29.356638 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:07:29.569348 kubelet[2483]: E0509 00:07:29.569225 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:29.569997 kubelet[2483]: E0509 00:07:29.569719 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:29.569997 kubelet[2483]: E0509 00:07:29.569772 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:30.141725 kubelet[2483]: I0509 00:07:30.141649 2483 apiserver.go:52] "Watching apiserver" May 9 00:07:30.155367 kubelet[2483]: I0509 00:07:30.155330 2483 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:07:30.191657 kubelet[2483]: I0509 00:07:30.191471 2483 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 9 00:07:30.191657 kubelet[2483]: E0509 00:07:30.191540 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:30.191819 kubelet[2483]: I0509 00:07:30.191707 2483 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 9 00:07:30.201685 kubelet[2483]: E0509 00:07:30.201629 2483 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 9 00:07:30.201835 kubelet[2483]: E0509 00:07:30.201808 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:30.201921 kubelet[2483]: E0509 00:07:30.201902 2483 kubelet.go:3202] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:07:30.202004 kubelet[2483]: E0509 00:07:30.201990 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:30.221032 kubelet[2483]: I0509 00:07:30.220966 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.220949865 podStartE2EDuration="1.220949865s" podCreationTimestamp="2025-05-09 00:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:07:30.220859616 +0000 UTC m=+1.148327900" watchObservedRunningTime="2025-05-09 00:07:30.220949865 +0000 UTC m=+1.148418149" May 9 00:07:30.260457 kubelet[2483]: I0509 00:07:30.260397 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.260378983 podStartE2EDuration="2.260378983s" podCreationTimestamp="2025-05-09 00:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:07:30.252568593 +0000 UTC m=+1.180036877" watchObservedRunningTime="2025-05-09 00:07:30.260378983 +0000 UTC m=+1.187847267" May 9 00:07:30.271707 kubelet[2483]: I0509 00:07:30.271644 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.27162707 podStartE2EDuration="2.27162707s" podCreationTimestamp="2025-05-09 00:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:07:30.260794092 +0000 UTC m=+1.188262376" watchObservedRunningTime="2025-05-09 00:07:30.27162707 +0000 UTC m=+1.199095354" May 9 00:07:30.513462 sudo[1575]: pam_unix(sudo:session): session closed for user root May 9 00:07:30.516479 sshd[1574]: Connection closed by 10.0.0.1 port 52872 May 9 00:07:30.516268 sshd-session[1572]: pam_unix(sshd:session): session closed for user core May 9 00:07:30.520109 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:52872.service: Deactivated successfully. May 9 00:07:30.522226 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:07:30.522479 systemd[1]: session-5.scope: Consumed 6.454s CPU time, 159.4M memory peak, 0B memory swap peak. May 9 00:07:30.523239 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. May 9 00:07:30.524063 systemd-logind[1423]: Removed session 5. 
May 9 00:07:31.193804 kubelet[2483]: E0509 00:07:31.192737 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:31.193804 kubelet[2483]: E0509 00:07:31.193274 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:34.197323 kubelet[2483]: E0509 00:07:34.196818 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:34.265082 kubelet[2483]: E0509 00:07:34.263761 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:34.610820 kubelet[2483]: E0509 00:07:34.610590 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:35.199920 kubelet[2483]: E0509 00:07:35.199176 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:35.199920 kubelet[2483]: E0509 00:07:35.199255 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:35.199920 kubelet[2483]: E0509 00:07:35.199753 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:36.201777 kubelet[2483]: E0509 00:07:36.201734 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:36.213457 kubelet[2483]: I0509 00:07:36.213322 2483 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:07:36.214266 containerd[1445]: time="2025-05-09T00:07:36.214157138Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:07:36.214671 kubelet[2483]: I0509 00:07:36.214336 2483 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:07:37.213527 systemd[1]: Created slice kubepods-besteffort-podc68a65e2_2abc_45db_b320_76acbee81ea0.slice - libcontainer container kubepods-besteffort-podc68a65e2_2abc_45db_b320_76acbee81ea0.slice. May 9 00:07:37.237766 systemd[1]: Created slice kubepods-burstable-pod3ed8f069_da1a_4410_9aaa_88556102fdc3.slice - libcontainer container kubepods-burstable-pod3ed8f069_da1a_4410_9aaa_88556102fdc3.slice. 
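[Editor's note] The two "Created slice" entries above show how the systemd cgroup driver (CgroupDriver "systemd" in the NodeConfig earlier) names per-pod slices: the QoS class plus the pod UID with dashes mapped to underscores. A minimal sketch, not part of the log, of that mapping using the UIDs from those entries (the same UIDs reappear in the volume-attach messages that follow):

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming visible in the systemd "Created slice" entries above.
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UIDs taken from the log: kube-proxy-l86d7 (besteffort) and kube-flannel-ds-2vwpp (burstable).
	fmt.Println(sliceName("besteffort", "c68a65e2-2abc-45db-b320-76acbee81ea0"))
	fmt.Println(sliceName("burstable", "3ed8f069-da1a-4410-9aaa-88556102fdc3"))
}
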
May 9 00:07:37.402069 kubelet[2483]: I0509 00:07:37.402025 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chgrg\" (UniqueName: \"kubernetes.io/projected/3ed8f069-da1a-4410-9aaa-88556102fdc3-kube-api-access-chgrg\") pod \"kube-flannel-ds-2vwpp\" (UID: \"3ed8f069-da1a-4410-9aaa-88556102fdc3\") " pod="kube-flannel/kube-flannel-ds-2vwpp" May 9 00:07:37.402069 kubelet[2483]: I0509 00:07:37.402070 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3ed8f069-da1a-4410-9aaa-88556102fdc3-run\") pod \"kube-flannel-ds-2vwpp\" (UID: \"3ed8f069-da1a-4410-9aaa-88556102fdc3\") " pod="kube-flannel/kube-flannel-ds-2vwpp" May 9 00:07:37.402619 kubelet[2483]: I0509 00:07:37.402088 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/3ed8f069-da1a-4410-9aaa-88556102fdc3-cni-plugin\") pod \"kube-flannel-ds-2vwpp\" (UID: \"3ed8f069-da1a-4410-9aaa-88556102fdc3\") " pod="kube-flannel/kube-flannel-ds-2vwpp" May 9 00:07:37.402619 kubelet[2483]: I0509 00:07:37.402109 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c68a65e2-2abc-45db-b320-76acbee81ea0-kube-proxy\") pod \"kube-proxy-l86d7\" (UID: \"c68a65e2-2abc-45db-b320-76acbee81ea0\") " pod="kube-system/kube-proxy-l86d7" May 9 00:07:37.402619 kubelet[2483]: I0509 00:07:37.402127 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ntg6\" (UniqueName: \"kubernetes.io/projected/c68a65e2-2abc-45db-b320-76acbee81ea0-kube-api-access-7ntg6\") pod \"kube-proxy-l86d7\" (UID: \"c68a65e2-2abc-45db-b320-76acbee81ea0\") " pod="kube-system/kube-proxy-l86d7" May 9 00:07:37.402619 kubelet[2483]: I0509 00:07:37.402142 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ed8f069-da1a-4410-9aaa-88556102fdc3-xtables-lock\") pod \"kube-flannel-ds-2vwpp\" (UID: \"3ed8f069-da1a-4410-9aaa-88556102fdc3\") " pod="kube-flannel/kube-flannel-ds-2vwpp" May 9 00:07:37.402619 kubelet[2483]: I0509 00:07:37.402211 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c68a65e2-2abc-45db-b320-76acbee81ea0-lib-modules\") pod \"kube-proxy-l86d7\" (UID: \"c68a65e2-2abc-45db-b320-76acbee81ea0\") " pod="kube-system/kube-proxy-l86d7" May 9 00:07:37.402732 kubelet[2483]: I0509 00:07:37.402253 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/3ed8f069-da1a-4410-9aaa-88556102fdc3-flannel-cfg\") pod \"kube-flannel-ds-2vwpp\" (UID: \"3ed8f069-da1a-4410-9aaa-88556102fdc3\") " pod="kube-flannel/kube-flannel-ds-2vwpp" May 9 00:07:37.402732 kubelet[2483]: I0509 00:07:37.402288 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c68a65e2-2abc-45db-b320-76acbee81ea0-xtables-lock\") pod \"kube-proxy-l86d7\" (UID: \"c68a65e2-2abc-45db-b320-76acbee81ea0\") " pod="kube-system/kube-proxy-l86d7" May 9 00:07:37.402732 kubelet[2483]: I0509 00:07:37.402306 2483 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/3ed8f069-da1a-4410-9aaa-88556102fdc3-cni\") pod \"kube-flannel-ds-2vwpp\" (UID: \"3ed8f069-da1a-4410-9aaa-88556102fdc3\") " pod="kube-flannel/kube-flannel-ds-2vwpp" May 9 00:07:37.534910 kubelet[2483]: E0509 00:07:37.534766 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:37.535560 containerd[1445]: time="2025-05-09T00:07:37.535432279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l86d7,Uid:c68a65e2-2abc-45db-b320-76acbee81ea0,Namespace:kube-system,Attempt:0,}" May 9 00:07:37.541034 kubelet[2483]: E0509 00:07:37.540991 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:37.541694 containerd[1445]: time="2025-05-09T00:07:37.541620487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2vwpp,Uid:3ed8f069-da1a-4410-9aaa-88556102fdc3,Namespace:kube-flannel,Attempt:0,}" May 9 00:07:37.558033 containerd[1445]: time="2025-05-09T00:07:37.557884887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:07:37.558179 containerd[1445]: time="2025-05-09T00:07:37.558020298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:07:37.558179 containerd[1445]: time="2025-05-09T00:07:37.558041746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:37.558380 containerd[1445]: time="2025-05-09T00:07:37.558296562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:37.570036 containerd[1445]: time="2025-05-09T00:07:37.569666280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:07:37.570036 containerd[1445]: time="2025-05-09T00:07:37.569714178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:07:37.570036 containerd[1445]: time="2025-05-09T00:07:37.569745710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:37.570036 containerd[1445]: time="2025-05-09T00:07:37.569835063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:37.577738 systemd[1]: Started cri-containerd-eda065e86a4261a2f82ef71257d73903ddacf41d92eac74433c7c99378b35690.scope - libcontainer container eda065e86a4261a2f82ef71257d73903ddacf41d92eac74433c7c99378b35690. May 9 00:07:37.583456 systemd[1]: Started cri-containerd-f1200704e9d6efb903eac7f4194da78dab28fc240a8fce85065d7c23380cb557.scope - libcontainer container f1200704e9d6efb903eac7f4194da78dab28fc240a8fce85065d7c23380cb557. 
May 9 00:07:37.600569 containerd[1445]: time="2025-05-09T00:07:37.600137586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l86d7,Uid:c68a65e2-2abc-45db-b320-76acbee81ea0,Namespace:kube-system,Attempt:0,} returns sandbox id \"eda065e86a4261a2f82ef71257d73903ddacf41d92eac74433c7c99378b35690\"" May 9 00:07:37.601736 kubelet[2483]: E0509 00:07:37.600813 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:37.602621 containerd[1445]: time="2025-05-09T00:07:37.602528365Z" level=info msg="CreateContainer within sandbox \"eda065e86a4261a2f82ef71257d73903ddacf41d92eac74433c7c99378b35690\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:07:37.617801 containerd[1445]: time="2025-05-09T00:07:37.616966958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2vwpp,Uid:3ed8f069-da1a-4410-9aaa-88556102fdc3,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f1200704e9d6efb903eac7f4194da78dab28fc240a8fce85065d7c23380cb557\"" May 9 00:07:37.617801 containerd[1445]: time="2025-05-09T00:07:37.617052990Z" level=info msg="CreateContainer within sandbox \"eda065e86a4261a2f82ef71257d73903ddacf41d92eac74433c7c99378b35690\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5638ab9cd977f65e0fead9ee00faa06005583053b1289662a00db77c160aac4b\"" May 9 00:07:37.617935 kubelet[2483]: E0509 00:07:37.617494 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:37.619175 containerd[1445]: time="2025-05-09T00:07:37.618930297Z" level=info msg="StartContainer for \"5638ab9cd977f65e0fead9ee00faa06005583053b1289662a00db77c160aac4b\"" May 9 00:07:37.619615 containerd[1445]: time="2025-05-09T00:07:37.619527561Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 9 00:07:37.640783 systemd[1]: Started cri-containerd-5638ab9cd977f65e0fead9ee00faa06005583053b1289662a00db77c160aac4b.scope - libcontainer container 5638ab9cd977f65e0fead9ee00faa06005583053b1289662a00db77c160aac4b. May 9 00:07:37.672061 containerd[1445]: time="2025-05-09T00:07:37.672014791Z" level=info msg="StartContainer for \"5638ab9cd977f65e0fead9ee00faa06005583053b1289662a00db77c160aac4b\" returns successfully" May 9 00:07:38.210179 kubelet[2483]: E0509 00:07:38.210152 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:38.696530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1458130592.mount: Deactivated successfully. 
May 9 00:07:38.725154 containerd[1445]: time="2025-05-09T00:07:38.725094014Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:38.725921 containerd[1445]: time="2025-05-09T00:07:38.725752249Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" May 9 00:07:38.726556 containerd[1445]: time="2025-05-09T00:07:38.726422768Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:38.728871 containerd[1445]: time="2025-05-09T00:07:38.728843553Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:38.729872 containerd[1445]: time="2025-05-09T00:07:38.729762481Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.110205148s" May 9 00:07:38.729872 containerd[1445]: time="2025-05-09T00:07:38.729794252Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 9 00:07:38.731682 containerd[1445]: time="2025-05-09T00:07:38.731639111Z" level=info msg="CreateContainer within sandbox \"f1200704e9d6efb903eac7f4194da78dab28fc240a8fce85065d7c23380cb557\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 9 00:07:38.742826 containerd[1445]: time="2025-05-09T00:07:38.742747597Z" level=info msg="CreateContainer within sandbox \"f1200704e9d6efb903eac7f4194da78dab28fc240a8fce85065d7c23380cb557\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"5ec6fdcabee385028ce9ccdef0cf24d7639359809f16a8c1873124144f35cbc0\"" May 9 00:07:38.743618 containerd[1445]: time="2025-05-09T00:07:38.743453649Z" level=info msg="StartContainer for \"5ec6fdcabee385028ce9ccdef0cf24d7639359809f16a8c1873124144f35cbc0\"" May 9 00:07:38.771772 systemd[1]: Started cri-containerd-5ec6fdcabee385028ce9ccdef0cf24d7639359809f16a8c1873124144f35cbc0.scope - libcontainer container 5ec6fdcabee385028ce9ccdef0cf24d7639359809f16a8c1873124144f35cbc0. May 9 00:07:38.798172 containerd[1445]: time="2025-05-09T00:07:38.797889286Z" level=info msg="StartContainer for \"5ec6fdcabee385028ce9ccdef0cf24d7639359809f16a8c1873124144f35cbc0\" returns successfully" May 9 00:07:38.808092 systemd[1]: cri-containerd-5ec6fdcabee385028ce9ccdef0cf24d7639359809f16a8c1873124144f35cbc0.scope: Deactivated successfully. 
May 9 00:07:38.854700 containerd[1445]: time="2025-05-09T00:07:38.854590572Z" level=info msg="shim disconnected" id=5ec6fdcabee385028ce9ccdef0cf24d7639359809f16a8c1873124144f35cbc0 namespace=k8s.io May 9 00:07:38.854882 containerd[1445]: time="2025-05-09T00:07:38.854708454Z" level=warning msg="cleaning up after shim disconnected" id=5ec6fdcabee385028ce9ccdef0cf24d7639359809f16a8c1873124144f35cbc0 namespace=k8s.io May 9 00:07:38.854882 containerd[1445]: time="2025-05-09T00:07:38.854720378Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:07:39.212988 kubelet[2483]: E0509 00:07:39.212955 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:39.214106 containerd[1445]: time="2025-05-09T00:07:39.214030400Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 9 00:07:39.224388 kubelet[2483]: I0509 00:07:39.224091 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l86d7" podStartSLOduration=2.224073325 podStartE2EDuration="2.224073325s" podCreationTimestamp="2025-05-09 00:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:07:38.21894585 +0000 UTC m=+9.146414134" watchObservedRunningTime="2025-05-09 00:07:39.224073325 +0000 UTC m=+10.151541569" May 9 00:07:40.146989 update_engine[1426]: I20250509 00:07:40.146916 1426 update_attempter.cc:509] Updating boot flags... May 9 00:07:40.165634 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2874) May 9 00:07:40.193691 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2877) May 9 00:07:40.563752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131986758.mount: Deactivated successfully. 
May 9 00:07:41.170058 containerd[1445]: time="2025-05-09T00:07:41.170012145Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:41.170891 containerd[1445]: time="2025-05-09T00:07:41.170432474Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" May 9 00:07:41.171896 containerd[1445]: time="2025-05-09T00:07:41.171743595Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:41.176956 containerd[1445]: time="2025-05-09T00:07:41.175553162Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:07:41.176956 containerd[1445]: time="2025-05-09T00:07:41.176712798Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.962638342s" May 9 00:07:41.176956 containerd[1445]: time="2025-05-09T00:07:41.176740126Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 9 00:07:41.184959 containerd[1445]: time="2025-05-09T00:07:41.184913470Z" level=info msg="CreateContainer within sandbox \"f1200704e9d6efb903eac7f4194da78dab28fc240a8fce85065d7c23380cb557\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 00:07:41.192709 containerd[1445]: time="2025-05-09T00:07:41.192568895Z" level=info msg="CreateContainer within sandbox \"f1200704e9d6efb903eac7f4194da78dab28fc240a8fce85065d7c23380cb557\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b1ac0bc4e47e61dcca5ffbf0b821cc68d7a6e92bc7bcfc08a6ce45306550a246\"" May 9 00:07:41.194028 containerd[1445]: time="2025-05-09T00:07:41.193197127Z" level=info msg="StartContainer for \"b1ac0bc4e47e61dcca5ffbf0b821cc68d7a6e92bc7bcfc08a6ce45306550a246\"" May 9 00:07:41.217736 systemd[1]: Started cri-containerd-b1ac0bc4e47e61dcca5ffbf0b821cc68d7a6e92bc7bcfc08a6ce45306550a246.scope - libcontainer container b1ac0bc4e47e61dcca5ffbf0b821cc68d7a6e92bc7bcfc08a6ce45306550a246. May 9 00:07:41.242830 containerd[1445]: time="2025-05-09T00:07:41.242731140Z" level=info msg="StartContainer for \"b1ac0bc4e47e61dcca5ffbf0b821cc68d7a6e92bc7bcfc08a6ce45306550a246\" returns successfully" May 9 00:07:41.254996 systemd[1]: cri-containerd-b1ac0bc4e47e61dcca5ffbf0b821cc68d7a6e92bc7bcfc08a6ce45306550a246.scope: Deactivated successfully. 
May 9 00:07:41.326140 kubelet[2483]: I0509 00:07:41.326107 2483 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 9 00:07:41.372345 containerd[1445]: time="2025-05-09T00:07:41.372210042Z" level=info msg="shim disconnected" id=b1ac0bc4e47e61dcca5ffbf0b821cc68d7a6e92bc7bcfc08a6ce45306550a246 namespace=k8s.io May 9 00:07:41.372345 containerd[1445]: time="2025-05-09T00:07:41.372334680Z" level=warning msg="cleaning up after shim disconnected" id=b1ac0bc4e47e61dcca5ffbf0b821cc68d7a6e92bc7bcfc08a6ce45306550a246 namespace=k8s.io May 9 00:07:41.372345 containerd[1445]: time="2025-05-09T00:07:41.372346924Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:07:41.373256 systemd[1]: Created slice kubepods-burstable-poda0b080ed_2beb_4e8d_8bab_88e34026ae75.slice - libcontainer container kubepods-burstable-poda0b080ed_2beb_4e8d_8bab_88e34026ae75.slice. May 9 00:07:41.380452 systemd[1]: Created slice kubepods-burstable-podaeb8546f_6261_47ea_8c9b_5e81ebb70e69.slice - libcontainer container kubepods-burstable-podaeb8546f_6261_47ea_8c9b_5e81ebb70e69.slice. May 9 00:07:41.534523 kubelet[2483]: I0509 00:07:41.534390 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeb8546f-6261-47ea-8c9b-5e81ebb70e69-config-volume\") pod \"coredns-668d6bf9bc-nh9vp\" (UID: \"aeb8546f-6261-47ea-8c9b-5e81ebb70e69\") " pod="kube-system/coredns-668d6bf9bc-nh9vp" May 9 00:07:41.534523 kubelet[2483]: I0509 00:07:41.534437 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76r6\" (UniqueName: \"kubernetes.io/projected/aeb8546f-6261-47ea-8c9b-5e81ebb70e69-kube-api-access-x76r6\") pod \"coredns-668d6bf9bc-nh9vp\" (UID: \"aeb8546f-6261-47ea-8c9b-5e81ebb70e69\") " pod="kube-system/coredns-668d6bf9bc-nh9vp" May 9 00:07:41.534523 kubelet[2483]: I0509 00:07:41.534463 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0b080ed-2beb-4e8d-8bab-88e34026ae75-config-volume\") pod \"coredns-668d6bf9bc-5gc5p\" (UID: \"a0b080ed-2beb-4e8d-8bab-88e34026ae75\") " pod="kube-system/coredns-668d6bf9bc-5gc5p" May 9 00:07:41.534523 kubelet[2483]: I0509 00:07:41.534482 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjnxp\" (UniqueName: \"kubernetes.io/projected/a0b080ed-2beb-4e8d-8bab-88e34026ae75-kube-api-access-pjnxp\") pod \"coredns-668d6bf9bc-5gc5p\" (UID: \"a0b080ed-2beb-4e8d-8bab-88e34026ae75\") " pod="kube-system/coredns-668d6bf9bc-5gc5p" May 9 00:07:41.563650 systemd[1]: run-containerd-runc-k8s.io-b1ac0bc4e47e61dcca5ffbf0b821cc68d7a6e92bc7bcfc08a6ce45306550a246-runc.Jqyyn9.mount: Deactivated successfully. May 9 00:07:41.563745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1ac0bc4e47e61dcca5ffbf0b821cc68d7a6e92bc7bcfc08a6ce45306550a246-rootfs.mount: Deactivated successfully. 
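The RunPodSandbox failures that follow are expected at this stage of boot: the flannel CNI plugin reads /run/flannel/subnet.env, and that file only exists once the kube-flannel daemon (started a few lines later from the same sandbox) has obtained the node's pod subnet and written it out. A typical subnet.env, with values inferred from the delegate netconf printed further down in this log, would look roughly like the sketch below; these exact values are an assumption for illustration, since only the 192.168.0.0/24 node subnet, the 192.168.0.0/17 route and the 1450 MTU are actually visible in the log, and the IPMASQ setting cannot be read off it at all.

FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true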
May 9 00:07:41.677176 kubelet[2483]: E0509 00:07:41.677134 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:41.677742 containerd[1445]: time="2025-05-09T00:07:41.677694858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5gc5p,Uid:a0b080ed-2beb-4e8d-8bab-88e34026ae75,Namespace:kube-system,Attempt:0,}" May 9 00:07:41.686286 kubelet[2483]: E0509 00:07:41.685468 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:41.686375 containerd[1445]: time="2025-05-09T00:07:41.685992319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nh9vp,Uid:aeb8546f-6261-47ea-8c9b-5e81ebb70e69,Namespace:kube-system,Attempt:0,}" May 9 00:07:41.765440 containerd[1445]: time="2025-05-09T00:07:41.765269043Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nh9vp,Uid:aeb8546f-6261-47ea-8c9b-5e81ebb70e69,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56686782dbb40c2dfe8f63d393cc16b62b80a8e5767f29623476dbe072083e59\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:07:41.765573 kubelet[2483]: E0509 00:07:41.765531 2483 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56686782dbb40c2dfe8f63d393cc16b62b80a8e5767f29623476dbe072083e59\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:07:41.765854 kubelet[2483]: E0509 00:07:41.765623 2483 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56686782dbb40c2dfe8f63d393cc16b62b80a8e5767f29623476dbe072083e59\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-nh9vp" May 9 00:07:41.765854 kubelet[2483]: E0509 00:07:41.765650 2483 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56686782dbb40c2dfe8f63d393cc16b62b80a8e5767f29623476dbe072083e59\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-nh9vp" May 9 00:07:41.765854 kubelet[2483]: E0509 00:07:41.765696 2483 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nh9vp_kube-system(aeb8546f-6261-47ea-8c9b-5e81ebb70e69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nh9vp_kube-system(aeb8546f-6261-47ea-8c9b-5e81ebb70e69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56686782dbb40c2dfe8f63d393cc16b62b80a8e5767f29623476dbe072083e59\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-nh9vp" podUID="aeb8546f-6261-47ea-8c9b-5e81ebb70e69" May 9 00:07:41.766187 containerd[1445]: time="2025-05-09T00:07:41.766084093Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5gc5p,Uid:a0b080ed-2beb-4e8d-8bab-88e34026ae75,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7aa13c9b682d08deb2260319d629b0d7bfa92f72207926287782904ffbef00e2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:07:41.766277 kubelet[2483]: E0509 00:07:41.766246 2483 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aa13c9b682d08deb2260319d629b0d7bfa92f72207926287782904ffbef00e2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:07:41.766315 kubelet[2483]: E0509 00:07:41.766289 2483 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aa13c9b682d08deb2260319d629b0d7bfa92f72207926287782904ffbef00e2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-5gc5p" May 9 00:07:41.766315 kubelet[2483]: E0509 00:07:41.766305 2483 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aa13c9b682d08deb2260319d629b0d7bfa92f72207926287782904ffbef00e2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-5gc5p" May 9 00:07:41.766370 kubelet[2483]: E0509 00:07:41.766330 2483 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5gc5p_kube-system(a0b080ed-2beb-4e8d-8bab-88e34026ae75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5gc5p_kube-system(a0b080ed-2beb-4e8d-8bab-88e34026ae75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7aa13c9b682d08deb2260319d629b0d7bfa92f72207926287782904ffbef00e2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-5gc5p" podUID="a0b080ed-2beb-4e8d-8bab-88e34026ae75" May 9 00:07:42.219499 kubelet[2483]: E0509 00:07:42.219456 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:42.222418 containerd[1445]: time="2025-05-09T00:07:42.222307924Z" level=info msg="CreateContainer within sandbox \"f1200704e9d6efb903eac7f4194da78dab28fc240a8fce85065d7c23380cb557\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 9 00:07:42.235242 containerd[1445]: time="2025-05-09T00:07:42.235193680Z" level=info msg="CreateContainer within sandbox \"f1200704e9d6efb903eac7f4194da78dab28fc240a8fce85065d7c23380cb557\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"3a92584d944420b6108a7313de5464c2eda107f3af69d30a328f017dd827d329\"" May 9 00:07:42.235694 containerd[1445]: time="2025-05-09T00:07:42.235669099Z" level=info msg="StartContainer for \"3a92584d944420b6108a7313de5464c2eda107f3af69d30a328f017dd827d329\"" May 9 00:07:42.261757 systemd[1]: Started cri-containerd-3a92584d944420b6108a7313de5464c2eda107f3af69d30a328f017dd827d329.scope - libcontainer container 
3a92584d944420b6108a7313de5464c2eda107f3af69d30a328f017dd827d329. May 9 00:07:42.286073 containerd[1445]: time="2025-05-09T00:07:42.286018014Z" level=info msg="StartContainer for \"3a92584d944420b6108a7313de5464c2eda107f3af69d30a328f017dd827d329\" returns successfully" May 9 00:07:42.564538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56686782dbb40c2dfe8f63d393cc16b62b80a8e5767f29623476dbe072083e59-shm.mount: Deactivated successfully. May 9 00:07:42.564661 systemd[1]: run-netns-cni\x2d92d598bd\x2da831\x2d247b\x2d687b\x2df2c080675bdf.mount: Deactivated successfully. May 9 00:07:42.564724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7aa13c9b682d08deb2260319d629b0d7bfa92f72207926287782904ffbef00e2-shm.mount: Deactivated successfully. May 9 00:07:43.228716 kubelet[2483]: E0509 00:07:43.228632 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:43.239090 kubelet[2483]: I0509 00:07:43.239024 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-2vwpp" podStartSLOduration=2.680123663 podStartE2EDuration="6.239006597s" podCreationTimestamp="2025-05-09 00:07:37 +0000 UTC" firstStartedPulling="2025-05-09 00:07:37.618708853 +0000 UTC m=+8.546177097" lastFinishedPulling="2025-05-09 00:07:41.177591787 +0000 UTC m=+12.105060031" observedRunningTime="2025-05-09 00:07:43.238868998 +0000 UTC m=+14.166337282" watchObservedRunningTime="2025-05-09 00:07:43.239006597 +0000 UTC m=+14.166474881" May 9 00:07:43.360811 systemd-networkd[1381]: flannel.1: Link UP May 9 00:07:43.360820 systemd-networkd[1381]: flannel.1: Gained carrier May 9 00:07:44.230674 kubelet[2483]: E0509 00:07:44.230337 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:44.943809 systemd-networkd[1381]: flannel.1: Gained IPv6LL May 9 00:07:53.162832 kubelet[2483]: E0509 00:07:53.162789 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:53.163325 containerd[1445]: time="2025-05-09T00:07:53.163178811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5gc5p,Uid:a0b080ed-2beb-4e8d-8bab-88e34026ae75,Namespace:kube-system,Attempt:0,}" May 9 00:07:53.193618 systemd-networkd[1381]: cni0: Link UP May 9 00:07:53.193627 systemd-networkd[1381]: cni0: Gained carrier May 9 00:07:53.196133 systemd-networkd[1381]: cni0: Lost carrier May 9 00:07:53.199316 systemd-networkd[1381]: veth9ab1a5b7: Link UP May 9 00:07:53.203624 kernel: cni0: port 1(veth9ab1a5b7) entered blocking state May 9 00:07:53.203704 kernel: cni0: port 1(veth9ab1a5b7) entered disabled state May 9 00:07:53.203723 kernel: veth9ab1a5b7: entered allmulticast mode May 9 00:07:53.203740 kernel: veth9ab1a5b7: entered promiscuous mode May 9 00:07:53.203756 kernel: cni0: port 1(veth9ab1a5b7) entered blocking state May 9 00:07:53.203780 kernel: cni0: port 1(veth9ab1a5b7) entered forwarding state May 9 00:07:53.203797 kernel: cni0: port 1(veth9ab1a5b7) entered disabled state May 9 00:07:53.214880 kernel: cni0: port 1(veth9ab1a5b7) entered blocking state May 9 00:07:53.215017 kernel: cni0: port 1(veth9ab1a5b7) entered forwarding state May 9 00:07:53.214928 systemd-networkd[1381]: 
veth9ab1a5b7: Gained carrier May 9 00:07:53.215139 systemd-networkd[1381]: cni0: Gained carrier May 9 00:07:53.217710 containerd[1445]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} May 9 00:07:53.217710 containerd[1445]: delegateAdd: netconf sent to delegate plugin: May 9 00:07:53.238234 containerd[1445]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-09T00:07:53.238128573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:07:53.238234 containerd[1445]: time="2025-05-09T00:07:53.238192664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:07:53.238565 containerd[1445]: time="2025-05-09T00:07:53.238306884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:53.238565 containerd[1445]: time="2025-05-09T00:07:53.238407422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:53.252865 systemd[1]: Started cri-containerd-b3d552094d41660804f5f73934284d5b9c481716aee356badc6ef345aefe785d.scope - libcontainer container b3d552094d41660804f5f73934284d5b9c481716aee356badc6ef345aefe785d. 
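The two containerd lines above show how the flannel CNI plugin handles each sandbox: it derives a netconf for the bridge plugin (bridge cbr0, host-local IPAM scoped to this node's /24, a route covering the wider /17 pod network, and MTU 1450 to leave room for the VXLAN header used by the flannel.1 device seen earlier) and hands that JSON to the delegate. The short Python sketch below simply parses the delegate JSON exactly as it appears in the log and pulls out those fields; it is a reading aid for this log entry, not part of flannel or containerd.

import json

# Delegate netconf copied verbatim from the containerd log line above.
netconf = json.loads(
    '{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,'
    '"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],'
    '"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},'
    '"isDefaultGateway":true,"isGateway":true,"mtu":1450,'
    '"name":"cbr0","type":"bridge"}'
)

print("bridge:", netconf["name"])                                     # cbr0
print("node pod subnet:", netconf["ipam"]["ranges"][0][0]["subnet"])  # 192.168.0.0/24
print("cluster route:", netconf["ipam"]["routes"][0]["dst"])          # 192.168.0.0/17
print("mtu:", netconf["mtu"])                                         # 1450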
May 9 00:07:53.264742 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:07:53.281388 containerd[1445]: time="2025-05-09T00:07:53.281353518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5gc5p,Uid:a0b080ed-2beb-4e8d-8bab-88e34026ae75,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3d552094d41660804f5f73934284d5b9c481716aee356badc6ef345aefe785d\"" May 9 00:07:53.282158 kubelet[2483]: E0509 00:07:53.282129 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:53.284118 containerd[1445]: time="2025-05-09T00:07:53.284089606Z" level=info msg="CreateContainer within sandbox \"b3d552094d41660804f5f73934284d5b9c481716aee356badc6ef345aefe785d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:07:53.304841 containerd[1445]: time="2025-05-09T00:07:53.304796697Z" level=info msg="CreateContainer within sandbox \"b3d552094d41660804f5f73934284d5b9c481716aee356badc6ef345aefe785d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"308ade66592e7334906a35c8d8b71ae38bbad35dcebb6430bc5ca828cf2ed4a2\"" May 9 00:07:53.305482 containerd[1445]: time="2025-05-09T00:07:53.305457735Z" level=info msg="StartContainer for \"308ade66592e7334906a35c8d8b71ae38bbad35dcebb6430bc5ca828cf2ed4a2\"" May 9 00:07:53.333751 systemd[1]: Started cri-containerd-308ade66592e7334906a35c8d8b71ae38bbad35dcebb6430bc5ca828cf2ed4a2.scope - libcontainer container 308ade66592e7334906a35c8d8b71ae38bbad35dcebb6430bc5ca828cf2ed4a2. May 9 00:07:53.360359 containerd[1445]: time="2025-05-09T00:07:53.360303472Z" level=info msg="StartContainer for \"308ade66592e7334906a35c8d8b71ae38bbad35dcebb6430bc5ca828cf2ed4a2\" returns successfully" May 9 00:07:54.140346 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:54792.service - OpenSSH per-connection server daemon (10.0.0.1:54792). May 9 00:07:54.187011 sshd[3318]: Accepted publickey for core from 10.0.0.1 port 54792 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:07:54.189309 sshd-session[3318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:07:54.193153 systemd-logind[1423]: New session 6 of user core. May 9 00:07:54.203743 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:07:54.256299 kubelet[2483]: E0509 00:07:54.256085 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:54.281584 kubelet[2483]: I0509 00:07:54.281512 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5gc5p" podStartSLOduration=17.281493227 podStartE2EDuration="17.281493227s" podCreationTimestamp="2025-05-09 00:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:07:54.270576755 +0000 UTC m=+25.198045039" watchObservedRunningTime="2025-05-09 00:07:54.281493227 +0000 UTC m=+25.208961511" May 9 00:07:54.347090 sshd[3320]: Connection closed by 10.0.0.1 port 54792 May 9 00:07:54.347449 sshd-session[3318]: pam_unix(sshd:session): session closed for user core May 9 00:07:54.351202 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:54792.service: Deactivated successfully. 
May 9 00:07:54.352844 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:07:54.353390 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. May 9 00:07:54.354173 systemd-logind[1423]: Removed session 6. May 9 00:07:54.415727 systemd-networkd[1381]: veth9ab1a5b7: Gained IPv6LL May 9 00:07:55.055732 systemd-networkd[1381]: cni0: Gained IPv6LL May 9 00:07:55.257756 kubelet[2483]: E0509 00:07:55.257726 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:56.162326 kubelet[2483]: E0509 00:07:56.162296 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:56.163183 containerd[1445]: time="2025-05-09T00:07:56.162779940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nh9vp,Uid:aeb8546f-6261-47ea-8c9b-5e81ebb70e69,Namespace:kube-system,Attempt:0,}" May 9 00:07:56.184090 systemd-networkd[1381]: vetha73f25e3: Link UP May 9 00:07:56.185713 kernel: cni0: port 2(vetha73f25e3) entered blocking state May 9 00:07:56.186022 kernel: cni0: port 2(vetha73f25e3) entered disabled state May 9 00:07:56.186410 kernel: vetha73f25e3: entered allmulticast mode May 9 00:07:56.187612 kernel: vetha73f25e3: entered promiscuous mode May 9 00:07:56.193925 kernel: cni0: port 2(vetha73f25e3) entered blocking state May 9 00:07:56.193989 kernel: cni0: port 2(vetha73f25e3) entered forwarding state May 9 00:07:56.193947 systemd-networkd[1381]: vetha73f25e3: Gained carrier May 9 00:07:56.199342 containerd[1445]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} May 9 00:07:56.199342 containerd[1445]: delegateAdd: netconf sent to delegate plugin: May 9 00:07:56.222737 containerd[1445]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-09T00:07:56.222644658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:07:56.222737 containerd[1445]: time="2025-05-09T00:07:56.222703108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:07:56.223078 containerd[1445]: time="2025-05-09T00:07:56.222939185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:56.223133 containerd[1445]: time="2025-05-09T00:07:56.223054844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:07:56.246778 systemd[1]: Started cri-containerd-be9b7f75db80119fa0ef47ed01fd9598f04da5fda8ad387c92880fd5ed103cb2.scope - libcontainer container be9b7f75db80119fa0ef47ed01fd9598f04da5fda8ad387c92880fd5ed103cb2. May 9 00:07:56.256907 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:07:56.259002 kubelet[2483]: E0509 00:07:56.258971 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:56.275802 containerd[1445]: time="2025-05-09T00:07:56.275766184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nh9vp,Uid:aeb8546f-6261-47ea-8c9b-5e81ebb70e69,Namespace:kube-system,Attempt:0,} returns sandbox id \"be9b7f75db80119fa0ef47ed01fd9598f04da5fda8ad387c92880fd5ed103cb2\"" May 9 00:07:56.276389 kubelet[2483]: E0509 00:07:56.276368 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:56.278357 containerd[1445]: time="2025-05-09T00:07:56.278326351Z" level=info msg="CreateContainer within sandbox \"be9b7f75db80119fa0ef47ed01fd9598f04da5fda8ad387c92880fd5ed103cb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:07:56.293139 containerd[1445]: time="2025-05-09T00:07:56.293095659Z" level=info msg="CreateContainer within sandbox \"be9b7f75db80119fa0ef47ed01fd9598f04da5fda8ad387c92880fd5ed103cb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71892d96f0d42f563e1500d3e52e7d637833aaedc954305fa00400b13749310f\"" May 9 00:07:56.293574 containerd[1445]: time="2025-05-09T00:07:56.293513646Z" level=info msg="StartContainer for \"71892d96f0d42f563e1500d3e52e7d637833aaedc954305fa00400b13749310f\"" May 9 00:07:56.321840 systemd[1]: Started cri-containerd-71892d96f0d42f563e1500d3e52e7d637833aaedc954305fa00400b13749310f.scope - libcontainer container 71892d96f0d42f563e1500d3e52e7d637833aaedc954305fa00400b13749310f. May 9 00:07:56.344268 containerd[1445]: time="2025-05-09T00:07:56.344226629Z" level=info msg="StartContainer for \"71892d96f0d42f563e1500d3e52e7d637833aaedc954305fa00400b13749310f\" returns successfully" May 9 00:07:57.186948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603936411.mount: Deactivated successfully. 
May 9 00:07:57.262753 kubelet[2483]: E0509 00:07:57.262705 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:57.281356 kubelet[2483]: I0509 00:07:57.281223 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nh9vp" podStartSLOduration=20.281203611 podStartE2EDuration="20.281203611s" podCreationTimestamp="2025-05-09 00:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:07:57.27180353 +0000 UTC m=+28.199271814" watchObservedRunningTime="2025-05-09 00:07:57.281203611 +0000 UTC m=+28.208671895" May 9 00:07:57.423785 systemd-networkd[1381]: vetha73f25e3: Gained IPv6LL May 9 00:07:58.263458 kubelet[2483]: E0509 00:07:58.263412 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:07:59.374344 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:54794.service - OpenSSH per-connection server daemon (10.0.0.1:54794). May 9 00:07:59.430313 sshd[3484]: Accepted publickey for core from 10.0.0.1 port 54794 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:07:59.435116 sshd-session[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:07:59.441182 systemd-logind[1423]: New session 7 of user core. May 9 00:07:59.453246 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:07:59.586448 sshd[3486]: Connection closed by 10.0.0.1 port 54794 May 9 00:07:59.587017 sshd-session[3484]: pam_unix(sshd:session): session closed for user core May 9 00:07:59.591547 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:54794.service: Deactivated successfully. May 9 00:07:59.594528 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:07:59.596697 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. May 9 00:07:59.599168 systemd-logind[1423]: Removed session 7. May 9 00:08:04.597430 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:57582.service - OpenSSH per-connection server daemon (10.0.0.1:57582). May 9 00:08:04.672751 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 57582 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:04.673881 sshd-session[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:04.677569 systemd-logind[1423]: New session 8 of user core. May 9 00:08:04.691818 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 00:08:04.799427 sshd[3522]: Connection closed by 10.0.0.1 port 57582 May 9 00:08:04.800797 sshd-session[3520]: pam_unix(sshd:session): session closed for user core May 9 00:08:04.812032 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:57582.service: Deactivated successfully. May 9 00:08:04.813708 systemd[1]: session-8.scope: Deactivated successfully. May 9 00:08:04.815778 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit. May 9 00:08:04.828909 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:57584.service - OpenSSH per-connection server daemon (10.0.0.1:57584). May 9 00:08:04.830450 systemd-logind[1423]: Removed session 8. 
May 9 00:08:04.869658 sshd[3536]: Accepted publickey for core from 10.0.0.1 port 57584 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:04.870762 sshd-session[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:04.874488 systemd-logind[1423]: New session 9 of user core. May 9 00:08:04.887799 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 00:08:05.030957 sshd[3538]: Connection closed by 10.0.0.1 port 57584 May 9 00:08:05.032715 sshd-session[3536]: pam_unix(sshd:session): session closed for user core May 9 00:08:05.041810 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:57584.service: Deactivated successfully. May 9 00:08:05.046701 systemd[1]: session-9.scope: Deactivated successfully. May 9 00:08:05.048126 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit. May 9 00:08:05.057782 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:57594.service - OpenSSH per-connection server daemon (10.0.0.1:57594). May 9 00:08:05.060778 systemd-logind[1423]: Removed session 9. May 9 00:08:05.101301 sshd[3548]: Accepted publickey for core from 10.0.0.1 port 57594 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:05.102040 sshd-session[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:05.105401 systemd-logind[1423]: New session 10 of user core. May 9 00:08:05.113743 systemd[1]: Started session-10.scope - Session 10 of User core. May 9 00:08:05.225653 sshd[3550]: Connection closed by 10.0.0.1 port 57594 May 9 00:08:05.225476 sshd-session[3548]: pam_unix(sshd:session): session closed for user core May 9 00:08:05.229845 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:57594.service: Deactivated successfully. May 9 00:08:05.231949 systemd[1]: session-10.scope: Deactivated successfully. May 9 00:08:05.232673 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit. May 9 00:08:05.233475 systemd-logind[1423]: Removed session 10. May 9 00:08:10.238225 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:57610.service - OpenSSH per-connection server daemon (10.0.0.1:57610). May 9 00:08:10.292734 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 57610 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:10.294013 sshd-session[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:10.297854 systemd-logind[1423]: New session 11 of user core. May 9 00:08:10.309800 systemd[1]: Started session-11.scope - Session 11 of User core. May 9 00:08:10.425867 sshd[3588]: Connection closed by 10.0.0.1 port 57610 May 9 00:08:10.427583 sshd-session[3586]: pam_unix(sshd:session): session closed for user core May 9 00:08:10.437221 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:57610.service: Deactivated successfully. May 9 00:08:10.439813 systemd[1]: session-11.scope: Deactivated successfully. May 9 00:08:10.441427 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit. May 9 00:08:10.442342 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:57624.service - OpenSSH per-connection server daemon (10.0.0.1:57624). May 9 00:08:10.443329 systemd-logind[1423]: Removed session 11. 
May 9 00:08:10.502504 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 57624 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:10.504483 sshd-session[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:10.508560 systemd-logind[1423]: New session 12 of user core. May 9 00:08:10.514807 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 00:08:10.736003 sshd[3602]: Connection closed by 10.0.0.1 port 57624 May 9 00:08:10.736938 sshd-session[3600]: pam_unix(sshd:session): session closed for user core May 9 00:08:10.754390 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:57624.service: Deactivated successfully. May 9 00:08:10.756507 systemd[1]: session-12.scope: Deactivated successfully. May 9 00:08:10.761037 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit. May 9 00:08:10.762932 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:57632.service - OpenSSH per-connection server daemon (10.0.0.1:57632). May 9 00:08:10.765226 systemd-logind[1423]: Removed session 12. May 9 00:08:10.809311 sshd[3613]: Accepted publickey for core from 10.0.0.1 port 57632 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:10.810822 sshd-session[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:10.815018 systemd-logind[1423]: New session 13 of user core. May 9 00:08:10.824859 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 00:08:11.594406 sshd[3615]: Connection closed by 10.0.0.1 port 57632 May 9 00:08:11.596367 sshd-session[3613]: pam_unix(sshd:session): session closed for user core May 9 00:08:11.613287 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:57632.service: Deactivated successfully. May 9 00:08:11.618423 systemd[1]: session-13.scope: Deactivated successfully. May 9 00:08:11.622473 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit. May 9 00:08:11.633347 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:57638.service - OpenSSH per-connection server daemon (10.0.0.1:57638). May 9 00:08:11.638538 systemd-logind[1423]: Removed session 13. May 9 00:08:11.681592 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 57638 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:11.682995 sshd-session[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:11.687492 systemd-logind[1423]: New session 14 of user core. May 9 00:08:11.698787 systemd[1]: Started session-14.scope - Session 14 of User core. May 9 00:08:11.906652 sshd[3635]: Connection closed by 10.0.0.1 port 57638 May 9 00:08:11.906924 sshd-session[3633]: pam_unix(sshd:session): session closed for user core May 9 00:08:11.915415 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:57638.service: Deactivated successfully. May 9 00:08:11.917155 systemd[1]: session-14.scope: Deactivated successfully. May 9 00:08:11.919056 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit. May 9 00:08:11.936904 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:57640.service - OpenSSH per-connection server daemon (10.0.0.1:57640). May 9 00:08:11.937819 systemd-logind[1423]: Removed session 14. 
May 9 00:08:11.977284 sshd[3645]: Accepted publickey for core from 10.0.0.1 port 57640 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:11.978554 sshd-session[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:11.982249 systemd-logind[1423]: New session 15 of user core. May 9 00:08:11.990786 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 00:08:12.096621 sshd[3647]: Connection closed by 10.0.0.1 port 57640 May 9 00:08:12.096956 sshd-session[3645]: pam_unix(sshd:session): session closed for user core May 9 00:08:12.100557 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:57640.service: Deactivated successfully. May 9 00:08:12.103311 systemd[1]: session-15.scope: Deactivated successfully. May 9 00:08:12.104182 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit. May 9 00:08:12.105073 systemd-logind[1423]: Removed session 15. May 9 00:08:17.112075 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:51658.service - OpenSSH per-connection server daemon (10.0.0.1:51658). May 9 00:08:17.155753 sshd[3683]: Accepted publickey for core from 10.0.0.1 port 51658 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:17.156888 sshd-session[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:17.160819 systemd-logind[1423]: New session 16 of user core. May 9 00:08:17.170749 systemd[1]: Started session-16.scope - Session 16 of User core. May 9 00:08:17.283915 sshd[3685]: Connection closed by 10.0.0.1 port 51658 May 9 00:08:17.284647 sshd-session[3683]: pam_unix(sshd:session): session closed for user core May 9 00:08:17.287760 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:51658.service: Deactivated successfully. May 9 00:08:17.289369 systemd[1]: session-16.scope: Deactivated successfully. May 9 00:08:17.290042 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit. May 9 00:08:17.290790 systemd-logind[1423]: Removed session 16. May 9 00:08:22.295972 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:51668.service - OpenSSH per-connection server daemon (10.0.0.1:51668). May 9 00:08:22.354765 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 51668 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:22.356125 sshd-session[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:22.360506 systemd-logind[1423]: New session 17 of user core. May 9 00:08:22.369841 systemd[1]: Started session-17.scope - Session 17 of User core. May 9 00:08:22.477616 sshd[3720]: Connection closed by 10.0.0.1 port 51668 May 9 00:08:22.477192 sshd-session[3718]: pam_unix(sshd:session): session closed for user core May 9 00:08:22.480171 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:51668.service: Deactivated successfully. May 9 00:08:22.481829 systemd[1]: session-17.scope: Deactivated successfully. May 9 00:08:22.482440 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit. May 9 00:08:22.483611 systemd-logind[1423]: Removed session 17. May 9 00:08:27.492659 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:36166.service - OpenSSH per-connection server daemon (10.0.0.1:36166). 
May 9 00:08:27.537215 sshd[3754]: Accepted publickey for core from 10.0.0.1 port 36166 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 9 00:08:27.538515 sshd-session[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:08:27.542853 systemd-logind[1423]: New session 18 of user core. May 9 00:08:27.555828 systemd[1]: Started session-18.scope - Session 18 of User core. May 9 00:08:27.665839 sshd[3756]: Connection closed by 10.0.0.1 port 36166 May 9 00:08:27.667183 sshd-session[3754]: pam_unix(sshd:session): session closed for user core May 9 00:08:27.671298 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:36166.service: Deactivated successfully. May 9 00:08:27.672946 systemd[1]: session-18.scope: Deactivated successfully. May 9 00:08:27.673660 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit. May 9 00:08:27.675039 systemd-logind[1423]: Removed session 18.