May 14 23:39:46.900866 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 23:39:46.900904 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025
May 14 23:39:46.900914 kernel: KASLR enabled
May 14 23:39:46.900920 kernel: efi: EFI v2.7 by EDK II
May 14 23:39:46.900926 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 14 23:39:46.900931 kernel: random: crng init done
May 14 23:39:46.900938 kernel: secureboot: Secure boot disabled
May 14 23:39:46.900944 kernel: ACPI: Early table checksum verification disabled
May 14 23:39:46.900950 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 14 23:39:46.900957 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 23:39:46.900963 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:39:46.900969 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:39:46.900975 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:39:46.900981 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:39:46.900988 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:39:46.900995 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:39:46.901002 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:39:46.901008 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:39:46.901014 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:39:46.901020 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 23:39:46.901026 kernel: NUMA: Failed to initialise from firmware
May 14 23:39:46.901033 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 23:39:46.901039 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 14 23:39:46.901045 kernel: Zone ranges:
May 14 23:39:46.901051 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 23:39:46.901058 kernel: DMA32 empty
May 14 23:39:46.901064 kernel: Normal empty
May 14 23:39:46.901070 kernel: Movable zone start for each node
May 14 23:39:46.901076 kernel: Early memory node ranges
May 14 23:39:46.901082 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 14 23:39:46.901089 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 14 23:39:46.901095 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 14 23:39:46.901101 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 14 23:39:46.901107 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 14 23:39:46.901113 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 14 23:39:46.901119 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 14 23:39:46.901125 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 14 23:39:46.901133 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 14 23:39:46.901139 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 23:39:46.901145 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 23:39:46.901154 kernel: psci: probing for conduit method from ACPI.
May 14 23:39:46.901161 kernel: psci: PSCIv1.1 detected in firmware.
May 14 23:39:46.901167 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 23:39:46.901176 kernel: psci: Trusted OS migration not required
May 14 23:39:46.901182 kernel: psci: SMC Calling Convention v1.1
May 14 23:39:46.901199 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 23:39:46.901207 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 14 23:39:46.901213 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 14 23:39:46.901220 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 23:39:46.901227 kernel: Detected PIPT I-cache on CPU0
May 14 23:39:46.901233 kernel: CPU features: detected: GIC system register CPU interface
May 14 23:39:46.901240 kernel: CPU features: detected: Hardware dirty bit management
May 14 23:39:46.901246 kernel: CPU features: detected: Spectre-v4
May 14 23:39:46.901256 kernel: CPU features: detected: Spectre-BHB
May 14 23:39:46.901262 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 23:39:46.901270 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 23:39:46.901276 kernel: CPU features: detected: ARM erratum 1418040
May 14 23:39:46.901283 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 23:39:46.901289 kernel: alternatives: applying boot alternatives
May 14 23:39:46.901296 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:39:46.901303 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:39:46.901310 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:39:46.901317 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:39:46.901323 kernel: Fallback order for Node 0: 0
May 14 23:39:46.901331 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 14 23:39:46.901338 kernel: Policy zone: DMA
May 14 23:39:46.901344 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:39:46.901351 kernel: software IO TLB: area num 4.
May 14 23:39:46.901357 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 14 23:39:46.901364 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved)
May 14 23:39:46.901371 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 23:39:46.901377 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:39:46.901385 kernel: rcu: RCU event tracing is enabled.
May 14 23:39:46.901391 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 23:39:46.901398 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:39:46.901404 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:39:46.901413 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:39:46.901420 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 23:39:46.901426 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 23:39:46.901433 kernel: GICv3: 256 SPIs implemented
May 14 23:39:46.901439 kernel: GICv3: 0 Extended SPIs implemented
May 14 23:39:46.901445 kernel: Root IRQ handler: gic_handle_irq
May 14 23:39:46.901461 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 23:39:46.901481 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 23:39:46.901488 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 23:39:46.901494 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 14 23:39:46.901501 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 14 23:39:46.901511 kernel: GICv3: using LPI property table @0x00000000400f0000
May 14 23:39:46.901517 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 14 23:39:46.901524 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:39:46.901531 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:39:46.901537 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 23:39:46.901544 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 23:39:46.901550 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 23:39:46.901557 kernel: arm-pv: using stolen time PV
May 14 23:39:46.901563 kernel: Console: colour dummy device 80x25
May 14 23:39:46.901570 kernel: ACPI: Core revision 20230628
May 14 23:39:46.901577 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 23:39:46.901585 kernel: pid_max: default: 32768 minimum: 301
May 14 23:39:46.901592 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:39:46.901598 kernel: landlock: Up and running.
May 14 23:39:46.901606 kernel: SELinux: Initializing.
May 14 23:39:46.901612 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:39:46.901619 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:39:46.901626 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 14 23:39:46.901633 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:39:46.901639 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:39:46.901647 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:39:46.901654 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:39:46.901661 kernel: Platform MSI: ITS@0x8080000 domain created
May 14 23:39:46.901667 kernel: PCI/MSI: ITS@0x8080000 domain created
May 14 23:39:46.901674 kernel: Remapping and enabling EFI services.
May 14 23:39:46.901680 kernel: smp: Bringing up secondary CPUs ...
May 14 23:39:46.901687 kernel: Detected PIPT I-cache on CPU1
May 14 23:39:46.901694 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 23:39:46.901700 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 14 23:39:46.901709 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:39:46.901716 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 23:39:46.901728 kernel: Detected PIPT I-cache on CPU2
May 14 23:39:46.901736 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 23:39:46.901743 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 14 23:39:46.901750 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:39:46.901757 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 23:39:46.901764 kernel: Detected PIPT I-cache on CPU3
May 14 23:39:46.901771 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 23:39:46.901778 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 14 23:39:46.901787 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:39:46.901793 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 23:39:46.901800 kernel: smp: Brought up 1 node, 4 CPUs
May 14 23:39:46.901807 kernel: SMP: Total of 4 processors activated.
May 14 23:39:46.901814 kernel: CPU features: detected: 32-bit EL0 Support
May 14 23:39:46.901821 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 23:39:46.901828 kernel: CPU features: detected: Common not Private translations
May 14 23:39:46.901837 kernel: CPU features: detected: CRC32 instructions
May 14 23:39:46.901844 kernel: CPU features: detected: Enhanced Virtualization Traps
May 14 23:39:46.901851 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 23:39:46.901858 kernel: CPU features: detected: LSE atomic instructions
May 14 23:39:46.901865 kernel: CPU features: detected: Privileged Access Never
May 14 23:39:46.901872 kernel: CPU features: detected: RAS Extension Support
May 14 23:39:46.901879 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 23:39:46.901886 kernel: CPU: All CPU(s) started at EL1
May 14 23:39:46.901892 kernel: alternatives: applying system-wide alternatives
May 14 23:39:46.901901 kernel: devtmpfs: initialized
May 14 23:39:46.901908 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:39:46.901915 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 23:39:46.901922 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:39:46.901929 kernel: SMBIOS 3.0.0 present.
May 14 23:39:46.901936 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 23:39:46.901943 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:39:46.901950 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 23:39:46.901958 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 23:39:46.901966 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 23:39:46.901973 kernel: audit: initializing netlink subsys (disabled)
May 14 23:39:46.901981 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 14 23:39:46.901987 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:39:46.901994 kernel: cpuidle: using governor menu
May 14 23:39:46.902001 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 23:39:46.902008 kernel: ASID allocator initialised with 32768 entries
May 14 23:39:46.902015 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:39:46.902022 kernel: Serial: AMBA PL011 UART driver
May 14 23:39:46.902030 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 23:39:46.902038 kernel: Modules: 0 pages in range for non-PLT usage
May 14 23:39:46.902044 kernel: Modules: 509264 pages in range for PLT usage
May 14 23:39:46.902051 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:39:46.902058 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:39:46.902065 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 23:39:46.902072 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 23:39:46.902079 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:39:46.902087 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:39:46.902094 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 23:39:46.902102 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 23:39:46.902110 kernel: ACPI: Added _OSI(Module Device)
May 14 23:39:46.902118 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:39:46.902125 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:39:46.902132 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:39:46.902139 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:39:46.902146 kernel: ACPI: Interpreter enabled
May 14 23:39:46.902152 kernel: ACPI: Using GIC for interrupt routing
May 14 23:39:46.902159 kernel: ACPI: MCFG table detected, 1 entries
May 14 23:39:46.902168 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 23:39:46.902175 kernel: printk: console [ttyAMA0] enabled
May 14 23:39:46.902183 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 23:39:46.902341 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 23:39:46.902416 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 23:39:46.902576 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 23:39:46.902646 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 23:39:46.902714 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 23:39:46.902723 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 23:39:46.902731 kernel: PCI host bridge to bus 0000:00
May 14 23:39:46.902800 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 23:39:46.902859 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 23:39:46.902917 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 23:39:46.902974 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 23:39:46.903059 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 14 23:39:46.903134 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 14 23:39:46.903211 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 14 23:39:46.903279 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 14 23:39:46.903347 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 23:39:46.903429 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 23:39:46.903509 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 14 23:39:46.903582 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 14 23:39:46.903641 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 23:39:46.903699 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 23:39:46.903758 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 23:39:46.903767 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 23:39:46.903775 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 23:39:46.903782 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 23:39:46.903791 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 23:39:46.903798 kernel: iommu: Default domain type: Translated
May 14 23:39:46.903805 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 23:39:46.903812 kernel: efivars: Registered efivars operations
May 14 23:39:46.903819 kernel: vgaarb: loaded
May 14 23:39:46.903827 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 23:39:46.903834 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:39:46.903841 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:39:46.903848 kernel: pnp: PnP ACPI init
May 14 23:39:46.903924 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 23:39:46.903935 kernel: pnp: PnP ACPI: found 1 devices
May 14 23:39:46.903942 kernel: NET: Registered PF_INET protocol family
May 14 23:39:46.903949 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:39:46.903956 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:39:46.903963 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:39:46.903971 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:39:46.903978 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:39:46.903985 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:39:46.903994 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:39:46.904002 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:39:46.904009 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:39:46.904016 kernel: PCI: CLS 0 bytes, default 64
May 14 23:39:46.904023 kernel: kvm [1]: HYP mode not available
May 14 23:39:46.904030 kernel: Initialise system trusted keyrings
May 14 23:39:46.904037 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 23:39:46.904044 kernel: Key type asymmetric registered
May 14 23:39:46.904050 kernel: Asymmetric key parser 'x509' registered
May 14 23:39:46.904059 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 23:39:46.904066 kernel: io scheduler mq-deadline registered
May 14 23:39:46.904074 kernel: io scheduler kyber registered
May 14 23:39:46.904081 kernel: io scheduler bfq registered
May 14 23:39:46.904088 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 23:39:46.904095 kernel: ACPI: button: Power Button [PWRB]
May 14 23:39:46.904103 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 23:39:46.904167 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 23:39:46.904177 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 23:39:46.904186 kernel: thunder_xcv, ver 1.0
May 14 23:39:46.904200 kernel: thunder_bgx, ver 1.0
May 14 23:39:46.904208 kernel: nicpf, ver 1.0
May 14 23:39:46.904215 kernel: nicvf, ver 1.0
May 14 23:39:46.904321 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 23:39:46.904396 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T23:39:46 UTC (1747265986)
May 14 23:39:46.904406 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 23:39:46.904414 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 14 23:39:46.904425 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 14 23:39:46.904432 kernel: watchdog: Hard watchdog permanently disabled
May 14 23:39:46.904439 kernel: NET: Registered PF_INET6 protocol family
May 14 23:39:46.904446 kernel: Segment Routing with IPv6
May 14 23:39:46.904460 kernel: In-situ OAM (IOAM) with IPv6
May 14 23:39:46.904479 kernel: NET: Registered PF_PACKET protocol family
May 14 23:39:46.904486 kernel: Key type dns_resolver registered
May 14 23:39:46.904493 kernel: registered taskstats version 1
May 14 23:39:46.904500 kernel: Loading compiled-in X.509 certificates
May 14 23:39:46.904510 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4'
May 14 23:39:46.904517 kernel: Key type .fscrypt registered
May 14 23:39:46.904524 kernel: Key type fscrypt-provisioning registered
May 14 23:39:46.904531 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 23:39:46.904538 kernel: ima: Allocated hash algorithm: sha1
May 14 23:39:46.904545 kernel: ima: No architecture policies found
May 14 23:39:46.904552 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 23:39:46.904559 kernel: clk: Disabling unused clocks
May 14 23:39:46.904566 kernel: Freeing unused kernel memory: 38336K
May 14 23:39:46.904575 kernel: Run /init as init process
May 14 23:39:46.904582 kernel: with arguments:
May 14 23:39:46.904589 kernel: /init
May 14 23:39:46.904596 kernel: with environment:
May 14 23:39:46.904603 kernel: HOME=/
May 14 23:39:46.904610 kernel: TERM=linux
May 14 23:39:46.904617 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 23:39:46.904625 systemd[1]: Successfully made /usr/ read-only.
May 14 23:39:46.904636 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:39:46.904645 systemd[1]: Detected virtualization kvm.
May 14 23:39:46.904652 systemd[1]: Detected architecture arm64.
May 14 23:39:46.904660 systemd[1]: Running in initrd.
May 14 23:39:46.904667 systemd[1]: No hostname configured, using default hostname.
May 14 23:39:46.904675 systemd[1]: Hostname set to .
May 14 23:39:46.904683 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:39:46.904690 systemd[1]: Queued start job for default target initrd.target.
May 14 23:39:46.904699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:39:46.904707 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:39:46.904715 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 23:39:46.904723 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:39:46.904731 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 23:39:46.904740 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 23:39:46.904750 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 23:39:46.904758 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 23:39:46.904766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:39:46.904774 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:39:46.904781 systemd[1]: Reached target paths.target - Path Units.
May 14 23:39:46.904789 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:39:46.904796 systemd[1]: Reached target swap.target - Swaps.
May 14 23:39:46.904804 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:39:46.904811 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:39:46.904821 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:39:46.904828 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 23:39:46.904836 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 23:39:46.904844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:39:46.904852 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:39:46.904859 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:39:46.904867 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:39:46.904875 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 23:39:46.904883 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:39:46.904892 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 23:39:46.904900 systemd[1]: Starting systemd-fsck-usr.service...
May 14 23:39:46.904907 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:39:46.904915 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:39:46.904923 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:39:46.904931 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 23:39:46.904939 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:39:46.904949 systemd[1]: Finished systemd-fsck-usr.service.
May 14 23:39:46.904957 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:39:46.904985 systemd-journald[239]: Collecting audit messages is disabled.
May 14 23:39:46.905006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:39:46.905014 systemd-journald[239]: Journal started
May 14 23:39:46.905032 systemd-journald[239]: Runtime Journal (/run/log/journal/ca2871eeec234147a17b431c7edae834) is 5.9M, max 47.3M, 41.4M free.
May 14 23:39:46.895639 systemd-modules-load[240]: Inserted module 'overlay'
May 14 23:39:46.908927 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:39:46.912495 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:39:46.912521 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 23:39:46.915117 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:39:46.916808 kernel: Bridge firewalling registered May 14 23:39:46.915293 systemd-modules-load[240]: Inserted module 'br_netfilter' May 14 23:39:46.918166 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:39:46.922567 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:39:46.924183 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:39:46.927377 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:39:46.936593 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:39:46.938216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:39:46.941587 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:39:46.952661 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:39:46.953921 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:39:46.956794 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 23:39:46.970567 dracut-cmdline[282]: dracut-dracut-053 May 14 23:39:46.973104 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 14 23:39:46.985611 systemd-resolved[279]: Positive Trust Anchors: May 14 23:39:46.985630 systemd-resolved[279]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:39:46.985661 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:39:46.990510 systemd-resolved[279]: Defaulting to hostname 'linux'. May 14 23:39:46.991610 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:39:46.995251 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:39:47.043493 kernel: SCSI subsystem initialized May 14 23:39:47.048477 kernel: Loading iSCSI transport class v2.0-870. May 14 23:39:47.056495 kernel: iscsi: registered transport (tcp) May 14 23:39:47.069475 kernel: iscsi: registered transport (qla4xxx) May 14 23:39:47.069511 kernel: QLogic iSCSI HBA Driver May 14 23:39:47.113055 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 23:39:47.120655 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 23:39:47.137475 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 14 23:39:47.137535 kernel: device-mapper: uevent: version 1.0.3 May 14 23:39:47.141485 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 23:39:47.188487 kernel: raid6: neonx8 gen() 15785 MB/s May 14 23:39:47.205477 kernel: raid6: neonx4 gen() 15780 MB/s May 14 23:39:47.222479 kernel: raid6: neonx2 gen() 13182 MB/s May 14 23:39:47.239470 kernel: raid6: neonx1 gen() 10500 MB/s May 14 23:39:47.256469 kernel: raid6: int64x8 gen() 6785 MB/s May 14 23:39:47.273473 kernel: raid6: int64x4 gen() 7344 MB/s May 14 23:39:47.290473 kernel: raid6: int64x2 gen() 6105 MB/s May 14 23:39:47.307719 kernel: raid6: int64x1 gen() 5039 MB/s May 14 23:39:47.307741 kernel: raid6: using algorithm neonx8 gen() 15785 MB/s May 14 23:39:47.325600 kernel: raid6: .... xor() 12011 MB/s, rmw enabled May 14 23:39:47.325621 kernel: raid6: using neon recovery algorithm May 14 23:39:47.347635 kernel: xor: measuring software checksum speed May 14 23:39:47.347697 kernel: 8regs : 21613 MB/sec May 14 23:39:47.348916 kernel: 32regs : 21658 MB/sec May 14 23:39:47.348934 kernel: arm64_neon : 27747 MB/sec May 14 23:39:47.348944 kernel: xor: using function: arm64_neon (27747 MB/sec) May 14 23:39:47.400490 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 23:39:47.412101 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 23:39:47.420685 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:39:47.435590 systemd-udevd[463]: Using default interface naming scheme 'v255'. May 14 23:39:47.439344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:39:47.443390 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 23:39:47.458647 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation May 14 23:39:47.491206 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 14 23:39:47.500665 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:39:47.551681 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:39:47.560654 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 23:39:47.573777 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 23:39:47.578139 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:39:47.579492 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:39:47.581919 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:39:47.592711 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 23:39:47.604123 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:39:47.619273 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 14 23:39:47.619505 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 23:39:47.621883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:39:47.622017 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:39:47.625434 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:39:47.637964 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 23:39:47.637989 kernel: GPT:9289727 != 19775487
May 14 23:39:47.638001 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 23:39:47.638011 kernel: GPT:9289727 != 19775487
May 14 23:39:47.638019 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 23:39:47.638030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 23:39:47.627010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:39:47.627233 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:39:47.635153 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:39:47.646847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:39:47.660481 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (512)
May 14 23:39:47.660534 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (516)
May 14 23:39:47.664891 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:39:47.673714 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 23:39:47.692201 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 23:39:47.700716 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 23:39:47.707696 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 23:39:47.708936 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 23:39:47.722643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 23:39:47.724588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:39:47.729467 disk-uuid[552]: Primary Header is updated.
May 14 23:39:47.729467 disk-uuid[552]: Secondary Entries is updated.
May 14 23:39:47.729467 disk-uuid[552]: Secondary Header is updated.
May 14 23:39:47.734492 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 23:39:47.751767 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:39:48.748548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 23:39:48.749722 disk-uuid[553]: The operation has completed successfully.
May 14 23:39:48.782402 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 23:39:48.782518 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 23:39:48.828624 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 23:39:48.832979 sh[573]: Success
May 14 23:39:48.851301 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 14 23:39:48.878093 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 23:39:48.890944 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 23:39:48.895507 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 23:39:48.911486 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799
May 14 23:39:48.911538 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 23:39:48.911549 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 23:39:48.912969 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 23:39:48.912994 kernel: BTRFS info (device dm-0): using free space tree
May 14 23:39:48.918830 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 23:39:48.920020 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 23:39:48.930677 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 23:39:48.932845 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 23:39:48.952345 kernel: BTRFS info (device vda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:39:48.952399 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:39:48.952409 kernel: BTRFS info (device vda6): using free space tree
May 14 23:39:48.955477 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 23:39:48.959483 kernel: BTRFS info (device vda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:39:48.963046 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 23:39:48.973112 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 23:39:49.053626 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:39:49.063676 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:39:49.086175 ignition[661]: Ignition 2.20.0
May 14 23:39:49.086191 ignition[661]: Stage: fetch-offline
May 14 23:39:49.086226 ignition[661]: no configs at "/usr/lib/ignition/base.d"
May 14 23:39:49.086234 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:39:49.086391 ignition[661]: parsed url from cmdline: ""
May 14 23:39:49.086395 ignition[661]: no config URL provided
May 14 23:39:49.086400 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:39:49.086407 ignition[661]: no config at "/usr/lib/ignition/user.ign"
May 14 23:39:49.091661 systemd-networkd[760]: lo: Link UP
May 14 23:39:49.086431 ignition[661]: op(1): [started] loading QEMU firmware config module
May 14 23:39:49.091664 systemd-networkd[760]: lo: Gained carrier
May 14 23:39:49.086435 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 23:39:49.092442 systemd-networkd[760]: Enumeration completed
May 14 23:39:49.092901 ignition[661]: op(1): [finished] loading QEMU firmware config module
May 14 23:39:49.092620 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:39:49.093476 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:39:49.093479 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:39:49.094106 systemd-networkd[760]: eth0: Link UP
May 14 23:39:49.094109 systemd-networkd[760]: eth0: Gained carrier
May 14 23:39:49.094116 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:39:49.096463 systemd[1]: Reached target network.target - Network.
May 14 23:39:49.113504 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 23:39:49.125533 ignition[661]: parsing config with SHA512: e8afa809dc927f034c73560372a17719775e15cd2e7ff9b8789846fd07e4b8fd4116db5d93e12dce2059feff42ea4d5b5e7f143221861e7287f02b3cae4db8fe
May 14 23:39:49.133118 unknown[661]: fetched base config from "system"
May 14 23:39:49.133133 unknown[661]: fetched user config from "qemu"
May 14 23:39:49.133786 ignition[661]: fetch-offline: fetch-offline passed
May 14 23:39:49.134238 ignition[661]: Ignition finished successfully
May 14 23:39:49.136427 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:39:49.138520 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 23:39:49.153626 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 23:39:49.166239 ignition[768]: Ignition 2.20.0
May 14 23:39:49.166249 ignition[768]: Stage: kargs
May 14 23:39:49.166425 ignition[768]: no configs at "/usr/lib/ignition/base.d"
May 14 23:39:49.166434 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:39:49.167290 ignition[768]: kargs: kargs passed
May 14 23:39:49.167337 ignition[768]: Ignition finished successfully
May 14 23:39:49.171854 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 23:39:49.187698 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 23:39:49.197712 ignition[778]: Ignition 2.20.0
May 14 23:39:49.197723 ignition[778]: Stage: disks
May 14 23:39:49.197893 ignition[778]: no configs at "/usr/lib/ignition/base.d"
May 14 23:39:49.197903 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:39:49.198771 ignition[778]: disks: disks passed
May 14 23:39:49.201523 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 23:39:49.198822 ignition[778]: Ignition finished successfully
May 14 23:39:49.202701 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 23:39:49.204382 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 23:39:49.206103 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:39:49.207924 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:39:49.209824 systemd[1]: Reached target basic.target - Basic System.
May 14 23:39:49.225635 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 23:39:49.235562 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 14 23:39:49.238847 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 23:39:49.250590 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 23:39:49.292477 kernel: EXT4-fs (vda9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none.
May 14 23:39:49.293312 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 23:39:49.294645 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 23:39:49.311562 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:39:49.313339 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 23:39:49.314896 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 23:39:49.314987 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 23:39:49.315018 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:39:49.329073 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
May 14 23:39:49.329097 kernel: BTRFS info (device vda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:39:49.329107 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:39:49.329125 kernel: BTRFS info (device vda6): using free space tree
May 14 23:39:49.319644 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 23:39:49.332016 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 23:39:49.322282 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 23:39:49.333963 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:39:49.369513 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
May 14 23:39:49.373948 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
May 14 23:39:49.378505 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
May 14 23:39:49.381800 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 23:39:49.461861 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 23:39:49.473583 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 23:39:49.476039 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 23:39:49.481468 kernel: BTRFS info (device vda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:39:49.495182 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 23:39:49.499122 ignition[910]: INFO : Ignition 2.20.0
May 14 23:39:49.499122 ignition[910]: INFO : Stage: mount
May 14 23:39:49.501623 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:39:49.501623 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:39:49.501623 ignition[910]: INFO : mount: mount passed
May 14 23:39:49.501623 ignition[910]: INFO : Ignition finished successfully
May 14 23:39:49.503492 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 23:39:49.513614 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 23:39:49.907214 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 23:39:49.919652 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:39:49.925464 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (924)
May 14 23:39:49.927703 kernel: BTRFS info (device vda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:39:49.927720 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:39:49.927730 kernel: BTRFS info (device vda6): using free space tree
May 14 23:39:49.930465 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 23:39:49.931900 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:39:49.954687 ignition[941]: INFO : Ignition 2.20.0
May 14 23:39:49.954687 ignition[941]: INFO : Stage: files
May 14 23:39:49.956474 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:39:49.956474 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:39:49.956474 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
May 14 23:39:49.960396 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 23:39:49.960396 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 23:39:49.960396 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 23:39:49.960396 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 23:39:49.960396 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 23:39:49.960396 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 14 23:39:49.959334 unknown[941]: wrote ssh authorized keys file for user: core
May 14 23:39:49.970552 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 14 23:39:50.017078 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 23:39:50.281285 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 14 23:39:50.281285 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:39:50.285408 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 14 23:39:50.697380 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 23:39:51.062560 systemd-networkd[760]: eth0: Gained IPv6LL
May 14 23:39:51.542616 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:39:51.542616 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 23:39:51.547667 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:39:51.547667 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:39:51.547667 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 23:39:51.547667 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 14 23:39:51.547667 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 23:39:51.547667 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 23:39:51.547667 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 14 23:39:51.547667 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 14 23:39:51.565956 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 23:39:51.569550 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 23:39:51.571363 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 23:39:51.571363 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 14 23:39:51.571363 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 14 23:39:51.571363 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:39:51.571363 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:39:51.571363 ignition[941]: INFO : files: files passed
May 14 23:39:51.571363 ignition[941]: INFO : Ignition finished successfully
May 14 23:39:51.571602 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 23:39:51.582634 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 23:39:51.585349 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 23:39:51.587741 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 23:39:51.587827 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 23:39:51.603311 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 23:39:51.606510 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:39:51.606510 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:39:51.610117 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:39:51.611532 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:39:51.613395 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 23:39:51.623625 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 23:39:51.649425 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 23:39:51.649577 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 23:39:51.652039 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 23:39:51.654256 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 23:39:51.656336 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 23:39:51.657930 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 23:39:51.675415 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:39:51.684662 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 23:39:51.693541 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 23:39:51.694867 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:39:51.697090 systemd[1]: Stopped target timers.target - Timer Units.
May 14 23:39:51.699068 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 23:39:51.699224 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:39:51.701924 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 23:39:51.704239 systemd[1]: Stopped target basic.target - Basic System.
May 14 23:39:51.706043 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 23:39:51.707934 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:39:51.710075 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 23:39:51.712204 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 23:39:51.714192 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:39:51.716881 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 23:39:51.719035 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 23:39:51.721016 systemd[1]: Stopped target swap.target - Swaps.
May 14 23:39:51.722676 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 23:39:51.722835 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:39:51.725511 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 23:39:51.727647 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:39:51.729682 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 23:39:51.729796 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:39:51.731908 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 23:39:51.732038 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 23:39:51.735045 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 23:39:51.735178 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:39:51.737240 systemd[1]: Stopped target paths.target - Path Units.
May 14 23:39:51.738919 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 23:39:51.739057 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:39:51.740954 systemd[1]: Stopped target slices.target - Slice Units.
May 14 23:39:51.742716 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 23:39:51.744331 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 23:39:51.744436 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:39:51.746318 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 23:39:51.746407 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:39:51.748714 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 23:39:51.748852 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:39:51.750782 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 23:39:51.750894 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 23:39:51.764656 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 23:39:51.765661 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 23:39:51.765811 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:39:51.768947 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 23:39:51.770928 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 23:39:51.771090 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:39:51.773090 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 23:39:51.773217 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:39:51.782476 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 23:39:51.785224 ignition[998]: INFO : Ignition 2.20.0
May 14 23:39:51.785224 ignition[998]: INFO : Stage: umount
May 14 23:39:51.785224 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:39:51.785224 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:39:51.785224 ignition[998]: INFO : umount: umount passed
May 14 23:39:51.785224 ignition[998]: INFO : Ignition finished successfully
May 14 23:39:51.782626 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 23:39:51.786877 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 23:39:51.787068 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 23:39:51.790656 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 23:39:51.791514 systemd[1]: Stopped target network.target - Network.
May 14 23:39:51.796313 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 23:39:51.796409 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 23:39:51.798560 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 23:39:51.798625 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 23:39:51.800820 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 23:39:51.800871 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 23:39:51.802762 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 23:39:51.802808 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 23:39:51.805596 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 23:39:51.807526 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 23:39:51.812330 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 23:39:51.812436 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 23:39:51.816991 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 23:39:51.817501 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 23:39:51.817546 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:39:51.820659 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 23:39:51.820872 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 23:39:51.821011 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 23:39:51.824077 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 23:39:51.824641 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 23:39:51.824743 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:39:51.841563 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 23:39:51.843419 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 23:39:51.843510 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:39:51.845704 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:39:51.845760 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:39:51.850552 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 23:39:51.850606 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 23:39:51.851790 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:39:51.855027 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 23:39:51.855396 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 23:39:51.855519 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 23:39:51.858115 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 23:39:51.858216 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 23:39:51.864638 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 23:39:51.864752 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 23:39:51.866707 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 23:39:51.866841 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:39:51.870079 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 23:39:51.870129 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 23:39:51.871949 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 23:39:51.871986 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:39:51.873736 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 23:39:51.873787 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:39:51.876668 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 23:39:51.876721 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 23:39:51.879673 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:39:51.879724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:39:51.900676 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 23:39:51.901806 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 23:39:51.901877 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:39:51.905293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:39:51.905343 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:39:51.909843 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 23:39:51.909923 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 23:39:51.912286 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 23:39:51.926643 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 23:39:51.934793 systemd[1]: Switching root.
May 14 23:39:51.955915 systemd-journald[239]: Journal stopped
May 14 23:39:52.859816 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
May 14 23:39:52.859877 kernel: SELinux: policy capability network_peer_controls=1
May 14 23:39:52.859889 kernel: SELinux: policy capability open_perms=1
May 14 23:39:52.859899 kernel: SELinux: policy capability extended_socket_class=1
May 14 23:39:52.859908 kernel: SELinux: policy capability always_check_network=0
May 14 23:39:52.859917 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 23:39:52.859929 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 23:39:52.859940 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 23:39:52.859949 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 23:39:52.859959 kernel: audit: type=1403 audit(1747265992.109:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 23:39:52.859969 systemd[1]: Successfully loaded SELinux policy in 35.256ms.
May 14 23:39:52.859990 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.756ms.
May 14 23:39:52.860002 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:39:52.860014 systemd[1]: Detected virtualization kvm.
May 14 23:39:52.860024 systemd[1]: Detected architecture arm64.
May 14 23:39:52.860037 systemd[1]: Detected first boot.
May 14 23:39:52.860047 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:39:52.860058 zram_generator::config[1045]: No configuration found.
May 14 23:39:52.860074 kernel: NET: Registered PF_VSOCK protocol family
May 14 23:39:52.860083 systemd[1]: Populated /etc with preset unit settings.
May 14 23:39:52.860095 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 23:39:52.860105 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 23:39:52.860115 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 23:39:52.860126 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 23:39:52.860138 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 23:39:52.860148 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 23:39:52.860158 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 23:39:52.860170 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 23:39:52.860180 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 23:39:52.860199 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 23:39:52.860211 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 23:39:52.860222 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 23:39:52.860235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:39:52.860245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:39:52.860256 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 23:39:52.860266 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 23:39:52.860277 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 23:39:52.860289 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:39:52.860300 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 23:39:52.860310 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:39:52.860321 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 23:39:52.860346 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 23:39:52.860356 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 23:39:52.860370 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 23:39:52.860380 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:39:52.860390 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:39:52.860400 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:39:52.860410 systemd[1]: Reached target swap.target - Swaps.
May 14 23:39:52.860422 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 23:39:52.860434 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 23:39:52.860444 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 23:39:52.860464 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:39:52.860476 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:39:52.860486 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:39:52.860497 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 23:39:52.860507 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 23:39:52.860517 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 23:39:52.860527 systemd[1]: Mounting media.mount - External Media Directory...
May 14 23:39:52.860539 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 23:39:52.860550 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 23:39:52.860560 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 23:39:52.860571 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 23:39:52.860581 systemd[1]: Reached target machines.target - Containers.
May 14 23:39:52.860591 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 23:39:52.860601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:39:52.860612 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:39:52.860622 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 23:39:52.860634 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:39:52.860644 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:39:52.860656 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:39:52.860666 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 23:39:52.860676 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:39:52.860686 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 23:39:52.860696 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 23:39:52.860707 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 23:39:52.860719 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 23:39:52.860728 kernel: fuse: init (API version 7.39)
May 14 23:39:52.860738 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 23:39:52.860749 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:39:52.860759 kernel: loop: module loaded
May 14 23:39:52.860769 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:39:52.860779 kernel: ACPI: bus type drm_connector registered
May 14 23:39:52.860788 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:39:52.860798 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 23:39:52.860810 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 23:39:52.860820 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 23:39:52.860830 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:39:52.860862 systemd-journald[1117]: Collecting audit messages is disabled.
May 14 23:39:52.860882 systemd-journald[1117]: Journal started
May 14 23:39:52.860903 systemd-journald[1117]: Runtime Journal (/run/log/journal/ca2871eeec234147a17b431c7edae834) is 5.9M, max 47.3M, 41.4M free.
May 14 23:39:52.631572 systemd[1]: Queued start job for default target multi-user.target.
May 14 23:39:52.647564 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 23:39:52.648027 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 23:39:52.865137 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 23:39:52.865194 systemd[1]: Stopped verity-setup.service.
May 14 23:39:52.871554 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:39:52.872227 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 23:39:52.873645 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 23:39:52.875036 systemd[1]: Mounted media.mount - External Media Directory.
May 14 23:39:52.876213 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 23:39:52.877537 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 23:39:52.878925 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 23:39:52.880271 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 23:39:52.881855 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:39:52.883424 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 23:39:52.883690 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 23:39:52.885244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:39:52.885439 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:39:52.888892 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:39:52.889089 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:39:52.890739 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:39:52.890903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:39:52.892684 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 23:39:52.892846 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 23:39:52.894336 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:39:52.894546 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:39:52.896204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:39:52.897888 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 23:39:52.899603 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 23:39:52.901439 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 23:39:52.915429 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 23:39:52.927609 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 23:39:52.929908 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 23:39:52.931159 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 23:39:52.931216 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:39:52.933180 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 23:39:52.935600 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 23:39:52.937812 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 23:39:52.938957 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:39:52.941267 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 23:39:52.943740 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 23:39:52.945008 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:39:52.946276 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 23:39:52.950566 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:39:52.953277 systemd-journald[1117]: Time spent on flushing to /var/log/journal/ca2871eeec234147a17b431c7edae834 is 18.888ms for 864 entries.
May 14 23:39:52.953277 systemd-journald[1117]: System Journal (/var/log/journal/ca2871eeec234147a17b431c7edae834) is 8M, max 195.6M, 187.6M free.
May 14 23:39:52.987908 systemd-journald[1117]: Received client request to flush runtime journal.
May 14 23:39:52.987973 kernel: loop0: detected capacity change from 0 to 201592
May 14 23:39:52.953919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:39:52.957944 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 23:39:52.966014 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 23:39:52.971401 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:39:52.973726 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 23:39:52.975031 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 23:39:52.979871 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 23:39:52.982299 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 23:39:52.987416 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:39:52.989832 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 23:39:52.993850 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 23:39:53.000768 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 23:39:53.005693 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 23:39:53.008580 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 23:39:53.022562 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 14 23:39:53.024620 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 23:39:53.026841 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 23:39:53.028520 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 23:39:53.031777 kernel: loop1: detected capacity change from 0 to 123192
May 14 23:39:53.042694 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:39:53.070495 kernel: loop2: detected capacity change from 0 to 113512
May 14 23:39:53.075318 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
May 14 23:39:53.075333 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
May 14 23:39:53.080682 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:39:53.114484 kernel: loop3: detected capacity change from 0 to 201592
May 14 23:39:53.127499 kernel: loop4: detected capacity change from 0 to 123192
May 14 23:39:53.139732 kernel: loop5: detected capacity change from 0 to 113512
May 14 23:39:53.144969 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 23:39:53.146036 (sd-merge)[1187]: Merged extensions into '/usr'.
May 14 23:39:53.151659 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 23:39:53.151678 systemd[1]: Reloading...
May 14 23:39:53.227485 zram_generator::config[1214]: No configuration found.
May 14 23:39:53.309999 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 23:39:53.333036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:39:53.384950 systemd[1]: Reloading finished in 232 ms.
May 14 23:39:53.404644 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 23:39:53.406230 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 23:39:53.426648 systemd[1]: Starting ensure-sysext.service...
May 14 23:39:53.433163 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:39:53.454419 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 23:39:53.454653 systemd[1]: Reload requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)...
May 14 23:39:53.454666 systemd[1]: Reloading...
May 14 23:39:53.455058 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 23:39:53.455842 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 23:39:53.456160 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
May 14 23:39:53.456327 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
May 14 23:39:53.459612 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:39:53.459799 systemd-tmpfiles[1251]: Skipping /boot
May 14 23:39:53.474636 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:39:53.474762 systemd-tmpfiles[1251]: Skipping /boot
May 14 23:39:53.510496 zram_generator::config[1280]: No configuration found.
May 14 23:39:53.592208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:39:53.643200 systemd[1]: Reloading finished in 188 ms.
May 14 23:39:53.656147 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 23:39:53.668180 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:39:53.677230 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:39:53.680283 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 23:39:53.682729 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 23:39:53.685926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:39:53.688972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:39:53.691931 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 23:39:53.696961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:39:53.701543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:39:53.706537 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:39:53.710508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:39:53.713618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:39:53.713765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:39:53.714888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:39:53.715082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:39:53.725068 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 23:39:53.726872 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:39:53.728500 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:39:53.730262 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:39:53.730428 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:39:53.738414 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:39:53.748163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:39:53.749813 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
May 14 23:39:53.750954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:39:53.758783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:39:53.762296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:39:53.762524 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:39:53.764279 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 23:39:53.766823 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 23:39:53.781936 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 23:39:53.784038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:39:53.784345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:39:53.786209 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:39:53.786378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:39:53.800582 augenrules[1356]: No rules
May 14 23:39:53.802774 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:39:53.802976 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:39:53.804664 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:39:53.808778 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:39:53.808988 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:39:53.812273 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 23:39:53.825346 systemd[1]: Finished ensure-sysext.service.
May 14 23:39:53.835739 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:39:53.837510 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:39:53.838873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:39:53.843978 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:39:53.859670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:39:53.863034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:39:53.865359 augenrules[1388]: /sbin/augenrules: No change
May 14 23:39:53.866107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:39:53.866192 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:39:53.870328 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:39:53.873363 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 23:39:53.878810 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 23:39:53.880676 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 23:39:53.881579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:39:53.881786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:39:53.883038 augenrules[1412]: No rules
May 14 23:39:53.884018 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:39:53.884239 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:39:53.885980 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:39:53.887587 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:39:53.888793 systemd-resolved[1319]: Positive Trust Anchors:
May 14 23:39:53.888814 systemd-resolved[1319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:39:53.888846 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:39:53.889093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:39:53.889334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:39:53.890976 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:39:53.891234 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:39:53.898931 systemd-resolved[1319]: Defaulting to hostname 'linux'.
May 14 23:39:53.901860 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:39:53.909098 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:39:53.910526 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:39:53.910588 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:39:53.918852 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 23:39:53.941222 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 23:39:53.949483 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1378)
May 14 23:39:53.992601 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 23:39:53.994622 systemd[1]: Reached target time-set.target - System Time Set.
May 14 23:39:54.011486 systemd-networkd[1406]: lo: Link UP
May 14 23:39:54.011811 systemd-networkd[1406]: lo: Gained carrier
May 14 23:39:54.016163 systemd-networkd[1406]: Enumeration completed
May 14 23:39:54.016623 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:39:54.018062 systemd[1]: Reached target network.target - Network.
May 14 23:39:54.023403 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:39:54.023528 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:39:54.027671 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 23:39:54.030677 systemd-networkd[1406]: eth0: Link UP
May 14 23:39:54.030908 systemd-networkd[1406]: eth0: Gained carrier
May 14 23:39:54.030977 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:39:54.031688 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 23:39:54.039792 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 23:39:54.043134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 23:39:54.047560 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 23:39:54.054572 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. May 14 23:39:54.058533 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 23:39:54.058585 systemd-timesyncd[1410]: Initial clock synchronization to Wed 2025-05-14 23:39:53.938635 UTC. May 14 23:39:54.062567 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 23:39:54.072778 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 23:39:54.076397 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 23:39:54.092644 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 23:39:54.094947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:39:54.109511 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 23:39:54.153898 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 23:39:54.158625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:39:54.172703 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 23:39:54.177427 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 23:39:54.181087 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:39:54.182615 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:39:54.183855 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
May 14 23:39:54.185376 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 23:39:54.187004 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 23:39:54.188227 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 23:39:54.189565 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 23:39:54.190835 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 23:39:54.190889 systemd[1]: Reached target paths.target - Path Units. May 14 23:39:54.191840 systemd[1]: Reached target timers.target - Timer Units. May 14 23:39:54.193845 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 23:39:54.196525 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 23:39:54.199860 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 23:39:54.201469 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 23:39:54.202748 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 23:39:54.207606 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 23:39:54.209223 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 23:39:54.212524 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 23:39:54.214077 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 23:39:54.216292 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:39:54.217482 systemd[1]: Reached target basic.target - Basic System. 
May 14 23:39:54.218815 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 23:39:54.218877 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 23:39:54.226592 systemd[1]: Starting containerd.service - containerd container runtime... May 14 23:39:54.228936 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 23:39:54.231172 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 23:39:54.233576 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 23:39:54.234753 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 23:39:54.236116 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 23:39:54.241668 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 23:39:54.245826 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 23:39:54.252732 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 14 23:39:54.255668 jq[1455]: false May 14 23:39:54.259953 extend-filesystems[1456]: Found loop3 May 14 23:39:54.259953 extend-filesystems[1456]: Found loop4 May 14 23:39:54.259953 extend-filesystems[1456]: Found loop5 May 14 23:39:54.259953 extend-filesystems[1456]: Found vda May 14 23:39:54.259953 extend-filesystems[1456]: Found vda1 May 14 23:39:54.259953 extend-filesystems[1456]: Found vda2 May 14 23:39:54.259953 extend-filesystems[1456]: Found vda3 May 14 23:39:54.259953 extend-filesystems[1456]: Found usr May 14 23:39:54.259953 extend-filesystems[1456]: Found vda4 May 14 23:39:54.259953 extend-filesystems[1456]: Found vda6 May 14 23:39:54.259953 extend-filesystems[1456]: Found vda7 May 14 23:39:54.259953 extend-filesystems[1456]: Found vda9 May 14 23:39:54.259953 extend-filesystems[1456]: Checking size of /dev/vda9 May 14 23:39:54.259556 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 23:39:54.277404 dbus-daemon[1454]: [system] SELinux support is enabled May 14 23:39:54.262178 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 23:39:54.262796 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 23:39:54.268732 systemd[1]: Starting update-engine.service - Update Engine... May 14 23:39:54.274361 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 23:39:54.290642 jq[1471]: true May 14 23:39:54.277616 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 23:39:54.283656 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 23:39:54.285505 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 23:39:54.289816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 14 23:39:54.290025 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 23:39:54.302725 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 23:39:54.306199 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 23:39:54.306240 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 23:39:54.310608 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 23:39:54.310638 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 23:39:54.317018 extend-filesystems[1456]: Resized partition /dev/vda9 May 14 23:39:54.320491 jq[1479]: true May 14 23:39:54.324018 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1375) May 14 23:39:54.336100 systemd[1]: motdgen.service: Deactivated successfully. May 14 23:39:54.336375 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 23:39:54.341241 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024) May 14 23:39:54.354555 tar[1478]: linux-arm64/LICENSE May 14 23:39:54.354555 tar[1478]: linux-arm64/helm May 14 23:39:54.360604 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 23:39:54.373795 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (Power Button) May 14 23:39:54.375043 update_engine[1464]: I20250514 23:39:54.372126 1464 main.cc:92] Flatcar Update Engine starting May 14 23:39:54.376496 systemd-logind[1462]: New seat seat0. 
May 14 23:39:54.379154 systemd[1]: Started systemd-logind.service - User Login Management. May 14 23:39:54.384073 update_engine[1464]: I20250514 23:39:54.383772 1464 update_check_scheduler.cc:74] Next update check in 8m46s May 14 23:39:54.384292 systemd[1]: Started update-engine.service - Update Engine. May 14 23:39:54.400832 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 23:39:54.419486 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 23:39:54.456112 extend-filesystems[1490]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 23:39:54.456112 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 23:39:54.456112 extend-filesystems[1490]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 23:39:54.464558 extend-filesystems[1456]: Resized filesystem in /dev/vda9 May 14 23:39:54.467179 bash[1506]: Updated "/home/core/.ssh/authorized_keys" May 14 23:39:54.457141 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 23:39:54.457402 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 23:39:54.465530 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 23:39:54.469727 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 23:39:54.509121 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 23:39:54.614202 containerd[1480]: time="2025-05-14T23:39:54.614036400Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 14 23:39:54.638198 containerd[1480]: time="2025-05-14T23:39:54.638133320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 14 23:39:54.639756 containerd[1480]: time="2025-05-14T23:39:54.639714280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 23:39:54.640037 containerd[1480]: time="2025-05-14T23:39:54.639934960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 23:39:54.640037 containerd[1480]: time="2025-05-14T23:39:54.639964200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 23:39:54.640874 containerd[1480]: time="2025-05-14T23:39:54.640316960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 14 23:39:54.640874 containerd[1480]: time="2025-05-14T23:39:54.640346920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 14 23:39:54.640874 containerd[1480]: time="2025-05-14T23:39:54.640421840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:39:54.640874 containerd[1480]: time="2025-05-14T23:39:54.640434800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 23:39:54.640874 containerd[1480]: time="2025-05-14T23:39:54.640712720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:39:54.640874 containerd[1480]: time="2025-05-14T23:39:54.640729120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 23:39:54.640874 containerd[1480]: time="2025-05-14T23:39:54.640743240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:39:54.640874 containerd[1480]: time="2025-05-14T23:39:54.640751840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 23:39:54.640874 containerd[1480]: time="2025-05-14T23:39:54.640827760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 23:39:54.641096 containerd[1480]: time="2025-05-14T23:39:54.641021040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 23:39:54.641176 containerd[1480]: time="2025-05-14T23:39:54.641145600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:39:54.641209 containerd[1480]: time="2025-05-14T23:39:54.641174520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 23:39:54.642047 containerd[1480]: time="2025-05-14T23:39:54.642001520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 14 23:39:54.642486 containerd[1480]: time="2025-05-14T23:39:54.642100600Z" level=info msg="metadata content store policy set" policy=shared May 14 23:39:54.653498 containerd[1480]: time="2025-05-14T23:39:54.653438280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 23:39:54.653626 containerd[1480]: time="2025-05-14T23:39:54.653518240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 23:39:54.653626 containerd[1480]: time="2025-05-14T23:39:54.653556800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 14 23:39:54.653626 containerd[1480]: time="2025-05-14T23:39:54.653575400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 14 23:39:54.653626 containerd[1480]: time="2025-05-14T23:39:54.653592920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.653771440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654051800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654149360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654164920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654178880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654207440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654220760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654233360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654259520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654276120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654289840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654303480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654315600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 23:39:54.655471 containerd[1480]: time="2025-05-14T23:39:54.654336520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654350120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654362520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654374360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654387400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654399960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654412000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654425280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654437520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654472000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654486600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654498720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654511280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654549360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654572440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655733 containerd[1480]: time="2025-05-14T23:39:54.654585240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655974 containerd[1480]: time="2025-05-14T23:39:54.654596200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 23:39:54.655974 containerd[1480]: time="2025-05-14T23:39:54.655783800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 23:39:54.655974 containerd[1480]: time="2025-05-14T23:39:54.655882120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 14 23:39:54.655974 containerd[1480]: time="2025-05-14T23:39:54.655897000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 23:39:54.655974 containerd[1480]: time="2025-05-14T23:39:54.655912320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 14 23:39:54.655974 containerd[1480]: time="2025-05-14T23:39:54.655922120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 23:39:54.655974 containerd[1480]: time="2025-05-14T23:39:54.655936320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 14 23:39:54.655974 containerd[1480]: time="2025-05-14T23:39:54.655946680Z" level=info msg="NRI interface is disabled by configuration." May 14 23:39:54.655974 containerd[1480]: time="2025-05-14T23:39:54.655957440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 23:39:54.656299 containerd[1480]: time="2025-05-14T23:39:54.656246840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 23:39:54.656427 containerd[1480]: time="2025-05-14T23:39:54.656303200Z" level=info msg="Connect containerd service" May 14 23:39:54.656427 containerd[1480]: time="2025-05-14T23:39:54.656344600Z" level=info msg="using legacy CRI server" May 14 23:39:54.656427 containerd[1480]: time="2025-05-14T23:39:54.656351800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:39:54.656898 containerd[1480]: time="2025-05-14T23:39:54.656874960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 23:39:54.657825 containerd[1480]: time="2025-05-14T23:39:54.657791800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:39:54.658141 containerd[1480]: time="2025-05-14T23:39:54.658110800Z" level=info msg="Start subscribing containerd event" May 14 
23:39:54.658171 containerd[1480]: time="2025-05-14T23:39:54.658157280Z" level=info msg="Start recovering state" May 14 23:39:54.658257 containerd[1480]: time="2025-05-14T23:39:54.658240720Z" level=info msg="Start event monitor" May 14 23:39:54.658305 containerd[1480]: time="2025-05-14T23:39:54.658258080Z" level=info msg="Start snapshots syncer" May 14 23:39:54.658305 containerd[1480]: time="2025-05-14T23:39:54.658267880Z" level=info msg="Start cni network conf syncer for default" May 14 23:39:54.658305 containerd[1480]: time="2025-05-14T23:39:54.658276320Z" level=info msg="Start streaming server" May 14 23:39:54.659774 containerd[1480]: time="2025-05-14T23:39:54.659753480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:39:54.659813 containerd[1480]: time="2025-05-14T23:39:54.659799840Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 23:39:54.659864 containerd[1480]: time="2025-05-14T23:39:54.659848720Z" level=info msg="containerd successfully booted in 0.050451s" May 14 23:39:54.659974 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:39:54.745025 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 23:39:54.770019 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 23:39:54.780094 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 23:39:54.786209 systemd[1]: issuegen.service: Deactivated successfully. May 14 23:39:54.787526 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 23:39:54.791091 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 23:39:54.806523 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 23:39:54.817831 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 23:39:54.821052 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
May 14 23:39:54.822516 systemd[1]: Reached target getty.target - Login Prompts. May 14 23:39:54.839751 tar[1478]: linux-arm64/README.md May 14 23:39:54.853447 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 23:39:55.542591 systemd-networkd[1406]: eth0: Gained IPv6LL May 14 23:39:55.545207 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 23:39:55.547244 systemd[1]: Reached target network-online.target - Network is Online. May 14 23:39:55.560785 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 23:39:55.563506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:39:55.566016 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 23:39:55.591818 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 23:39:55.594253 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 23:39:55.594469 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 23:39:55.596738 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:39:56.186483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:39:56.188536 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 23:39:56.193364 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:39:56.196620 systemd[1]: Startup finished in 560ms (kernel) + 5.410s (initrd) + 4.122s (userspace) = 10.093s. 
May 14 23:39:56.666584 kubelet[1567]: E0514 23:39:56.666480 1567 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:39:56.668731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:39:56.668910 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:39:56.670599 systemd[1]: kubelet.service: Consumed 801ms CPU time, 250.3M memory peak. May 14 23:39:59.413970 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 23:39:59.415195 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:48836.service - OpenSSH per-connection server daemon (10.0.0.1:48836). May 14 23:39:59.492180 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 48836 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:39:59.493945 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:39:59.505094 systemd-logind[1462]: New session 1 of user core. May 14 23:39:59.506160 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 23:39:59.521801 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 23:39:59.532694 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 23:39:59.535679 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 23:39:59.543175 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 23:39:59.545732 systemd-logind[1462]: New session c1 of user core. May 14 23:39:59.676360 systemd[1585]: Queued start job for default target default.target. 
May 14 23:39:59.688580 systemd[1585]: Created slice app.slice - User Application Slice. May 14 23:39:59.688614 systemd[1585]: Reached target paths.target - Paths. May 14 23:39:59.688658 systemd[1585]: Reached target timers.target - Timers. May 14 23:39:59.690181 systemd[1585]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 23:39:59.701433 systemd[1585]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 23:39:59.701598 systemd[1585]: Reached target sockets.target - Sockets. May 14 23:39:59.701669 systemd[1585]: Reached target basic.target - Basic System. May 14 23:39:59.701706 systemd[1585]: Reached target default.target - Main User Target. May 14 23:39:59.701737 systemd[1585]: Startup finished in 149ms. May 14 23:39:59.701888 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 23:39:59.703716 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 23:39:59.778820 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:48848.service - OpenSSH per-connection server daemon (10.0.0.1:48848). May 14 23:39:59.816047 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 48848 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:39:59.817551 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:39:59.821563 systemd-logind[1462]: New session 2 of user core. May 14 23:39:59.833639 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 23:39:59.884941 sshd[1598]: Connection closed by 10.0.0.1 port 48848 May 14 23:39:59.885473 sshd-session[1596]: pam_unix(sshd:session): session closed for user core May 14 23:39:59.895006 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:48848.service: Deactivated successfully. May 14 23:39:59.896624 systemd[1]: session-2.scope: Deactivated successfully. May 14 23:39:59.898011 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. 
May 14 23:39:59.910782 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:48858.service - OpenSSH per-connection server daemon (10.0.0.1:48858).
May 14 23:39:59.912127 systemd-logind[1462]: Removed session 2.
May 14 23:39:59.947722 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 48858 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:39:59.948939 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:39:59.953514 systemd-logind[1462]: New session 3 of user core.
May 14 23:39:59.960650 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 23:40:00.008622 sshd[1606]: Connection closed by 10.0.0.1 port 48858
May 14 23:40:00.009183 sshd-session[1603]: pam_unix(sshd:session): session closed for user core
May 14 23:40:00.026745 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:48858.service: Deactivated successfully.
May 14 23:40:00.029095 systemd[1]: session-3.scope: Deactivated successfully.
May 14 23:40:00.031110 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit.
May 14 23:40:00.041865 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:48870.service - OpenSSH per-connection server daemon (10.0.0.1:48870).
May 14 23:40:00.043085 systemd-logind[1462]: Removed session 3.
May 14 23:40:00.084534 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 48870 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:40:00.085969 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:00.090526 systemd-logind[1462]: New session 4 of user core.
May 14 23:40:00.100648 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 23:40:00.152506 sshd[1614]: Connection closed by 10.0.0.1 port 48870
May 14 23:40:00.152828 sshd-session[1611]: pam_unix(sshd:session): session closed for user core
May 14 23:40:00.172099 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:48882.service - OpenSSH per-connection server daemon (10.0.0.1:48882).
May 14 23:40:00.172610 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:48870.service: Deactivated successfully.
May 14 23:40:00.174203 systemd[1]: session-4.scope: Deactivated successfully.
May 14 23:40:00.176799 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit.
May 14 23:40:00.178139 systemd-logind[1462]: Removed session 4.
May 14 23:40:00.237400 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 48882 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:40:00.238731 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:00.244524 systemd-logind[1462]: New session 5 of user core.
May 14 23:40:00.250713 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 23:40:00.321532 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 23:40:00.321847 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:40:00.687768 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 23:40:00.687902 (dockerd)[1643]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 23:40:00.931582 dockerd[1643]: time="2025-05-14T23:40:00.931510498Z" level=info msg="Starting up"
May 14 23:40:01.068806 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3004800808-merged.mount: Deactivated successfully.
May 14 23:40:01.087135 dockerd[1643]: time="2025-05-14T23:40:01.087081129Z" level=info msg="Loading containers: start."
May 14 23:40:01.241504 kernel: Initializing XFRM netlink socket
May 14 23:40:01.305138 systemd-networkd[1406]: docker0: Link UP
May 14 23:40:01.342413 dockerd[1643]: time="2025-05-14T23:40:01.341710743Z" level=info msg="Loading containers: done."
May 14 23:40:01.357991 dockerd[1643]: time="2025-05-14T23:40:01.357945471Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 23:40:01.358216 dockerd[1643]: time="2025-05-14T23:40:01.358198260Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 14 23:40:01.358436 dockerd[1643]: time="2025-05-14T23:40:01.358416783Z" level=info msg="Daemon has completed initialization"
May 14 23:40:01.392208 dockerd[1643]: time="2025-05-14T23:40:01.392143975Z" level=info msg="API listen on /run/docker.sock"
May 14 23:40:01.392333 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 23:40:02.245923 containerd[1480]: time="2025-05-14T23:40:02.245755225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 14 23:40:02.914970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891668693.mount: Deactivated successfully.
May 14 23:40:04.377787 containerd[1480]: time="2025-05-14T23:40:04.377730936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:04.378753 containerd[1480]: time="2025-05-14T23:40:04.378689823Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120"
May 14 23:40:04.380008 containerd[1480]: time="2025-05-14T23:40:04.379971791Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:04.383117 containerd[1480]: time="2025-05-14T23:40:04.383074619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:04.384643 containerd[1480]: time="2025-05-14T23:40:04.384436013Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.138635877s"
May 14 23:40:04.384643 containerd[1480]: time="2025-05-14T23:40:04.384493252Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
May 14 23:40:04.385187 containerd[1480]: time="2025-05-14T23:40:04.385150846Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 14 23:40:05.766057 containerd[1480]: time="2025-05-14T23:40:05.765996539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:05.767535 containerd[1480]: time="2025-05-14T23:40:05.767484871Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573"
May 14 23:40:05.769124 containerd[1480]: time="2025-05-14T23:40:05.769094685Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:05.772788 containerd[1480]: time="2025-05-14T23:40:05.772742504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:05.773890 containerd[1480]: time="2025-05-14T23:40:05.773858185Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.388672443s"
May 14 23:40:05.773931 containerd[1480]: time="2025-05-14T23:40:05.773893048Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
May 14 23:40:05.774412 containerd[1480]: time="2025-05-14T23:40:05.774377777Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 14 23:40:06.908745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 23:40:06.927731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:40:07.080756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:40:07.084183 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:40:07.133782 kubelet[1910]: E0514 23:40:07.133712 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:40:07.136656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:40:07.136798 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:40:07.137076 systemd[1]: kubelet.service: Consumed 151ms CPU time, 101.2M memory peak.
May 14 23:40:07.373285 containerd[1480]: time="2025-05-14T23:40:07.373161342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:07.373830 containerd[1480]: time="2025-05-14T23:40:07.373785939Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175"
May 14 23:40:07.375465 containerd[1480]: time="2025-05-14T23:40:07.375428411Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:07.379335 containerd[1480]: time="2025-05-14T23:40:07.379294302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:07.380390 containerd[1480]: time="2025-05-14T23:40:07.380322062Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.605784989s"
May 14 23:40:07.380390 containerd[1480]: time="2025-05-14T23:40:07.380351600Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
May 14 23:40:07.381062 containerd[1480]: time="2025-05-14T23:40:07.380775093Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 14 23:40:08.709447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309561737.mount: Deactivated successfully.
May 14 23:40:08.941742 containerd[1480]: time="2025-05-14T23:40:08.941684038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:08.942912 containerd[1480]: time="2025-05-14T23:40:08.942868473Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353"
May 14 23:40:08.943990 containerd[1480]: time="2025-05-14T23:40:08.943920738Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:08.946366 containerd[1480]: time="2025-05-14T23:40:08.945986193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:08.947161 containerd[1480]: time="2025-05-14T23:40:08.947107672Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.566300688s"
May 14 23:40:08.947161 containerd[1480]: time="2025-05-14T23:40:08.947148859Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 14 23:40:08.947838 containerd[1480]: time="2025-05-14T23:40:08.947673316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 14 23:40:09.527149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173506673.mount: Deactivated successfully.
May 14 23:40:10.548157 containerd[1480]: time="2025-05-14T23:40:10.548105863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:10.549992 containerd[1480]: time="2025-05-14T23:40:10.549925035Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 14 23:40:10.552534 containerd[1480]: time="2025-05-14T23:40:10.551265053Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:10.554451 containerd[1480]: time="2025-05-14T23:40:10.553831653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:10.555224 containerd[1480]: time="2025-05-14T23:40:10.555178811Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.607458683s"
May 14 23:40:10.555224 containerd[1480]: time="2025-05-14T23:40:10.555214988Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 14 23:40:10.555760 containerd[1480]: time="2025-05-14T23:40:10.555736182Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 14 23:40:11.035013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount429940129.mount: Deactivated successfully.
May 14 23:40:11.040569 containerd[1480]: time="2025-05-14T23:40:11.040512667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:11.041282 containerd[1480]: time="2025-05-14T23:40:11.041182915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 14 23:40:11.041981 containerd[1480]: time="2025-05-14T23:40:11.041939852Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:11.044038 containerd[1480]: time="2025-05-14T23:40:11.043984067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:11.044810 containerd[1480]: time="2025-05-14T23:40:11.044768570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 489.000797ms"
May 14 23:40:11.044810 containerd[1480]: time="2025-05-14T23:40:11.044804075Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 14 23:40:11.045339 containerd[1480]: time="2025-05-14T23:40:11.045314192Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 14 23:40:11.653136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847445902.mount: Deactivated successfully.
May 14 23:40:14.266511 containerd[1480]: time="2025-05-14T23:40:14.265607598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:14.268497 containerd[1480]: time="2025-05-14T23:40:14.268130842Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
May 14 23:40:14.274498 containerd[1480]: time="2025-05-14T23:40:14.274446418Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:14.284487 containerd[1480]: time="2025-05-14T23:40:14.284417984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:40:14.285834 containerd[1480]: time="2025-05-14T23:40:14.285789524Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.240295568s"
May 14 23:40:14.285834 containerd[1480]: time="2025-05-14T23:40:14.285829477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 14 23:40:17.158450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 14 23:40:17.167806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:40:17.352049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:40:17.356127 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:40:17.391270 kubelet[2068]: E0514 23:40:17.391147 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:40:17.393312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:40:17.393466 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:40:17.393933 systemd[1]: kubelet.service: Consumed 126ms CPU time, 102.6M memory peak.
May 14 23:40:19.959764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:40:19.959907 systemd[1]: kubelet.service: Consumed 126ms CPU time, 102.6M memory peak.
May 14 23:40:19.974752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:40:19.997849 systemd[1]: Reload requested from client PID 2084 ('systemctl') (unit session-5.scope)...
May 14 23:40:19.997869 systemd[1]: Reloading...
May 14 23:40:20.073512 zram_generator::config[2128]: No configuration found.
May 14 23:40:20.247718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:40:20.323241 systemd[1]: Reloading finished in 324 ms.
May 14 23:40:20.368745 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:40:20.370775 systemd[1]: kubelet.service: Deactivated successfully.
May 14 23:40:20.371014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:40:20.371073 systemd[1]: kubelet.service: Consumed 89ms CPU time, 90.2M memory peak.
May 14 23:40:20.373796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:40:20.477815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:40:20.482748 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 23:40:20.520915 kubelet[2175]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:40:20.520915 kubelet[2175]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 14 23:40:20.520915 kubelet[2175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:40:20.521233 kubelet[2175]: I0514 23:40:20.521107 2175 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 23:40:21.442185 kubelet[2175]: I0514 23:40:21.442130 2175 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 14 23:40:21.442185 kubelet[2175]: I0514 23:40:21.442165 2175 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 23:40:21.442478 kubelet[2175]: I0514 23:40:21.442448 2175 server.go:954] "Client rotation is on, will bootstrap in background"
May 14 23:40:21.495245 kubelet[2175]: E0514 23:40:21.495177 2175 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError"
May 14 23:40:21.496777 kubelet[2175]: I0514 23:40:21.496746 2175 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 23:40:21.509480 kubelet[2175]: E0514 23:40:21.509212 2175 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 14 23:40:21.509480 kubelet[2175]: I0514 23:40:21.509248 2175 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 14 23:40:21.512612 kubelet[2175]: I0514 23:40:21.512591 2175 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 23:40:21.512847 kubelet[2175]: I0514 23:40:21.512820 2175 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 23:40:21.513013 kubelet[2175]: I0514 23:40:21.512848 2175 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 23:40:21.513094 kubelet[2175]: I0514 23:40:21.513085 2175 topology_manager.go:138] "Creating topology manager with none policy"
May 14 23:40:21.513094 kubelet[2175]: I0514 23:40:21.513093 2175 container_manager_linux.go:304] "Creating device plugin manager"
May 14 23:40:21.513302 kubelet[2175]: I0514 23:40:21.513287 2175 state_mem.go:36] "Initialized new in-memory state store"
May 14 23:40:21.515732 kubelet[2175]: I0514 23:40:21.515704 2175 kubelet.go:446] "Attempting to sync node with API server"
May 14 23:40:21.515781 kubelet[2175]: I0514 23:40:21.515736 2175 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 23:40:21.515781 kubelet[2175]: I0514 23:40:21.515758 2175 kubelet.go:352] "Adding apiserver pod source"
May 14 23:40:21.515781 kubelet[2175]: I0514 23:40:21.515769 2175 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 23:40:21.521731 kubelet[2175]: W0514 23:40:21.521671 2175 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused
May 14 23:40:21.522054 kubelet[2175]: E0514 23:40:21.521735 2175 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError"
May 14 23:40:21.522686 kubelet[2175]: I0514 23:40:21.522533 2175 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 14 23:40:21.523700 kubelet[2175]: W0514 23:40:21.523642 2175 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused
May 14 23:40:21.523760 kubelet[2175]: E0514 23:40:21.523704 2175 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError"
May 14 23:40:21.525505 kubelet[2175]: I0514 23:40:21.524140 2175 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 23:40:21.525505 kubelet[2175]: W0514 23:40:21.524273 2175 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 14 23:40:21.525505 kubelet[2175]: I0514 23:40:21.525168 2175 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 14 23:40:21.525505 kubelet[2175]: I0514 23:40:21.525208 2175 server.go:1287] "Started kubelet"
May 14 23:40:21.525859 kubelet[2175]: I0514 23:40:21.525824 2175 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 14 23:40:21.527880 kubelet[2175]: I0514 23:40:21.527814 2175 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 23:40:21.528614 kubelet[2175]: I0514 23:40:21.528582 2175 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 23:40:21.528875 kubelet[2175]: I0514 23:40:21.528852 2175 server.go:490] "Adding debug handlers to kubelet server"
May 14 23:40:21.530096 kubelet[2175]: I0514 23:40:21.530060 2175 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 23:40:21.531248 kubelet[2175]: E0514 23:40:21.530876 2175 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f89306d1e535f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 23:40:21.525181279 +0000 UTC m=+1.039074500,LastTimestamp:2025-05-14 23:40:21.525181279 +0000 UTC m=+1.039074500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 14 23:40:21.531542 kubelet[2175]: I0514 23:40:21.531516 2175 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 23:40:21.532613 kubelet[2175]: E0514 23:40:21.532578 2175 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:40:21.532685 kubelet[2175]: I0514 23:40:21.532631 2175 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 14 23:40:21.532886 kubelet[2175]: I0514 23:40:21.532863 2175 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 23:40:21.532946 kubelet[2175]: I0514 23:40:21.532938 2175 reconciler.go:26] "Reconciler: start to sync state"
May 14 23:40:21.533328 kubelet[2175]: W0514 23:40:21.533285 2175 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused
May 14 23:40:21.533386 kubelet[2175]: E0514 23:40:21.533335 2175 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError"
May 14 23:40:21.533468 kubelet[2175]: E0514 23:40:21.533412 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms"
May 14 23:40:21.533741 kubelet[2175]: I0514 23:40:21.533713 2175 factory.go:221] Registration of the systemd container factory successfully
May 14 23:40:21.533837 kubelet[2175]: I0514 23:40:21.533815 2175 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 23:40:21.534522 kubelet[2175]: E0514 23:40:21.534495 2175 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 23:40:21.534683 kubelet[2175]: I0514 23:40:21.534668 2175 factory.go:221] Registration of the containerd container factory successfully
May 14 23:40:21.546407 kubelet[2175]: I0514 23:40:21.546380 2175 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 14 23:40:21.546407 kubelet[2175]: I0514 23:40:21.546400 2175 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 14 23:40:21.546407 kubelet[2175]: I0514 23:40:21.546427 2175 state_mem.go:36] "Initialized new in-memory state store"
May 14 23:40:21.546968 kubelet[2175]: I0514 23:40:21.546926 2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 23:40:21.548152 kubelet[2175]: I0514 23:40:21.548128 2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 23:40:21.548152 kubelet[2175]: I0514 23:40:21.548161 2175 status_manager.go:227] "Starting to sync pod status with apiserver"
May 14 23:40:21.548226 kubelet[2175]: I0514 23:40:21.548180 2175 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 14 23:40:21.548226 kubelet[2175]: I0514 23:40:21.548188 2175 kubelet.go:2388] "Starting kubelet main sync loop"
May 14 23:40:21.548263 kubelet[2175]: E0514 23:40:21.548236 2175 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 23:40:21.633484 kubelet[2175]: E0514 23:40:21.633422 2175 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:40:21.648660 kubelet[2175]: E0514 23:40:21.648627 2175 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 14 23:40:21.733958 kubelet[2175]: E0514 23:40:21.733844 2175 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:40:21.734296 kubelet[2175]: E0514 23:40:21.734267 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms"
May 14 23:40:21.738259 kubelet[2175]: I0514 23:40:21.738232 2175 policy_none.go:49] "None policy: Start"
May 14 23:40:21.738259 kubelet[2175]: I0514 23:40:21.738262 2175 memory_manager.go:186] "Starting memorymanager" policy="None"
May 14 23:40:21.738259 kubelet[2175]: I0514 23:40:21.738275 2175 state_mem.go:35] "Initializing new in-memory state store"
May 14 23:40:21.738398 kubelet[2175]: W0514 23:40:21.738312 2175 reflector.go:569]
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused May 14 23:40:21.738398 kubelet[2175]: E0514 23:40:21.738371 2175 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" May 14 23:40:21.743103 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 23:40:21.757591 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 23:40:21.760550 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 23:40:21.768413 kubelet[2175]: I0514 23:40:21.768371 2175 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:40:21.768827 kubelet[2175]: I0514 23:40:21.768639 2175 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:40:21.768827 kubelet[2175]: I0514 23:40:21.768654 2175 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:40:21.768915 kubelet[2175]: I0514 23:40:21.768857 2175 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:40:21.770198 kubelet[2175]: E0514 23:40:21.770150 2175 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 14 23:40:21.770288 kubelet[2175]: E0514 23:40:21.770208 2175 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 23:40:21.857629 systemd[1]: Created slice kubepods-burstable-pod5b4ce354d61382acc177b61c4e0b824a.slice - libcontainer container kubepods-burstable-pod5b4ce354d61382acc177b61c4e0b824a.slice. May 14 23:40:21.867160 kubelet[2175]: E0514 23:40:21.867109 2175 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:40:21.869866 kubelet[2175]: I0514 23:40:21.869765 2175 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:40:21.869954 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 14 23:40:21.870493 kubelet[2175]: E0514 23:40:21.870282 2175 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" May 14 23:40:21.878621 kubelet[2175]: E0514 23:40:21.878599 2175 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:40:21.882120 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 14 23:40:21.884258 kubelet[2175]: E0514 23:40:21.884081 2175 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:40:21.934739 kubelet[2175]: I0514 23:40:21.934705 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:40:21.935162 kubelet[2175]: I0514 23:40:21.934976 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:40:21.935162 kubelet[2175]: I0514 23:40:21.935009 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 23:40:21.935162 kubelet[2175]: I0514 23:40:21.935035 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b4ce354d61382acc177b61c4e0b824a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5b4ce354d61382acc177b61c4e0b824a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:40:21.935162 kubelet[2175]: I0514 23:40:21.935050 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5b4ce354d61382acc177b61c4e0b824a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5b4ce354d61382acc177b61c4e0b824a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:40:21.935162 kubelet[2175]: I0514 23:40:21.935065 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b4ce354d61382acc177b61c4e0b824a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5b4ce354d61382acc177b61c4e0b824a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:40:21.935292 kubelet[2175]: I0514 23:40:21.935078 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:40:21.935292 kubelet[2175]: I0514 23:40:21.935094 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:40:21.935292 kubelet[2175]: I0514 23:40:21.935114 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:40:22.072520 kubelet[2175]: I0514 23:40:22.071829 2175 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:40:22.072520 kubelet[2175]: E0514 
23:40:22.072378 2175 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" May 14 23:40:22.134939 kubelet[2175]: E0514 23:40:22.134892 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" May 14 23:40:22.168181 kubelet[2175]: E0514 23:40:22.168156 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:22.168911 containerd[1480]: time="2025-05-14T23:40:22.168872019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5b4ce354d61382acc177b61c4e0b824a,Namespace:kube-system,Attempt:0,}" May 14 23:40:22.180221 kubelet[2175]: E0514 23:40:22.180191 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:22.180708 containerd[1480]: time="2025-05-14T23:40:22.180664740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 14 23:40:22.185171 kubelet[2175]: E0514 23:40:22.185139 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:22.185805 containerd[1480]: time="2025-05-14T23:40:22.185669371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 14 23:40:22.442912 kubelet[2175]: W0514 
23:40:22.442858 2175 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused May 14 23:40:22.442912 kubelet[2175]: E0514 23:40:22.442920 2175 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" May 14 23:40:22.460617 kubelet[2175]: W0514 23:40:22.460560 2175 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused May 14 23:40:22.460617 kubelet[2175]: E0514 23:40:22.460620 2175 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" May 14 23:40:22.474115 kubelet[2175]: I0514 23:40:22.474072 2175 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:40:22.474440 kubelet[2175]: E0514 23:40:22.474407 2175 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" May 14 23:40:22.698201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045196598.mount: Deactivated successfully. 
May 14 23:40:22.705086 containerd[1480]: time="2025-05-14T23:40:22.704693143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:40:22.705982 containerd[1480]: time="2025-05-14T23:40:22.705876270Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 14 23:40:22.708519 containerd[1480]: time="2025-05-14T23:40:22.708445298Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:40:22.710726 containerd[1480]: time="2025-05-14T23:40:22.710606701Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:40:22.711952 containerd[1480]: time="2025-05-14T23:40:22.711845115Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:40:22.712678 containerd[1480]: time="2025-05-14T23:40:22.712647582Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:40:22.713604 containerd[1480]: time="2025-05-14T23:40:22.713565937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:40:22.714580 containerd[1480]: time="2025-05-14T23:40:22.714543054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:40:22.717210 
containerd[1480]: time="2025-05-14T23:40:22.716710250Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.968294ms" May 14 23:40:22.717432 containerd[1480]: time="2025-05-14T23:40:22.717403460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.450468ms" May 14 23:40:22.720769 containerd[1480]: time="2025-05-14T23:40:22.720725499Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.987815ms" May 14 23:40:22.857590 containerd[1480]: time="2025-05-14T23:40:22.856998428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:40:22.857590 containerd[1480]: time="2025-05-14T23:40:22.857066139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:40:22.857590 containerd[1480]: time="2025-05-14T23:40:22.857077285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:22.857590 containerd[1480]: time="2025-05-14T23:40:22.857149470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:22.858308 containerd[1480]: time="2025-05-14T23:40:22.858024961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:40:22.858308 containerd[1480]: time="2025-05-14T23:40:22.858086800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:40:22.858308 containerd[1480]: time="2025-05-14T23:40:22.858097825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:22.858308 containerd[1480]: time="2025-05-14T23:40:22.858167054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:22.860793 containerd[1480]: time="2025-05-14T23:40:22.860643044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:40:22.860793 containerd[1480]: time="2025-05-14T23:40:22.860702247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:40:22.860793 containerd[1480]: time="2025-05-14T23:40:22.860713711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:22.860955 containerd[1480]: time="2025-05-14T23:40:22.860784179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:22.877949 systemd[1]: Started cri-containerd-4afa973f1cf7937af79d4ae2373d79d979537ad621d9db1ffe25bf780fb0f54e.scope - libcontainer container 4afa973f1cf7937af79d4ae2373d79d979537ad621d9db1ffe25bf780fb0f54e. 
May 14 23:40:22.879062 systemd[1]: Started cri-containerd-e05a920e323a613a5bba43e6d75ba80de930061661c77101c52c19638e782e71.scope - libcontainer container e05a920e323a613a5bba43e6d75ba80de930061661c77101c52c19638e782e71. May 14 23:40:22.884311 systemd[1]: Started cri-containerd-4ed3f56e5bbc6f32437e353492f8781ae8024e130be2d7eafec194824ed0cc87.scope - libcontainer container 4ed3f56e5bbc6f32437e353492f8781ae8024e130be2d7eafec194824ed0cc87. May 14 23:40:22.912773 containerd[1480]: time="2025-05-14T23:40:22.912734150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"4afa973f1cf7937af79d4ae2373d79d979537ad621d9db1ffe25bf780fb0f54e\"" May 14 23:40:22.913957 kubelet[2175]: E0514 23:40:22.913932 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:22.916092 containerd[1480]: time="2025-05-14T23:40:22.915857570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5b4ce354d61382acc177b61c4e0b824a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e05a920e323a613a5bba43e6d75ba80de930061661c77101c52c19638e782e71\"" May 14 23:40:22.916881 containerd[1480]: time="2025-05-14T23:40:22.916849947Z" level=info msg="CreateContainer within sandbox \"4afa973f1cf7937af79d4ae2373d79d979537ad621d9db1ffe25bf780fb0f54e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:40:22.917122 kubelet[2175]: E0514 23:40:22.917097 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:22.919433 containerd[1480]: time="2025-05-14T23:40:22.919306922Z" level=info msg="CreateContainer within sandbox 
\"e05a920e323a613a5bba43e6d75ba80de930061661c77101c52c19638e782e71\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:40:22.926555 containerd[1480]: time="2025-05-14T23:40:22.926514142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ed3f56e5bbc6f32437e353492f8781ae8024e130be2d7eafec194824ed0cc87\"" May 14 23:40:22.927521 kubelet[2175]: E0514 23:40:22.927498 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:22.929984 containerd[1480]: time="2025-05-14T23:40:22.929949553Z" level=info msg="CreateContainer within sandbox \"4ed3f56e5bbc6f32437e353492f8781ae8024e130be2d7eafec194824ed0cc87\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:40:22.936280 kubelet[2175]: E0514 23:40:22.936235 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" May 14 23:40:22.937720 containerd[1480]: time="2025-05-14T23:40:22.937671657Z" level=info msg="CreateContainer within sandbox \"4afa973f1cf7937af79d4ae2373d79d979537ad621d9db1ffe25bf780fb0f54e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1172a84fe80932290899033b870385dc66aff116ff9b75938f154515abd1c2e2\"" May 14 23:40:22.938303 containerd[1480]: time="2025-05-14T23:40:22.938272508Z" level=info msg="StartContainer for \"1172a84fe80932290899033b870385dc66aff116ff9b75938f154515abd1c2e2\"" May 14 23:40:22.943449 containerd[1480]: time="2025-05-14T23:40:22.943386436Z" level=info msg="CreateContainer within sandbox 
\"e05a920e323a613a5bba43e6d75ba80de930061661c77101c52c19638e782e71\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"50e830ee73e264202683c2b0823694b1f31ff912b8cd4bcb08931f2085079827\"" May 14 23:40:22.944284 containerd[1480]: time="2025-05-14T23:40:22.944003226Z" level=info msg="StartContainer for \"50e830ee73e264202683c2b0823694b1f31ff912b8cd4bcb08931f2085079827\"" May 14 23:40:22.949586 containerd[1480]: time="2025-05-14T23:40:22.949483593Z" level=info msg="CreateContainer within sandbox \"4ed3f56e5bbc6f32437e353492f8781ae8024e130be2d7eafec194824ed0cc87\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee895b3069601a99b4aad6f61001bbc564d65c3472ffb416cb606550f90dbe5f\"" May 14 23:40:22.950646 containerd[1480]: time="2025-05-14T23:40:22.950446089Z" level=info msg="StartContainer for \"ee895b3069601a99b4aad6f61001bbc564d65c3472ffb416cb606550f90dbe5f\"" May 14 23:40:22.969682 systemd[1]: Started cri-containerd-1172a84fe80932290899033b870385dc66aff116ff9b75938f154515abd1c2e2.scope - libcontainer container 1172a84fe80932290899033b870385dc66aff116ff9b75938f154515abd1c2e2. May 14 23:40:22.974145 systemd[1]: Started cri-containerd-50e830ee73e264202683c2b0823694b1f31ff912b8cd4bcb08931f2085079827.scope - libcontainer container 50e830ee73e264202683c2b0823694b1f31ff912b8cd4bcb08931f2085079827. May 14 23:40:22.975431 systemd[1]: Started cri-containerd-ee895b3069601a99b4aad6f61001bbc564d65c3472ffb416cb606550f90dbe5f.scope - libcontainer container ee895b3069601a99b4aad6f61001bbc564d65c3472ffb416cb606550f90dbe5f. 
May 14 23:40:23.020370 containerd[1480]: time="2025-05-14T23:40:23.020218849Z" level=info msg="StartContainer for \"1172a84fe80932290899033b870385dc66aff116ff9b75938f154515abd1c2e2\" returns successfully" May 14 23:40:23.020788 containerd[1480]: time="2025-05-14T23:40:23.020747758Z" level=info msg="StartContainer for \"ee895b3069601a99b4aad6f61001bbc564d65c3472ffb416cb606550f90dbe5f\" returns successfully" May 14 23:40:23.020984 containerd[1480]: time="2025-05-14T23:40:23.020931812Z" level=info msg="StartContainer for \"50e830ee73e264202683c2b0823694b1f31ff912b8cd4bcb08931f2085079827\" returns successfully" May 14 23:40:23.071698 kubelet[2175]: W0514 23:40:23.071595 2175 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused May 14 23:40:23.071698 kubelet[2175]: E0514 23:40:23.071664 2175 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" May 14 23:40:23.276376 kubelet[2175]: I0514 23:40:23.276274 2175 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:40:23.556828 kubelet[2175]: E0514 23:40:23.556712 2175 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:40:23.556932 kubelet[2175]: E0514 23:40:23.556865 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:23.557260 kubelet[2175]: E0514 23:40:23.557234 2175 kubelet.go:3196] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:40:23.557356 kubelet[2175]: E0514 23:40:23.557338 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:23.560659 kubelet[2175]: E0514 23:40:23.560621 2175 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:40:23.560758 kubelet[2175]: E0514 23:40:23.560736 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:24.563023 kubelet[2175]: E0514 23:40:24.562818 2175 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:40:24.563023 kubelet[2175]: E0514 23:40:24.562890 2175 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:40:24.563023 kubelet[2175]: E0514 23:40:24.562959 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:24.563023 kubelet[2175]: E0514 23:40:24.563000 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:24.563968 kubelet[2175]: E0514 23:40:24.563945 2175 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:40:24.564078 kubelet[2175]: E0514 23:40:24.564058 2175 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:24.721948 kubelet[2175]: E0514 23:40:24.721908 2175 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 23:40:24.845770 kubelet[2175]: I0514 23:40:24.845345 2175 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 23:40:24.845770 kubelet[2175]: E0514 23:40:24.845417 2175 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 23:40:24.933688 kubelet[2175]: I0514 23:40:24.933638 2175 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 23:40:24.942661 kubelet[2175]: E0514 23:40:24.942602 2175 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 14 23:40:24.942661 kubelet[2175]: I0514 23:40:24.942636 2175 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 23:40:24.944655 kubelet[2175]: E0514 23:40:24.944623 2175 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 14 23:40:24.944655 kubelet[2175]: I0514 23:40:24.944650 2175 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 23:40:24.946293 kubelet[2175]: E0514 23:40:24.946248 2175 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" May 14 23:40:25.518490 kubelet[2175]: I0514 23:40:25.518395 2175 apiserver.go:52] "Watching apiserver" May 14 23:40:25.533181 kubelet[2175]: I0514 23:40:25.533128 2175 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 23:40:26.016031 kubelet[2175]: I0514 23:40:26.015998 2175 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 23:40:26.021729 kubelet[2175]: E0514 23:40:26.021678 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:26.564751 kubelet[2175]: E0514 23:40:26.564706 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:26.963663 systemd[1]: Reload requested from client PID 2455 ('systemctl') (unit session-5.scope)... May 14 23:40:26.963680 systemd[1]: Reloading... May 14 23:40:27.050515 zram_generator::config[2499]: No configuration found. May 14 23:40:27.149833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:40:27.234712 systemd[1]: Reloading finished in 270 ms. May 14 23:40:27.253315 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:40:27.275430 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:40:27.276567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:40:27.276638 systemd[1]: kubelet.service: Consumed 1.449s CPU time, 125.3M memory peak. May 14 23:40:27.285769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 23:40:27.398998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:40:27.403364 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:40:27.445007 kubelet[2541]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:40:27.445007 kubelet[2541]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 23:40:27.445007 kubelet[2541]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:40:27.445329 kubelet[2541]: I0514 23:40:27.445059 2541 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:40:27.454507 kubelet[2541]: I0514 23:40:27.453936 2541 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 23:40:27.454507 kubelet[2541]: I0514 23:40:27.453971 2541 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:40:27.454507 kubelet[2541]: I0514 23:40:27.454241 2541 server.go:954] "Client rotation is on, will bootstrap in background" May 14 23:40:27.455647 kubelet[2541]: I0514 23:40:27.455610 2541 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 14 23:40:27.458000 kubelet[2541]: I0514 23:40:27.457962 2541 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:40:27.461279 kubelet[2541]: E0514 23:40:27.461250 2541 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:40:27.461279 kubelet[2541]: I0514 23:40:27.461283 2541 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:40:27.463974 kubelet[2541]: I0514 23:40:27.463842 2541 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 23:40:27.464129 kubelet[2541]: I0514 23:40:27.464081 2541 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:40:27.467587 kubelet[2541]: I0514 23:40:27.464115 2541 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:40:27.467587 kubelet[2541]: I0514 23:40:27.464307 2541 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:40:27.467587 kubelet[2541]: I0514 23:40:27.464316 2541 container_manager_linux.go:304] "Creating device plugin manager" May 14 23:40:27.467587 kubelet[2541]: I0514 23:40:27.464363 2541 state_mem.go:36] "Initialized new in-memory state store" May 14 23:40:27.467587 kubelet[2541]: I0514 23:40:27.464537 2541 kubelet.go:446] "Attempting 
to sync node with API server" May 14 23:40:27.468188 kubelet[2541]: I0514 23:40:27.464560 2541 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:40:27.468188 kubelet[2541]: I0514 23:40:27.464581 2541 kubelet.go:352] "Adding apiserver pod source" May 14 23:40:27.468188 kubelet[2541]: I0514 23:40:27.464590 2541 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:40:27.469061 kubelet[2541]: I0514 23:40:27.468683 2541 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:40:27.469346 kubelet[2541]: I0514 23:40:27.469324 2541 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:40:27.470485 kubelet[2541]: I0514 23:40:27.469816 2541 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 23:40:27.470485 kubelet[2541]: I0514 23:40:27.469853 2541 server.go:1287] "Started kubelet" May 14 23:40:27.471215 kubelet[2541]: I0514 23:40:27.471170 2541 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:40:27.471448 kubelet[2541]: I0514 23:40:27.471427 2541 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:40:27.472134 kubelet[2541]: I0514 23:40:27.471514 2541 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:40:27.472134 kubelet[2541]: I0514 23:40:27.472068 2541 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:40:27.473230 kubelet[2541]: I0514 23:40:27.473199 2541 server.go:490] "Adding debug handlers to kubelet server" May 14 23:40:27.475385 kubelet[2541]: I0514 23:40:27.475334 2541 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:40:27.475447 kubelet[2541]: I0514 23:40:27.475434 2541 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 
23:40:27.478665 kubelet[2541]: I0514 23:40:27.478635 2541 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:40:27.480632 kubelet[2541]: I0514 23:40:27.480321 2541 factory.go:221] Registration of the systemd container factory successfully May 14 23:40:27.480632 kubelet[2541]: I0514 23:40:27.480431 2541 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:40:27.480757 kubelet[2541]: I0514 23:40:27.480685 2541 reconciler.go:26] "Reconciler: start to sync state" May 14 23:40:27.480937 kubelet[2541]: E0514 23:40:27.480906 2541 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:40:27.484031 kubelet[2541]: I0514 23:40:27.484010 2541 factory.go:221] Registration of the containerd container factory successfully May 14 23:40:27.500908 kubelet[2541]: E0514 23:40:27.500775 2541 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:40:27.520223 kubelet[2541]: I0514 23:40:27.520105 2541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:40:27.525262 kubelet[2541]: I0514 23:40:27.525097 2541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:40:27.525262 kubelet[2541]: I0514 23:40:27.525258 2541 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 23:40:27.526479 kubelet[2541]: I0514 23:40:27.526187 2541 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 14 23:40:27.526479 kubelet[2541]: I0514 23:40:27.526377 2541 kubelet.go:2388] "Starting kubelet main sync loop" May 14 23:40:27.526686 kubelet[2541]: E0514 23:40:27.526421 2541 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:40:27.558478 kubelet[2541]: I0514 23:40:27.558348 2541 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 23:40:27.558478 kubelet[2541]: I0514 23:40:27.558479 2541 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 23:40:27.558691 kubelet[2541]: I0514 23:40:27.558503 2541 state_mem.go:36] "Initialized new in-memory state store" May 14 23:40:27.558716 kubelet[2541]: I0514 23:40:27.558702 2541 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:40:27.558738 kubelet[2541]: I0514 23:40:27.558716 2541 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:40:27.558738 kubelet[2541]: I0514 23:40:27.558738 2541 policy_none.go:49] "None policy: Start" May 14 23:40:27.558787 kubelet[2541]: I0514 23:40:27.558757 2541 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 23:40:27.558787 kubelet[2541]: I0514 23:40:27.558767 2541 state_mem.go:35] "Initializing new in-memory state store" May 14 23:40:27.559120 kubelet[2541]: I0514 23:40:27.558879 2541 state_mem.go:75] "Updated machine memory state" May 14 23:40:27.565353 kubelet[2541]: I0514 23:40:27.565321 2541 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:40:27.566200 kubelet[2541]: I0514 23:40:27.565734 2541 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:40:27.566200 kubelet[2541]: I0514 23:40:27.565801 2541 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:40:27.566200 kubelet[2541]: I0514 23:40:27.566148 2541 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:40:27.567991 kubelet[2541]: E0514 23:40:27.567904 2541 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 23:40:27.627483 kubelet[2541]: I0514 23:40:27.627423 2541 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 23:40:27.627808 kubelet[2541]: I0514 23:40:27.627438 2541 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 23:40:27.628059 kubelet[2541]: I0514 23:40:27.627492 2541 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 23:40:27.643081 kubelet[2541]: E0514 23:40:27.643023 2541 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 23:40:27.670137 kubelet[2541]: I0514 23:40:27.670110 2541 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:40:27.681591 kubelet[2541]: I0514 23:40:27.681467 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b4ce354d61382acc177b61c4e0b824a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5b4ce354d61382acc177b61c4e0b824a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:40:27.681591 kubelet[2541]: I0514 23:40:27.681528 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b4ce354d61382acc177b61c4e0b824a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5b4ce354d61382acc177b61c4e0b824a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:40:27.681591 kubelet[2541]: I0514 23:40:27.681547 2541 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:40:27.681591 kubelet[2541]: I0514 23:40:27.681565 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:40:27.681591 kubelet[2541]: I0514 23:40:27.681585 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 23:40:27.681866 kubelet[2541]: I0514 23:40:27.681603 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:40:27.681866 kubelet[2541]: I0514 23:40:27.681619 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:40:27.681866 kubelet[2541]: I0514 23:40:27.681635 2541 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:40:27.681866 kubelet[2541]: I0514 23:40:27.681649 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b4ce354d61382acc177b61c4e0b824a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5b4ce354d61382acc177b61c4e0b824a\") " pod="kube-system/kube-apiserver-localhost" May 14 23:40:27.691318 kubelet[2541]: I0514 23:40:27.691236 2541 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 14 23:40:27.691318 kubelet[2541]: I0514 23:40:27.691325 2541 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 23:40:27.943107 kubelet[2541]: E0514 23:40:27.943057 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:27.943242 kubelet[2541]: E0514 23:40:27.943116 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:27.943273 kubelet[2541]: E0514 23:40:27.943252 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:28.466858 kubelet[2541]: I0514 23:40:28.466810 2541 apiserver.go:52] "Watching apiserver" May 14 23:40:28.475577 kubelet[2541]: I0514 23:40:28.475534 2541 desired_state_of_world_populator.go:157] "Finished populating initial desired state of 
world" May 14 23:40:28.550480 kubelet[2541]: E0514 23:40:28.550125 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:28.550480 kubelet[2541]: I0514 23:40:28.550225 2541 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 23:40:28.550480 kubelet[2541]: E0514 23:40:28.550391 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:28.557140 kubelet[2541]: E0514 23:40:28.557096 2541 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 23:40:28.557265 kubelet[2541]: E0514 23:40:28.557254 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:28.573352 kubelet[2541]: I0514 23:40:28.573252 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.57323649 podStartE2EDuration="1.57323649s" podCreationTimestamp="2025-05-14 23:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:40:28.572432086 +0000 UTC m=+1.165780690" watchObservedRunningTime="2025-05-14 23:40:28.57323649 +0000 UTC m=+1.166585094" May 14 23:40:28.591960 kubelet[2541]: I0514 23:40:28.591899 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.59188236 podStartE2EDuration="2.59188236s" podCreationTimestamp="2025-05-14 23:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:40:28.584074995 +0000 UTC m=+1.177423599" watchObservedRunningTime="2025-05-14 23:40:28.59188236 +0000 UTC m=+1.185230964" May 14 23:40:28.601144 kubelet[2541]: I0514 23:40:28.601087 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6010687369999999 podStartE2EDuration="1.601068737s" podCreationTimestamp="2025-05-14 23:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:40:28.592161272 +0000 UTC m=+1.185509876" watchObservedRunningTime="2025-05-14 23:40:28.601068737 +0000 UTC m=+1.194417341" May 14 23:40:28.876329 sudo[1623]: pam_unix(sudo:session): session closed for user root May 14 23:40:28.878604 sshd[1622]: Connection closed by 10.0.0.1 port 48882 May 14 23:40:28.878432 sshd-session[1617]: pam_unix(sshd:session): session closed for user core May 14 23:40:28.882525 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:48882.service: Deactivated successfully. May 14 23:40:28.885820 systemd[1]: session-5.scope: Deactivated successfully. May 14 23:40:28.886052 systemd[1]: session-5.scope: Consumed 6.919s CPU time, 223.4M memory peak. May 14 23:40:28.887485 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. May 14 23:40:28.888449 systemd-logind[1462]: Removed session 5. 
May 14 23:40:29.551584 kubelet[2541]: E0514 23:40:29.551548 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:29.552231 kubelet[2541]: E0514 23:40:29.552209 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:29.552494 kubelet[2541]: E0514 23:40:29.552476 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:30.553808 kubelet[2541]: E0514 23:40:30.553657 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:31.997836 kubelet[2541]: I0514 23:40:31.997804 2541 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:40:31.998694 containerd[1480]: time="2025-05-14T23:40:31.998563347Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 23:40:31.999021 kubelet[2541]: I0514 23:40:31.998735 2541 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:40:33.019249 systemd[1]: Created slice kubepods-besteffort-podf70d3a1a_c8c4_4970_b4d1_4d6e947b6765.slice - libcontainer container kubepods-besteffort-podf70d3a1a_c8c4_4970_b4d1_4d6e947b6765.slice. May 14 23:40:33.038667 systemd[1]: Created slice kubepods-burstable-pod0933dee6_eed8_4981_a4d3_e80341f5aa29.slice - libcontainer container kubepods-burstable-pod0933dee6_eed8_4981_a4d3_e80341f5aa29.slice. 
May 14 23:40:33.115995 kubelet[2541]: I0514 23:40:33.115864 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f70d3a1a-c8c4-4970-b4d1-4d6e947b6765-lib-modules\") pod \"kube-proxy-j4p2q\" (UID: \"f70d3a1a-c8c4-4970-b4d1-4d6e947b6765\") " pod="kube-system/kube-proxy-j4p2q" May 14 23:40:33.115995 kubelet[2541]: I0514 23:40:33.115906 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0933dee6-eed8-4981-a4d3-e80341f5aa29-run\") pod \"kube-flannel-ds-jsc9w\" (UID: \"0933dee6-eed8-4981-a4d3-e80341f5aa29\") " pod="kube-flannel/kube-flannel-ds-jsc9w" May 14 23:40:33.115995 kubelet[2541]: I0514 23:40:33.115922 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/0933dee6-eed8-4981-a4d3-e80341f5aa29-cni\") pod \"kube-flannel-ds-jsc9w\" (UID: \"0933dee6-eed8-4981-a4d3-e80341f5aa29\") " pod="kube-flannel/kube-flannel-ds-jsc9w" May 14 23:40:33.115995 kubelet[2541]: I0514 23:40:33.115941 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srvp5\" (UniqueName: \"kubernetes.io/projected/f70d3a1a-c8c4-4970-b4d1-4d6e947b6765-kube-api-access-srvp5\") pod \"kube-proxy-j4p2q\" (UID: \"f70d3a1a-c8c4-4970-b4d1-4d6e947b6765\") " pod="kube-system/kube-proxy-j4p2q" May 14 23:40:33.115995 kubelet[2541]: I0514 23:40:33.115967 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/0933dee6-eed8-4981-a4d3-e80341f5aa29-cni-plugin\") pod \"kube-flannel-ds-jsc9w\" (UID: \"0933dee6-eed8-4981-a4d3-e80341f5aa29\") " pod="kube-flannel/kube-flannel-ds-jsc9w" May 14 23:40:33.116500 kubelet[2541]: I0514 23:40:33.115982 2541 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/0933dee6-eed8-4981-a4d3-e80341f5aa29-flannel-cfg\") pod \"kube-flannel-ds-jsc9w\" (UID: \"0933dee6-eed8-4981-a4d3-e80341f5aa29\") " pod="kube-flannel/kube-flannel-ds-jsc9w" May 14 23:40:33.116500 kubelet[2541]: I0514 23:40:33.115999 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0933dee6-eed8-4981-a4d3-e80341f5aa29-xtables-lock\") pod \"kube-flannel-ds-jsc9w\" (UID: \"0933dee6-eed8-4981-a4d3-e80341f5aa29\") " pod="kube-flannel/kube-flannel-ds-jsc9w" May 14 23:40:33.116500 kubelet[2541]: I0514 23:40:33.116014 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7mv2\" (UniqueName: \"kubernetes.io/projected/0933dee6-eed8-4981-a4d3-e80341f5aa29-kube-api-access-d7mv2\") pod \"kube-flannel-ds-jsc9w\" (UID: \"0933dee6-eed8-4981-a4d3-e80341f5aa29\") " pod="kube-flannel/kube-flannel-ds-jsc9w" May 14 23:40:33.116500 kubelet[2541]: I0514 23:40:33.116037 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f70d3a1a-c8c4-4970-b4d1-4d6e947b6765-kube-proxy\") pod \"kube-proxy-j4p2q\" (UID: \"f70d3a1a-c8c4-4970-b4d1-4d6e947b6765\") " pod="kube-system/kube-proxy-j4p2q" May 14 23:40:33.116500 kubelet[2541]: I0514 23:40:33.116056 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f70d3a1a-c8c4-4970-b4d1-4d6e947b6765-xtables-lock\") pod \"kube-proxy-j4p2q\" (UID: \"f70d3a1a-c8c4-4970-b4d1-4d6e947b6765\") " pod="kube-system/kube-proxy-j4p2q" May 14 23:40:33.336799 kubelet[2541]: E0514 23:40:33.336661 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:33.337321 containerd[1480]: time="2025-05-14T23:40:33.337233931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4p2q,Uid:f70d3a1a-c8c4-4970-b4d1-4d6e947b6765,Namespace:kube-system,Attempt:0,}" May 14 23:40:33.345022 kubelet[2541]: E0514 23:40:33.344930 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:33.345888 containerd[1480]: time="2025-05-14T23:40:33.345847855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jsc9w,Uid:0933dee6-eed8-4981-a4d3-e80341f5aa29,Namespace:kube-flannel,Attempt:0,}" May 14 23:40:33.354691 containerd[1480]: time="2025-05-14T23:40:33.354612283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:40:33.354691 containerd[1480]: time="2025-05-14T23:40:33.354673923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:40:33.354808 containerd[1480]: time="2025-05-14T23:40:33.354686275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:33.354829 containerd[1480]: time="2025-05-14T23:40:33.354809555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:33.371675 systemd[1]: Started cri-containerd-2dfbd7ff8902e7dc90e4779b7084f81defe6ecfbfc78b5ce58da3219ba8e7061.scope - libcontainer container 2dfbd7ff8902e7dc90e4779b7084f81defe6ecfbfc78b5ce58da3219ba8e7061. May 14 23:40:33.374288 containerd[1480]: time="2025-05-14T23:40:33.373387893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:40:33.374288 containerd[1480]: time="2025-05-14T23:40:33.373999339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:40:33.374288 containerd[1480]: time="2025-05-14T23:40:33.374013850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:33.375100 containerd[1480]: time="2025-05-14T23:40:33.374108029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:33.391696 systemd[1]: Started cri-containerd-2b04d156cc4f29c5276e7cd68788eb061dd4b5d1e05df4a15b89d5b939a68f32.scope - libcontainer container 2b04d156cc4f29c5276e7cd68788eb061dd4b5d1e05df4a15b89d5b939a68f32. May 14 23:40:33.401886 containerd[1480]: time="2025-05-14T23:40:33.401835746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4p2q,Uid:f70d3a1a-c8c4-4970-b4d1-4d6e947b6765,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dfbd7ff8902e7dc90e4779b7084f81defe6ecfbfc78b5ce58da3219ba8e7061\"" May 14 23:40:33.402981 kubelet[2541]: E0514 23:40:33.402747 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:33.405040 containerd[1480]: time="2025-05-14T23:40:33.405002503Z" level=info msg="CreateContainer within sandbox \"2dfbd7ff8902e7dc90e4779b7084f81defe6ecfbfc78b5ce58da3219ba8e7061\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 23:40:33.418948 containerd[1480]: time="2025-05-14T23:40:33.418862045Z" level=info msg="CreateContainer within sandbox \"2dfbd7ff8902e7dc90e4779b7084f81defe6ecfbfc78b5ce58da3219ba8e7061\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"063d9578716305819c57bd6536ba9d9d51bf8f030964efae7d1592439b5175ac\"" May 14 23:40:33.420866 containerd[1480]: time="2025-05-14T23:40:33.419358005Z" level=info msg="StartContainer for \"063d9578716305819c57bd6536ba9d9d51bf8f030964efae7d1592439b5175ac\"" May 14 23:40:33.424794 containerd[1480]: time="2025-05-14T23:40:33.424765158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jsc9w,Uid:0933dee6-eed8-4981-a4d3-e80341f5aa29,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"2b04d156cc4f29c5276e7cd68788eb061dd4b5d1e05df4a15b89d5b939a68f32\"" May 14 23:40:33.428042 kubelet[2541]: E0514 23:40:33.427981 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:33.429293 containerd[1480]: time="2025-05-14T23:40:33.429214048Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 14 23:40:33.452622 systemd[1]: Started cri-containerd-063d9578716305819c57bd6536ba9d9d51bf8f030964efae7d1592439b5175ac.scope - libcontainer container 063d9578716305819c57bd6536ba9d9d51bf8f030964efae7d1592439b5175ac. May 14 23:40:33.479762 containerd[1480]: time="2025-05-14T23:40:33.479714438Z" level=info msg="StartContainer for \"063d9578716305819c57bd6536ba9d9d51bf8f030964efae7d1592439b5175ac\" returns successfully" May 14 23:40:33.567479 kubelet[2541]: E0514 23:40:33.565715 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:34.611475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1491265679.mount: Deactivated successfully. 
May 14 23:40:34.638965 containerd[1480]: time="2025-05-14T23:40:34.638914543Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:40:34.639587 containerd[1480]: time="2025-05-14T23:40:34.639542124Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" May 14 23:40:34.640192 containerd[1480]: time="2025-05-14T23:40:34.640150796Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:40:34.642886 containerd[1480]: time="2025-05-14T23:40:34.642849364Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:40:34.643785 containerd[1480]: time="2025-05-14T23:40:34.643752538Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.214501193s" May 14 23:40:34.643824 containerd[1480]: time="2025-05-14T23:40:34.643784879Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 14 23:40:34.645744 containerd[1480]: time="2025-05-14T23:40:34.645713712Z" level=info msg="CreateContainer within sandbox \"2b04d156cc4f29c5276e7cd68788eb061dd4b5d1e05df4a15b89d5b939a68f32\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 14 23:40:34.655344 containerd[1480]: 
time="2025-05-14T23:40:34.655236355Z" level=info msg="CreateContainer within sandbox \"2b04d156cc4f29c5276e7cd68788eb061dd4b5d1e05df4a15b89d5b939a68f32\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"1ef4bd0ad3ae576db4e3a8a016620ef4fb8f58314b9497b1b1bf1e1651c6b1f6\"" May 14 23:40:34.655382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369756232.mount: Deactivated successfully. May 14 23:40:34.656119 containerd[1480]: time="2025-05-14T23:40:34.655981304Z" level=info msg="StartContainer for \"1ef4bd0ad3ae576db4e3a8a016620ef4fb8f58314b9497b1b1bf1e1651c6b1f6\"" May 14 23:40:34.696674 systemd[1]: Started cri-containerd-1ef4bd0ad3ae576db4e3a8a016620ef4fb8f58314b9497b1b1bf1e1651c6b1f6.scope - libcontainer container 1ef4bd0ad3ae576db4e3a8a016620ef4fb8f58314b9497b1b1bf1e1651c6b1f6. May 14 23:40:34.723129 containerd[1480]: time="2025-05-14T23:40:34.723024489Z" level=info msg="StartContainer for \"1ef4bd0ad3ae576db4e3a8a016620ef4fb8f58314b9497b1b1bf1e1651c6b1f6\" returns successfully" May 14 23:40:34.724446 systemd[1]: cri-containerd-1ef4bd0ad3ae576db4e3a8a016620ef4fb8f58314b9497b1b1bf1e1651c6b1f6.scope: Deactivated successfully. 
May 14 23:40:34.777540 containerd[1480]: time="2025-05-14T23:40:34.777480404Z" level=info msg="shim disconnected" id=1ef4bd0ad3ae576db4e3a8a016620ef4fb8f58314b9497b1b1bf1e1651c6b1f6 namespace=k8s.io May 14 23:40:34.777540 containerd[1480]: time="2025-05-14T23:40:34.777535931Z" level=warning msg="cleaning up after shim disconnected" id=1ef4bd0ad3ae576db4e3a8a016620ef4fb8f58314b9497b1b1bf1e1651c6b1f6 namespace=k8s.io May 14 23:40:34.777540 containerd[1480]: time="2025-05-14T23:40:34.777569750Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:40:35.577498 kubelet[2541]: E0514 23:40:35.577423 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:35.579532 containerd[1480]: time="2025-05-14T23:40:35.579485221Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 14 23:40:35.589967 kubelet[2541]: I0514 23:40:35.589754 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j4p2q" podStartSLOduration=3.589734771 podStartE2EDuration="3.589734771s" podCreationTimestamp="2025-05-14 23:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:40:33.580568912 +0000 UTC m=+6.173917516" watchObservedRunningTime="2025-05-14 23:40:35.589734771 +0000 UTC m=+8.183083415" May 14 23:40:35.938148 kubelet[2541]: E0514 23:40:35.937964 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:36.579060 kubelet[2541]: E0514 23:40:36.579029 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:37.168384 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount172812937.mount: Deactivated successfully. May 14 23:40:37.580382 kubelet[2541]: E0514 23:40:37.580244 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:37.722777 containerd[1480]: time="2025-05-14T23:40:37.722540563Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:40:37.723758 containerd[1480]: time="2025-05-14T23:40:37.723722734Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260" May 14 23:40:37.724552 containerd[1480]: time="2025-05-14T23:40:37.724521736Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:40:37.727869 containerd[1480]: time="2025-05-14T23:40:37.727833446Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:40:37.729284 containerd[1480]: time="2025-05-14T23:40:37.729251980Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.14972874s" May 14 23:40:37.729374 containerd[1480]: time="2025-05-14T23:40:37.729359686Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 14 23:40:37.731833 containerd[1480]: 
time="2025-05-14T23:40:37.731805388Z" level=info msg="CreateContainer within sandbox \"2b04d156cc4f29c5276e7cd68788eb061dd4b5d1e05df4a15b89d5b939a68f32\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 23:40:37.741613 containerd[1480]: time="2025-05-14T23:40:37.741569884Z" level=info msg="CreateContainer within sandbox \"2b04d156cc4f29c5276e7cd68788eb061dd4b5d1e05df4a15b89d5b939a68f32\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"89628bb73ba9f055b34cb34d22e7d58880b8d4040343015c372a07634393ffd0\"" May 14 23:40:37.742142 containerd[1480]: time="2025-05-14T23:40:37.741905037Z" level=info msg="StartContainer for \"89628bb73ba9f055b34cb34d22e7d58880b8d4040343015c372a07634393ffd0\"" May 14 23:40:37.771699 systemd[1]: Started cri-containerd-89628bb73ba9f055b34cb34d22e7d58880b8d4040343015c372a07634393ffd0.scope - libcontainer container 89628bb73ba9f055b34cb34d22e7d58880b8d4040343015c372a07634393ffd0. May 14 23:40:37.795703 containerd[1480]: time="2025-05-14T23:40:37.795584497Z" level=info msg="StartContainer for \"89628bb73ba9f055b34cb34d22e7d58880b8d4040343015c372a07634393ffd0\" returns successfully" May 14 23:40:37.799660 systemd[1]: cri-containerd-89628bb73ba9f055b34cb34d22e7d58880b8d4040343015c372a07634393ffd0.scope: Deactivated successfully. 
May 14 23:40:37.819178 containerd[1480]: time="2025-05-14T23:40:37.819121613Z" level=info msg="shim disconnected" id=89628bb73ba9f055b34cb34d22e7d58880b8d4040343015c372a07634393ffd0 namespace=k8s.io May 14 23:40:37.819178 containerd[1480]: time="2025-05-14T23:40:37.819174666Z" level=warning msg="cleaning up after shim disconnected" id=89628bb73ba9f055b34cb34d22e7d58880b8d4040343015c372a07634393ffd0 namespace=k8s.io May 14 23:40:37.819375 containerd[1480]: time="2025-05-14T23:40:37.819185341Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:40:37.844941 kubelet[2541]: I0514 23:40:37.844758 2541 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 23:40:37.873757 systemd[1]: Created slice kubepods-burstable-pod4609a764_5c00_4d78_b3ef_760ecba86972.slice - libcontainer container kubepods-burstable-pod4609a764_5c00_4d78_b3ef_760ecba86972.slice. May 14 23:40:37.879844 systemd[1]: Created slice kubepods-burstable-pod63cc6ff3_370b_4952_8196_81ec1bca61d4.slice - libcontainer container kubepods-burstable-pod63cc6ff3_370b_4952_8196_81ec1bca61d4.slice. 
May 14 23:40:37.951610 kubelet[2541]: I0514 23:40:37.951558 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbqr\" (UniqueName: \"kubernetes.io/projected/63cc6ff3-370b-4952-8196-81ec1bca61d4-kube-api-access-lvbqr\") pod \"coredns-668d6bf9bc-57ltg\" (UID: \"63cc6ff3-370b-4952-8196-81ec1bca61d4\") " pod="kube-system/coredns-668d6bf9bc-57ltg" May 14 23:40:37.951610 kubelet[2541]: I0514 23:40:37.951611 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5l2l\" (UniqueName: \"kubernetes.io/projected/4609a764-5c00-4d78-b3ef-760ecba86972-kube-api-access-z5l2l\") pod \"coredns-668d6bf9bc-tl544\" (UID: \"4609a764-5c00-4d78-b3ef-760ecba86972\") " pod="kube-system/coredns-668d6bf9bc-tl544" May 14 23:40:37.951848 kubelet[2541]: I0514 23:40:37.951631 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4609a764-5c00-4d78-b3ef-760ecba86972-config-volume\") pod \"coredns-668d6bf9bc-tl544\" (UID: \"4609a764-5c00-4d78-b3ef-760ecba86972\") " pod="kube-system/coredns-668d6bf9bc-tl544" May 14 23:40:37.951848 kubelet[2541]: I0514 23:40:37.951653 2541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63cc6ff3-370b-4952-8196-81ec1bca61d4-config-volume\") pod \"coredns-668d6bf9bc-57ltg\" (UID: \"63cc6ff3-370b-4952-8196-81ec1bca61d4\") " pod="kube-system/coredns-668d6bf9bc-57ltg" May 14 23:40:38.103767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89628bb73ba9f055b34cb34d22e7d58880b8d4040343015c372a07634393ffd0-rootfs.mount: Deactivated successfully. 
May 14 23:40:38.179259 kubelet[2541]: E0514 23:40:38.179217 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:38.179864 containerd[1480]: time="2025-05-14T23:40:38.179793181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tl544,Uid:4609a764-5c00-4d78-b3ef-760ecba86972,Namespace:kube-system,Attempt:0,}" May 14 23:40:38.182585 kubelet[2541]: E0514 23:40:38.182555 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:38.183104 containerd[1480]: time="2025-05-14T23:40:38.183071010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57ltg,Uid:63cc6ff3-370b-4952-8196-81ec1bca61d4,Namespace:kube-system,Attempt:0,}" May 14 23:40:38.288383 containerd[1480]: time="2025-05-14T23:40:38.288335693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tl544,Uid:4609a764-5c00-4d78-b3ef-760ecba86972,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31c8b6d68978b6ce7c8c7dbbe3411ffc89306f7cbc8119394c542569d3a041b5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 23:40:38.288722 kubelet[2541]: E0514 23:40:38.288685 2541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c8b6d68978b6ce7c8c7dbbe3411ffc89306f7cbc8119394c542569d3a041b5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 23:40:38.288791 kubelet[2541]: E0514 23:40:38.288752 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"31c8b6d68978b6ce7c8c7dbbe3411ffc89306f7cbc8119394c542569d3a041b5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-tl544" May 14 23:40:38.288791 kubelet[2541]: E0514 23:40:38.288772 2541 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c8b6d68978b6ce7c8c7dbbe3411ffc89306f7cbc8119394c542569d3a041b5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-tl544" May 14 23:40:38.288883 kubelet[2541]: E0514 23:40:38.288855 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tl544_kube-system(4609a764-5c00-4d78-b3ef-760ecba86972)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tl544_kube-system(4609a764-5c00-4d78-b3ef-760ecba86972)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31c8b6d68978b6ce7c8c7dbbe3411ffc89306f7cbc8119394c542569d3a041b5\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-tl544" podUID="4609a764-5c00-4d78-b3ef-760ecba86972" May 14 23:40:38.290904 containerd[1480]: time="2025-05-14T23:40:38.290869589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57ltg,Uid:63cc6ff3-370b-4952-8196-81ec1bca61d4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7285c796b6736c7fb8e3a5208d6629d3aadbc9fefcff38fc7d4c303fdc10e9b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 23:40:38.291192 kubelet[2541]: E0514 
23:40:38.291158 2541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7285c796b6736c7fb8e3a5208d6629d3aadbc9fefcff38fc7d4c303fdc10e9b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 23:40:38.291256 kubelet[2541]: E0514 23:40:38.291204 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7285c796b6736c7fb8e3a5208d6629d3aadbc9fefcff38fc7d4c303fdc10e9b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-57ltg" May 14 23:40:38.291256 kubelet[2541]: E0514 23:40:38.291220 2541 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7285c796b6736c7fb8e3a5208d6629d3aadbc9fefcff38fc7d4c303fdc10e9b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-57ltg" May 14 23:40:38.291307 kubelet[2541]: E0514 23:40:38.291264 2541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-57ltg_kube-system(63cc6ff3-370b-4952-8196-81ec1bca61d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-57ltg_kube-system(63cc6ff3-370b-4952-8196-81ec1bca61d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7285c796b6736c7fb8e3a5208d6629d3aadbc9fefcff38fc7d4c303fdc10e9b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-57ltg" podUID="63cc6ff3-370b-4952-8196-81ec1bca61d4" May 14 23:40:38.583037 
kubelet[2541]: E0514 23:40:38.582946 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:38.589643 containerd[1480]: time="2025-05-14T23:40:38.589588172Z" level=info msg="CreateContainer within sandbox \"2b04d156cc4f29c5276e7cd68788eb061dd4b5d1e05df4a15b89d5b939a68f32\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 14 23:40:38.610958 containerd[1480]: time="2025-05-14T23:40:38.610896621Z" level=info msg="CreateContainer within sandbox \"2b04d156cc4f29c5276e7cd68788eb061dd4b5d1e05df4a15b89d5b939a68f32\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a41040dfa383088b9575f5990c21f33e8b2d46e78d470449dff4dc38aa489852\"" May 14 23:40:38.611764 containerd[1480]: time="2025-05-14T23:40:38.611701365Z" level=info msg="StartContainer for \"a41040dfa383088b9575f5990c21f33e8b2d46e78d470449dff4dc38aa489852\"" May 14 23:40:38.635672 systemd[1]: Started cri-containerd-a41040dfa383088b9575f5990c21f33e8b2d46e78d470449dff4dc38aa489852.scope - libcontainer container a41040dfa383088b9575f5990c21f33e8b2d46e78d470449dff4dc38aa489852. May 14 23:40:38.662214 containerd[1480]: time="2025-05-14T23:40:38.662163360Z" level=info msg="StartContainer for \"a41040dfa383088b9575f5990c21f33e8b2d46e78d470449dff4dc38aa489852\" returns successfully" May 14 23:40:38.859540 kubelet[2541]: E0514 23:40:38.859162 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:39.102947 systemd[1]: run-netns-cni\x2d81d21ba7\x2d7955\x2dd828\x2dd38e\x2dc02b3ac153ea.mount: Deactivated successfully. May 14 23:40:39.103039 systemd[1]: run-netns-cni\x2df58685e0\x2d0e92\x2dbf73\x2d0291\x2d17cc68346c6d.mount: Deactivated successfully. 
May 14 23:40:39.103095 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7285c796b6736c7fb8e3a5208d6629d3aadbc9fefcff38fc7d4c303fdc10e9b-shm.mount: Deactivated successfully. May 14 23:40:39.103144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31c8b6d68978b6ce7c8c7dbbe3411ffc89306f7cbc8119394c542569d3a041b5-shm.mount: Deactivated successfully. May 14 23:40:39.356552 kubelet[2541]: E0514 23:40:39.356498 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:39.586148 kubelet[2541]: E0514 23:40:39.586097 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:39.586935 kubelet[2541]: E0514 23:40:39.586750 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:39.742612 systemd-networkd[1406]: flannel.1: Link UP May 14 23:40:39.742618 systemd-networkd[1406]: flannel.1: Gained carrier May 14 23:40:40.105169 update_engine[1464]: I20250514 23:40:40.104996 1464 update_attempter.cc:509] Updating boot flags... 
May 14 23:40:40.141448 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (3198) May 14 23:40:40.176732 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (3202) May 14 23:40:40.205481 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (3202) May 14 23:40:40.588388 kubelet[2541]: E0514 23:40:40.588359 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:41.750598 systemd-networkd[1406]: flannel.1: Gained IPv6LL May 14 23:40:50.527847 kubelet[2541]: E0514 23:40:50.527800 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:50.528259 containerd[1480]: time="2025-05-14T23:40:50.528205398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57ltg,Uid:63cc6ff3-370b-4952-8196-81ec1bca61d4,Namespace:kube-system,Attempt:0,}" May 14 23:40:50.555810 systemd-networkd[1406]: cni0: Link UP May 14 23:40:50.555818 systemd-networkd[1406]: cni0: Gained carrier May 14 23:40:50.559503 systemd-networkd[1406]: cni0: Lost carrier May 14 23:40:50.562014 systemd-networkd[1406]: vethc1c9923e: Link UP May 14 23:40:50.565886 kernel: cni0: port 1(vethc1c9923e) entered blocking state May 14 23:40:50.565953 kernel: cni0: port 1(vethc1c9923e) entered disabled state May 14 23:40:50.565973 kernel: vethc1c9923e: entered allmulticast mode May 14 23:40:50.565992 kernel: vethc1c9923e: entered promiscuous mode May 14 23:40:50.566959 kernel: cni0: port 1(vethc1c9923e) entered blocking state May 14 23:40:50.567898 kernel: cni0: port 1(vethc1c9923e) entered forwarding state May 14 23:40:50.577933 kernel: cni0: port 1(vethc1c9923e) entered disabled state May 14 
23:40:50.583991 kernel: cni0: port 1(vethc1c9923e) entered blocking state May 14 23:40:50.584206 kernel: cni0: port 1(vethc1c9923e) entered forwarding state May 14 23:40:50.584089 systemd-networkd[1406]: vethc1c9923e: Gained carrier May 14 23:40:50.584339 systemd-networkd[1406]: cni0: Gained carrier May 14 23:40:50.586815 containerd[1480]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} May 14 23:40:50.586815 containerd[1480]: delegateAdd: netconf sent to delegate plugin: May 14 23:40:50.605962 containerd[1480]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-14T23:40:50.605863166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:40:50.605962 containerd[1480]: time="2025-05-14T23:40:50.605923992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:40:50.605962 containerd[1480]: time="2025-05-14T23:40:50.605940349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:50.606334 containerd[1480]: time="2025-05-14T23:40:50.606030250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:50.627643 systemd[1]: Started cri-containerd-a1ff8db96133629d3128f83a97d9849e1721a6171053543035356214aa35027b.scope - libcontainer container a1ff8db96133629d3128f83a97d9849e1721a6171053543035356214aa35027b. May 14 23:40:50.638771 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:40:50.656305 containerd[1480]: time="2025-05-14T23:40:50.656219209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57ltg,Uid:63cc6ff3-370b-4952-8196-81ec1bca61d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1ff8db96133629d3128f83a97d9849e1721a6171053543035356214aa35027b\"" May 14 23:40:50.657183 kubelet[2541]: E0514 23:40:50.657141 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:50.660080 containerd[1480]: time="2025-05-14T23:40:50.659970002Z" level=info msg="CreateContainer within sandbox \"a1ff8db96133629d3128f83a97d9849e1721a6171053543035356214aa35027b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:40:50.683639 containerd[1480]: time="2025-05-14T23:40:50.683596877Z" level=info msg="CreateContainer within sandbox \"a1ff8db96133629d3128f83a97d9849e1721a6171053543035356214aa35027b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b873a858b236b5866700dc16d227ac231c1dd1678313ad20340ebef60af805c0\"" May 14 23:40:50.684418 containerd[1480]: time="2025-05-14T23:40:50.684389546Z" level=info msg="StartContainer for \"b873a858b236b5866700dc16d227ac231c1dd1678313ad20340ebef60af805c0\"" May 14 23:40:50.711723 systemd[1]: Started cri-containerd-b873a858b236b5866700dc16d227ac231c1dd1678313ad20340ebef60af805c0.scope - libcontainer container b873a858b236b5866700dc16d227ac231c1dd1678313ad20340ebef60af805c0. 
May 14 23:40:50.738975 containerd[1480]: time="2025-05-14T23:40:50.738923131Z" level=info msg="StartContainer for \"b873a858b236b5866700dc16d227ac231c1dd1678313ad20340ebef60af805c0\" returns successfully" May 14 23:40:51.527206 kubelet[2541]: E0514 23:40:51.527165 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:51.527622 containerd[1480]: time="2025-05-14T23:40:51.527585745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tl544,Uid:4609a764-5c00-4d78-b3ef-760ecba86972,Namespace:kube-system,Attempt:0,}" May 14 23:40:51.544002 systemd-networkd[1406]: vethdffd7d13: Link UP May 14 23:40:51.546148 kernel: cni0: port 2(vethdffd7d13) entered blocking state May 14 23:40:51.546213 kernel: cni0: port 2(vethdffd7d13) entered disabled state May 14 23:40:51.546247 kernel: vethdffd7d13: entered allmulticast mode May 14 23:40:51.546799 kernel: vethdffd7d13: entered promiscuous mode May 14 23:40:51.547601 kernel: cni0: port 2(vethdffd7d13) entered blocking state May 14 23:40:51.548971 kernel: cni0: port 2(vethdffd7d13) entered forwarding state May 14 23:40:51.553467 systemd-networkd[1406]: vethdffd7d13: Gained carrier May 14 23:40:51.556935 containerd[1480]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001c938), "name":"cbr0", "type":"bridge"} May 14 23:40:51.556935 containerd[1480]: delegateAdd: netconf sent to delegate plugin: May 14 23:40:51.582190 containerd[1480]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-14T23:40:51.582072752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:40:51.582190 containerd[1480]: time="2025-05-14T23:40:51.582127701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:40:51.582190 containerd[1480]: time="2025-05-14T23:40:51.582138539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:51.583856 containerd[1480]: time="2025-05-14T23:40:51.582601566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:40:51.609189 kubelet[2541]: E0514 23:40:51.609154 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:40:51.609639 systemd[1]: Started cri-containerd-8cb418056051b53be938986f67af366c381dde23b805eb913d44f60e4065ff43.scope - libcontainer container 8cb418056051b53be938986f67af366c381dde23b805eb913d44f60e4065ff43. 
May 14 23:40:51.622817 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 23:40:51.639839 containerd[1480]: time="2025-05-14T23:40:51.639759074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tl544,Uid:4609a764-5c00-4d78-b3ef-760ecba86972,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cb418056051b53be938986f67af366c381dde23b805eb913d44f60e4065ff43\""
May 14 23:40:51.650570 containerd[1480]: time="2025-05-14T23:40:51.645113394Z" level=info msg="CreateContainer within sandbox \"8cb418056051b53be938986f67af366c381dde23b805eb913d44f60e4065ff43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:40:51.662542 kubelet[2541]: E0514 23:40:51.640958 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:40:51.664427 kubelet[2541]: I0514 23:40:51.664365 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-57ltg" podStartSLOduration=18.664342035 podStartE2EDuration="18.664342035s" podCreationTimestamp="2025-05-14 23:40:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:40:51.663924919 +0000 UTC m=+24.257273563" watchObservedRunningTime="2025-05-14 23:40:51.664342035 +0000 UTC m=+24.257690639"
May 14 23:40:51.665339 kubelet[2541]: I0514 23:40:51.665292 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-jsc9w" podStartSLOduration=14.363590219 podStartE2EDuration="18.664955431s" podCreationTimestamp="2025-05-14 23:40:33 +0000 UTC" firstStartedPulling="2025-05-14 23:40:33.428816665 +0000 UTC m=+6.022165269" lastFinishedPulling="2025-05-14 23:40:37.730181877 +0000 UTC m=+10.323530481" observedRunningTime="2025-05-14 23:40:39.598284428 +0000 UTC m=+12.191632992" watchObservedRunningTime="2025-05-14 23:40:51.664955431 +0000 UTC m=+24.258304035"
May 14 23:40:51.677992 containerd[1480]: time="2025-05-14T23:40:51.677949609Z" level=info msg="CreateContainer within sandbox \"8cb418056051b53be938986f67af366c381dde23b805eb913d44f60e4065ff43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"328fbb2812e38ddcebbc5ef12e25276b14c630832ed511f57d68b5ba4c0af2e0\""
May 14 23:40:51.679947 containerd[1480]: time="2025-05-14T23:40:51.679808234Z" level=info msg="StartContainer for \"328fbb2812e38ddcebbc5ef12e25276b14c630832ed511f57d68b5ba4c0af2e0\""
May 14 23:40:51.710689 systemd[1]: Started cri-containerd-328fbb2812e38ddcebbc5ef12e25276b14c630832ed511f57d68b5ba4c0af2e0.scope - libcontainer container 328fbb2812e38ddcebbc5ef12e25276b14c630832ed511f57d68b5ba4c0af2e0.
May 14 23:40:51.733963 containerd[1480]: time="2025-05-14T23:40:51.733783385Z" level=info msg="StartContainer for \"328fbb2812e38ddcebbc5ef12e25276b14c630832ed511f57d68b5ba4c0af2e0\" returns successfully"
May 14 23:40:51.798576 systemd-networkd[1406]: cni0: Gained IPv6LL
May 14 23:40:51.990640 systemd-networkd[1406]: vethc1c9923e: Gained IPv6LL
May 14 23:40:51.999948 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:36698.service - OpenSSH per-connection server daemon (10.0.0.1:36698).
May 14 23:40:52.053276 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 36698 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:40:52.054495 sshd-session[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:52.059532 systemd-logind[1462]: New session 6 of user core.
May 14 23:40:52.076683 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 23:40:52.202013 sshd[3500]: Connection closed by 10.0.0.1 port 36698
May 14 23:40:52.204450 sshd-session[3498]: pam_unix(sshd:session): session closed for user core
May 14 23:40:52.207979 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:36698.service: Deactivated successfully.
May 14 23:40:52.210036 systemd[1]: session-6.scope: Deactivated successfully.
May 14 23:40:52.210884 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit.
May 14 23:40:52.211983 systemd-logind[1462]: Removed session 6.
May 14 23:40:52.612997 kubelet[2541]: E0514 23:40:52.612593 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:40:52.612997 kubelet[2541]: E0514 23:40:52.612694 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:40:52.632584 systemd-networkd[1406]: vethdffd7d13: Gained IPv6LL
May 14 23:40:53.614328 kubelet[2541]: E0514 23:40:53.614285 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:40:57.216379 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:43018.service - OpenSSH per-connection server daemon (10.0.0.1:43018).
May 14 23:40:57.261722 sshd[3538]: Accepted publickey for core from 10.0.0.1 port 43018 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:40:57.263082 sshd-session[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:40:57.268633 systemd-logind[1462]: New session 7 of user core.
May 14 23:40:57.280663 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 23:40:57.406395 sshd[3540]: Connection closed by 10.0.0.1 port 43018
May 14 23:40:57.406869 sshd-session[3538]: pam_unix(sshd:session): session closed for user core
May 14 23:40:57.410190 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit.
May 14 23:40:57.410531 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:43018.service: Deactivated successfully.
May 14 23:40:57.412132 systemd[1]: session-7.scope: Deactivated successfully.
May 14 23:40:57.413189 systemd-logind[1462]: Removed session 7.
May 14 23:41:02.421598 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:43030.service - OpenSSH per-connection server daemon (10.0.0.1:43030).
May 14 23:41:02.468234 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 43030 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:02.469562 sshd-session[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:02.474143 systemd-logind[1462]: New session 8 of user core.
May 14 23:41:02.484642 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 23:41:02.598293 sshd[3579]: Connection closed by 10.0.0.1 port 43030
May 14 23:41:02.598735 sshd-session[3577]: pam_unix(sshd:session): session closed for user core
May 14 23:41:02.609680 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:43030.service: Deactivated successfully.
May 14 23:41:02.611923 systemd[1]: session-8.scope: Deactivated successfully.
May 14 23:41:02.613515 kubelet[2541]: E0514 23:41:02.613477 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:02.614572 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit.
May 14 23:41:02.622738 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:43252.service - OpenSSH per-connection server daemon (10.0.0.1:43252).
May 14 23:41:02.623905 systemd-logind[1462]: Removed session 8.
May 14 23:41:02.626429 kubelet[2541]: I0514 23:41:02.626362 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tl544" podStartSLOduration=29.626347196 podStartE2EDuration="29.626347196s" podCreationTimestamp="2025-05-14 23:40:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:40:52.649638025 +0000 UTC m=+25.242986669" watchObservedRunningTime="2025-05-14 23:41:02.626347196 +0000 UTC m=+35.219695840"
May 14 23:41:02.633956 kubelet[2541]: E0514 23:41:02.633882 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:02.662335 sshd[3592]: Accepted publickey for core from 10.0.0.1 port 43252 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:02.665185 sshd-session[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:02.669417 systemd-logind[1462]: New session 9 of user core.
May 14 23:41:02.680623 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 23:41:02.828969 sshd[3599]: Connection closed by 10.0.0.1 port 43252
May 14 23:41:02.830496 sshd-session[3592]: pam_unix(sshd:session): session closed for user core
May 14 23:41:02.843410 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:43252.service: Deactivated successfully.
May 14 23:41:02.845159 systemd[1]: session-9.scope: Deactivated successfully.
May 14 23:41:02.846634 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit.
May 14 23:41:02.858121 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:43260.service - OpenSSH per-connection server daemon (10.0.0.1:43260).
May 14 23:41:02.860021 systemd-logind[1462]: Removed session 9.
May 14 23:41:02.901815 sshd[3610]: Accepted publickey for core from 10.0.0.1 port 43260 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:02.903012 sshd-session[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:02.907935 systemd-logind[1462]: New session 10 of user core.
May 14 23:41:02.926628 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 23:41:03.037406 sshd[3613]: Connection closed by 10.0.0.1 port 43260
May 14 23:41:03.037695 sshd-session[3610]: pam_unix(sshd:session): session closed for user core
May 14 23:41:03.040777 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:43260.service: Deactivated successfully.
May 14 23:41:03.043607 systemd[1]: session-10.scope: Deactivated successfully.
May 14 23:41:03.044751 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit.
May 14 23:41:03.046163 systemd-logind[1462]: Removed session 10.
May 14 23:41:08.053196 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:43272.service - OpenSSH per-connection server daemon (10.0.0.1:43272).
May 14 23:41:08.093105 sshd[3651]: Accepted publickey for core from 10.0.0.1 port 43272 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:08.094382 sshd-session[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:08.098851 systemd-logind[1462]: New session 11 of user core.
May 14 23:41:08.109672 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 23:41:08.228898 sshd[3653]: Connection closed by 10.0.0.1 port 43272
May 14 23:41:08.229324 sshd-session[3651]: pam_unix(sshd:session): session closed for user core
May 14 23:41:08.242440 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:43272.service: Deactivated successfully.
May 14 23:41:08.248207 systemd[1]: session-11.scope: Deactivated successfully.
May 14 23:41:08.250340 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit.
May 14 23:41:08.260961 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:43288.service - OpenSSH per-connection server daemon (10.0.0.1:43288).
May 14 23:41:08.261592 systemd-logind[1462]: Removed session 11.
May 14 23:41:08.297041 sshd[3666]: Accepted publickey for core from 10.0.0.1 port 43288 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:08.298402 sshd-session[3666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:08.303543 systemd-logind[1462]: New session 12 of user core.
May 14 23:41:08.308765 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 23:41:08.545858 sshd[3669]: Connection closed by 10.0.0.1 port 43288
May 14 23:41:08.546399 sshd-session[3666]: pam_unix(sshd:session): session closed for user core
May 14 23:41:08.559213 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:43288.service: Deactivated successfully.
May 14 23:41:08.561194 systemd[1]: session-12.scope: Deactivated successfully.
May 14 23:41:08.563141 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit.
May 14 23:41:08.564912 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:43302.service - OpenSSH per-connection server daemon (10.0.0.1:43302).
May 14 23:41:08.565879 systemd-logind[1462]: Removed session 12.
May 14 23:41:08.612031 sshd[3679]: Accepted publickey for core from 10.0.0.1 port 43302 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:08.613312 sshd-session[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:08.617647 systemd-logind[1462]: New session 13 of user core.
May 14 23:41:08.629673 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 23:41:09.442272 sshd[3682]: Connection closed by 10.0.0.1 port 43302
May 14 23:41:09.443752 sshd-session[3679]: pam_unix(sshd:session): session closed for user core
May 14 23:41:09.454534 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:43302.service: Deactivated successfully.
May 14 23:41:09.457348 systemd[1]: session-13.scope: Deactivated successfully.
May 14 23:41:09.462602 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit.
May 14 23:41:09.467938 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:43318.service - OpenSSH per-connection server daemon (10.0.0.1:43318).
May 14 23:41:09.469764 systemd-logind[1462]: Removed session 13.
May 14 23:41:09.509611 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 43318 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:09.510872 sshd-session[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:09.517215 systemd-logind[1462]: New session 14 of user core.
May 14 23:41:09.528718 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 23:41:09.748638 sshd[3702]: Connection closed by 10.0.0.1 port 43318
May 14 23:41:09.750157 sshd-session[3699]: pam_unix(sshd:session): session closed for user core
May 14 23:41:09.761862 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:43318.service: Deactivated successfully.
May 14 23:41:09.764126 systemd[1]: session-14.scope: Deactivated successfully.
May 14 23:41:09.765440 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit.
May 14 23:41:09.776872 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:43326.service - OpenSSH per-connection server daemon (10.0.0.1:43326).
May 14 23:41:09.778151 systemd-logind[1462]: Removed session 14.
May 14 23:41:09.813024 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 43326 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:09.814352 sshd-session[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:09.818973 systemd-logind[1462]: New session 15 of user core.
May 14 23:41:09.829642 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 23:41:09.941001 sshd[3716]: Connection closed by 10.0.0.1 port 43326
May 14 23:41:09.941541 sshd-session[3713]: pam_unix(sshd:session): session closed for user core
May 14 23:41:09.945163 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:43326.service: Deactivated successfully.
May 14 23:41:09.947235 systemd[1]: session-15.scope: Deactivated successfully.
May 14 23:41:09.948799 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit.
May 14 23:41:09.950215 systemd-logind[1462]: Removed session 15.
May 14 23:41:14.960957 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:56634.service - OpenSSH per-connection server daemon (10.0.0.1:56634).
May 14 23:41:14.995930 sshd[3772]: Accepted publickey for core from 10.0.0.1 port 56634 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:14.997223 sshd-session[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:15.001795 systemd-logind[1462]: New session 16 of user core.
May 14 23:41:15.008646 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 23:41:15.125184 sshd[3776]: Connection closed by 10.0.0.1 port 56634
May 14 23:41:15.125441 sshd-session[3772]: pam_unix(sshd:session): session closed for user core
May 14 23:41:15.128209 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit.
May 14 23:41:15.129153 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:56634.service: Deactivated successfully.
May 14 23:41:15.131031 systemd[1]: session-16.scope: Deactivated successfully.
May 14 23:41:15.132800 systemd-logind[1462]: Removed session 16.
May 14 23:41:20.137597 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:56638.service - OpenSSH per-connection server daemon (10.0.0.1:56638).
May 14 23:41:20.180719 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 56638 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:20.181989 sshd-session[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:20.186618 systemd-logind[1462]: New session 17 of user core.
May 14 23:41:20.193684 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 23:41:20.305345 sshd[3813]: Connection closed by 10.0.0.1 port 56638
May 14 23:41:20.305915 sshd-session[3811]: pam_unix(sshd:session): session closed for user core
May 14 23:41:20.309324 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit.
May 14 23:41:20.309844 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:56638.service: Deactivated successfully.
May 14 23:41:20.311862 systemd[1]: session-17.scope: Deactivated successfully.
May 14 23:41:20.313280 systemd-logind[1462]: Removed session 17.
May 14 23:41:25.327896 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:44110.service - OpenSSH per-connection server daemon (10.0.0.1:44110).
May 14 23:41:25.363865 sshd[3850]: Accepted publickey for core from 10.0.0.1 port 44110 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:41:25.365204 sshd-session[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:41:25.372767 systemd-logind[1462]: New session 18 of user core.
May 14 23:41:25.380818 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 23:41:25.508758 sshd[3852]: Connection closed by 10.0.0.1 port 44110
May 14 23:41:25.509100 sshd-session[3850]: pam_unix(sshd:session): session closed for user core
May 14 23:41:25.512654 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:44110.service: Deactivated successfully.
May 14 23:41:25.514491 systemd[1]: session-18.scope: Deactivated successfully.
May 14 23:41:25.515234 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit.
May 14 23:41:25.516322 systemd-logind[1462]: Removed session 18.