May 7 23:56:45.884682 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 7 23:56:45.884703 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 7 22:21:35 -00 2025
May 7 23:56:45.884713 kernel: KASLR enabled
May 7 23:56:45.884718 kernel: efi: EFI v2.7 by EDK II
May 7 23:56:45.884724 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 7 23:56:45.884729 kernel: random: crng init done
May 7 23:56:45.884736 kernel: secureboot: Secure boot disabled
May 7 23:56:45.884741 kernel: ACPI: Early table checksum verification disabled
May 7 23:56:45.884747 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 7 23:56:45.884754 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 7 23:56:45.884760 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:56:45.884766 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:56:45.884771 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:56:45.884777 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:56:45.884784 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:56:45.884791 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:56:45.884798 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:56:45.884804 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:56:45.884810 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:56:45.884816 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 7 23:56:45.884822 kernel: NUMA: Failed to initialise from firmware
May 7 23:56:45.884829 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 7 23:56:45.884835 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 7 23:56:45.884841 kernel: Zone ranges:
May 7 23:56:45.884861 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 7 23:56:45.884869 kernel: DMA32 empty
May 7 23:56:45.884875 kernel: Normal empty
May 7 23:56:45.884882 kernel: Movable zone start for each node
May 7 23:56:45.884888 kernel: Early memory node ranges
May 7 23:56:45.884894 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 7 23:56:45.884901 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 7 23:56:45.884907 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 7 23:56:45.884913 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 7 23:56:45.884919 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 7 23:56:45.884925 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 7 23:56:45.884931 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 7 23:56:45.884937 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 7 23:56:45.884945 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 7 23:56:45.884951 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 7 23:56:45.884957 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 7 23:56:45.884966 kernel: psci: probing for conduit method from ACPI.
May 7 23:56:45.884972 kernel: psci: PSCIv1.1 detected in firmware.
May 7 23:56:45.884979 kernel: psci: Using standard PSCI v0.2 function IDs
May 7 23:56:45.884987 kernel: psci: Trusted OS migration not required
May 7 23:56:45.884993 kernel: psci: SMC Calling Convention v1.1
May 7 23:56:45.885000 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 7 23:56:45.885006 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 7 23:56:45.885013 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 7 23:56:45.885019 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 7 23:56:45.885026 kernel: Detected PIPT I-cache on CPU0
May 7 23:56:45.885032 kernel: CPU features: detected: GIC system register CPU interface
May 7 23:56:45.885038 kernel: CPU features: detected: Hardware dirty bit management
May 7 23:56:45.885045 kernel: CPU features: detected: Spectre-v4
May 7 23:56:45.885052 kernel: CPU features: detected: Spectre-BHB
May 7 23:56:45.885059 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 7 23:56:45.885065 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 7 23:56:45.885072 kernel: CPU features: detected: ARM erratum 1418040
May 7 23:56:45.885078 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 7 23:56:45.885084 kernel: alternatives: applying boot alternatives
May 7 23:56:45.885091 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 7 23:56:45.885098 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 7 23:56:45.885105 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 7 23:56:45.885111 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 7 23:56:45.885117 kernel: Fallback order for Node 0: 0
May 7 23:56:45.885125 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 7 23:56:45.885132 kernel: Policy zone: DMA
May 7 23:56:45.885138 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 7 23:56:45.885144 kernel: software IO TLB: area num 4.
May 7 23:56:45.885151 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 7 23:56:45.885158 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved)
May 7 23:56:45.885164 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 7 23:56:45.885171 kernel: rcu: Preemptible hierarchical RCU implementation.
May 7 23:56:45.885177 kernel: rcu: RCU event tracing is enabled.
May 7 23:56:45.885184 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 7 23:56:45.885191 kernel: Trampoline variant of Tasks RCU enabled.
May 7 23:56:45.885197 kernel: Tracing variant of Tasks RCU enabled.
May 7 23:56:45.885206 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 7 23:56:45.885212 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 7 23:56:45.885219 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 7 23:56:45.885225 kernel: GICv3: 256 SPIs implemented
May 7 23:56:45.885231 kernel: GICv3: 0 Extended SPIs implemented
May 7 23:56:45.885237 kernel: Root IRQ handler: gic_handle_irq
May 7 23:56:45.885244 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 7 23:56:45.885250 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 7 23:56:45.885257 kernel: ITS [mem 0x08080000-0x0809ffff]
May 7 23:56:45.885263 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 7 23:56:45.885270 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 7 23:56:45.885277 kernel: GICv3: using LPI property table @0x00000000400f0000
May 7 23:56:45.885284 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 7 23:56:45.885290 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 7 23:56:45.885297 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:56:45.885303 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 7 23:56:45.885310 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 7 23:56:45.885317 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 7 23:56:45.885323 kernel: arm-pv: using stolen time PV
May 7 23:56:45.885330 kernel: Console: colour dummy device 80x25
May 7 23:56:45.885345 kernel: ACPI: Core revision 20230628
May 7 23:56:45.885371 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 7 23:56:45.885381 kernel: pid_max: default: 32768 minimum: 301
May 7 23:56:45.885388 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 7 23:56:45.885394 kernel: landlock: Up and running.
May 7 23:56:45.885401 kernel: SELinux: Initializing.
May 7 23:56:45.885407 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 7 23:56:45.885414 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 7 23:56:45.885421 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 7 23:56:45.885427 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 7 23:56:45.885434 kernel: rcu: Hierarchical SRCU implementation.
May 7 23:56:45.885442 kernel: rcu: Max phase no-delay instances is 400.
May 7 23:56:45.885448 kernel: Platform MSI: ITS@0x8080000 domain created
May 7 23:56:45.885455 kernel: PCI/MSI: ITS@0x8080000 domain created
May 7 23:56:45.885462 kernel: Remapping and enabling EFI services.
May 7 23:56:45.885468 kernel: smp: Bringing up secondary CPUs ...
May 7 23:56:45.885475 kernel: Detected PIPT I-cache on CPU1
May 7 23:56:45.885481 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 7 23:56:45.885488 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 7 23:56:45.885495 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:56:45.885502 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 7 23:56:45.885509 kernel: Detected PIPT I-cache on CPU2
May 7 23:56:45.885520 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 7 23:56:45.885529 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 7 23:56:45.885536 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:56:45.885542 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 7 23:56:45.885549 kernel: Detected PIPT I-cache on CPU3
May 7 23:56:45.885556 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 7 23:56:45.885563 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 7 23:56:45.885577 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:56:45.885584 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 7 23:56:45.885591 kernel: smp: Brought up 1 node, 4 CPUs
May 7 23:56:45.885598 kernel: SMP: Total of 4 processors activated.
May 7 23:56:45.885605 kernel: CPU features: detected: 32-bit EL0 Support
May 7 23:56:45.885612 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 7 23:56:45.885619 kernel: CPU features: detected: Common not Private translations
May 7 23:56:45.885626 kernel: CPU features: detected: CRC32 instructions
May 7 23:56:45.885634 kernel: CPU features: detected: Enhanced Virtualization Traps
May 7 23:56:45.885641 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 7 23:56:45.885647 kernel: CPU features: detected: LSE atomic instructions
May 7 23:56:45.885654 kernel: CPU features: detected: Privileged Access Never
May 7 23:56:45.885661 kernel: CPU features: detected: RAS Extension Support
May 7 23:56:45.885669 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 7 23:56:45.885676 kernel: CPU: All CPU(s) started at EL1
May 7 23:56:45.885682 kernel: alternatives: applying system-wide alternatives
May 7 23:56:45.885689 kernel: devtmpfs: initialized
May 7 23:56:45.885698 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 7 23:56:45.885705 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 7 23:56:45.885712 kernel: pinctrl core: initialized pinctrl subsystem
May 7 23:56:45.885718 kernel: SMBIOS 3.0.0 present.
May 7 23:56:45.885725 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 7 23:56:45.885732 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 7 23:56:45.885739 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 7 23:56:45.885746 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 7 23:56:45.885753 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 7 23:56:45.885761 kernel: audit: initializing netlink subsys (disabled)
May 7 23:56:45.885768 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
May 7 23:56:45.885775 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 7 23:56:45.885782 kernel: cpuidle: using governor menu
May 7 23:56:45.885789 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 7 23:56:45.885797 kernel: ASID allocator initialised with 32768 entries
May 7 23:56:45.885803 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 7 23:56:45.885810 kernel: Serial: AMBA PL011 UART driver
May 7 23:56:45.885817 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 7 23:56:45.885826 kernel: Modules: 0 pages in range for non-PLT usage
May 7 23:56:45.885833 kernel: Modules: 509264 pages in range for PLT usage
May 7 23:56:45.885839 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 7 23:56:45.885846 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 7 23:56:45.885853 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 7 23:56:45.885860 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 7 23:56:45.885867 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 7 23:56:45.885874 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 7 23:56:45.885881 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 7 23:56:45.885889 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 7 23:56:45.885896 kernel: ACPI: Added _OSI(Module Device)
May 7 23:56:45.885903 kernel: ACPI: Added _OSI(Processor Device)
May 7 23:56:45.885910 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 7 23:56:45.885917 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 7 23:56:45.885924 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 7 23:56:45.885931 kernel: ACPI: Interpreter enabled
May 7 23:56:45.885937 kernel: ACPI: Using GIC for interrupt routing
May 7 23:56:45.885944 kernel: ACPI: MCFG table detected, 1 entries
May 7 23:56:45.885951 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 7 23:56:45.885959 kernel: printk: console [ttyAMA0] enabled
May 7 23:56:45.885966 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 7 23:56:45.886102 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 7 23:56:45.886174 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 7 23:56:45.886254 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 7 23:56:45.886317 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 7 23:56:45.886411 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 7 23:56:45.886425 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 7 23:56:45.886432 kernel: PCI host bridge to bus 0000:00
May 7 23:56:45.886505 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 7 23:56:45.886564 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 7 23:56:45.886623 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 7 23:56:45.886679 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 7 23:56:45.886756 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 7 23:56:45.886832 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 7 23:56:45.886897 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 7 23:56:45.886960 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 7 23:56:45.887023 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 7 23:56:45.887085 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 7 23:56:45.887147 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 7 23:56:45.887213 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 7 23:56:45.887269 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 7 23:56:45.887325 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 7 23:56:45.887412 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 7 23:56:45.887423 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 7 23:56:45.887430 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 7 23:56:45.887437 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 7 23:56:45.887444 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 7 23:56:45.887454 kernel: iommu: Default domain type: Translated
May 7 23:56:45.887461 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 7 23:56:45.887468 kernel: efivars: Registered efivars operations
May 7 23:56:45.887474 kernel: vgaarb: loaded
May 7 23:56:45.887481 kernel: clocksource: Switched to clocksource arch_sys_counter
May 7 23:56:45.887488 kernel: VFS: Disk quotas dquot_6.6.0
May 7 23:56:45.887495 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 7 23:56:45.887502 kernel: pnp: PnP ACPI init
May 7 23:56:45.887571 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 7 23:56:45.887583 kernel: pnp: PnP ACPI: found 1 devices
May 7 23:56:45.887590 kernel: NET: Registered PF_INET protocol family
May 7 23:56:45.887597 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 7 23:56:45.887605 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 7 23:56:45.887612 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 7 23:56:45.887619 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 7 23:56:45.887626 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 7 23:56:45.887633 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 7 23:56:45.887641 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 7 23:56:45.887648 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 7 23:56:45.887655 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 7 23:56:45.887662 kernel: PCI: CLS 0 bytes, default 64
May 7 23:56:45.887669 kernel: kvm [1]: HYP mode not available
May 7 23:56:45.887675 kernel: Initialise system trusted keyrings
May 7 23:56:45.887682 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 7 23:56:45.887689 kernel: Key type asymmetric registered
May 7 23:56:45.887696 kernel: Asymmetric key parser 'x509' registered
May 7 23:56:45.887704 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 7 23:56:45.887711 kernel: io scheduler mq-deadline registered
May 7 23:56:45.887718 kernel: io scheduler kyber registered
May 7 23:56:45.887724 kernel: io scheduler bfq registered
May 7 23:56:45.887731 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 7 23:56:45.887739 kernel: ACPI: button: Power Button [PWRB]
May 7 23:56:45.887746 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 7 23:56:45.887809 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 7 23:56:45.887819 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 7 23:56:45.887828 kernel: thunder_xcv, ver 1.0
May 7 23:56:45.887835 kernel: thunder_bgx, ver 1.0
May 7 23:56:45.887842 kernel: nicpf, ver 1.0
May 7 23:56:45.887848 kernel: nicvf, ver 1.0
May 7 23:56:45.887919 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 7 23:56:45.887979 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-07T23:56:45 UTC (1746662205)
May 7 23:56:45.887989 kernel: hid: raw HID events driver (C) Jiri Kosina
May 7 23:56:45.887996 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 7 23:56:45.888003 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 7 23:56:45.888012 kernel: watchdog: Hard watchdog permanently disabled
May 7 23:56:45.888019 kernel: NET: Registered PF_INET6 protocol family
May 7 23:56:45.888025 kernel: Segment Routing with IPv6
May 7 23:56:45.888032 kernel: In-situ OAM (IOAM) with IPv6
May 7 23:56:45.888039 kernel: NET: Registered PF_PACKET protocol family
May 7 23:56:45.888046 kernel: Key type dns_resolver registered
May 7 23:56:45.888053 kernel: registered taskstats version 1
May 7 23:56:45.888060 kernel: Loading compiled-in X.509 certificates
May 7 23:56:45.888067 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: f45666b1b2057b901dda15e57012558a26abdeb0'
May 7 23:56:45.888075 kernel: Key type .fscrypt registered
May 7 23:56:45.888082 kernel: Key type fscrypt-provisioning registered
May 7 23:56:45.888089 kernel: ima: No TPM chip found, activating TPM-bypass!
May 7 23:56:45.888095 kernel: ima: Allocated hash algorithm: sha1
May 7 23:56:45.888102 kernel: ima: No architecture policies found
May 7 23:56:45.888109 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 7 23:56:45.888116 kernel: clk: Disabling unused clocks
May 7 23:56:45.888123 kernel: Freeing unused kernel memory: 38336K
May 7 23:56:45.888131 kernel: Run /init as init process
May 7 23:56:45.888138 kernel: with arguments:
May 7 23:56:45.888145 kernel: /init
May 7 23:56:45.888151 kernel: with environment:
May 7 23:56:45.888158 kernel: HOME=/
May 7 23:56:45.888165 kernel: TERM=linux
May 7 23:56:45.888172 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 7 23:56:45.888179 systemd[1]: Successfully made /usr/ read-only.
May 7 23:56:45.888189 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 7 23:56:45.888198 systemd[1]: Detected virtualization kvm.
May 7 23:56:45.888206 systemd[1]: Detected architecture arm64.
May 7 23:56:45.888213 systemd[1]: Running in initrd.
May 7 23:56:45.888220 systemd[1]: No hostname configured, using default hostname.
May 7 23:56:45.888227 systemd[1]: Hostname set to .
May 7 23:56:45.888235 systemd[1]: Initializing machine ID from VM UUID.
May 7 23:56:45.888259 systemd[1]: Queued start job for default target initrd.target.
May 7 23:56:45.888270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 7 23:56:45.888278 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 7 23:56:45.888293 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 7 23:56:45.888301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 7 23:56:45.888309 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 7 23:56:45.888317 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 7 23:56:45.888326 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 7 23:56:45.888340 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 7 23:56:45.888363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 7 23:56:45.888372 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 7 23:56:45.888379 systemd[1]: Reached target paths.target - Path Units.
May 7 23:56:45.888387 systemd[1]: Reached target slices.target - Slice Units.
May 7 23:56:45.888394 systemd[1]: Reached target swap.target - Swaps.
May 7 23:56:45.888401 systemd[1]: Reached target timers.target - Timer Units.
May 7 23:56:45.888409 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 7 23:56:45.888416 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 7 23:56:45.888426 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 7 23:56:45.888433 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 7 23:56:45.888441 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 7 23:56:45.888449 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 7 23:56:45.888456 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 7 23:56:45.888464 systemd[1]: Reached target sockets.target - Socket Units.
May 7 23:56:45.888471 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 7 23:56:45.888479 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 7 23:56:45.888487 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 7 23:56:45.888495 systemd[1]: Starting systemd-fsck-usr.service...
May 7 23:56:45.888502 systemd[1]: Starting systemd-journald.service - Journal Service...
May 7 23:56:45.888510 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 7 23:56:45.888517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:56:45.888525 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 7 23:56:45.888532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 7 23:56:45.888542 systemd[1]: Finished systemd-fsck-usr.service.
May 7 23:56:45.888549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:56:45.888573 systemd-journald[238]: Collecting audit messages is disabled.
May 7 23:56:45.888593 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 7 23:56:45.888601 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 7 23:56:45.888609 systemd-journald[238]: Journal started
May 7 23:56:45.888628 systemd-journald[238]: Runtime Journal (/run/log/journal/a38765cf09ed44d08598beb1377cf72b) is 5.9M, max 47.3M, 41.4M free.
May 7 23:56:45.879994 systemd-modules-load[240]: Inserted module 'overlay'
May 7 23:56:45.891970 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 7 23:56:45.895282 systemd[1]: Started systemd-journald.service - Journal Service.
May 7 23:56:45.895967 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 7 23:56:45.896853 kernel: Bridge firewalling registered
May 7 23:56:45.896325 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 7 23:56:45.898625 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 7 23:56:45.903406 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:56:45.915509 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 7 23:56:45.916931 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 7 23:56:45.919318 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 7 23:56:45.921734 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 7 23:56:45.925238 dracut-cmdline[264]: dracut-dracut-053
May 7 23:56:45.927746 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 7 23:56:45.930308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 7 23:56:45.932637 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 7 23:56:45.936205 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 7 23:56:45.939491 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 7 23:56:45.974592 systemd-resolved[300]: Positive Trust Anchors:
May 7 23:56:45.974604 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 7 23:56:45.974634 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 7 23:56:45.979235 systemd-resolved[300]: Defaulting to hostname 'linux'.
May 7 23:56:45.980177 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 7 23:56:45.984740 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 7 23:56:46.005376 kernel: SCSI subsystem initialized
May 7 23:56:46.010366 kernel: Loading iSCSI transport class v2.0-870.
May 7 23:56:46.017388 kernel: iscsi: registered transport (tcp)
May 7 23:56:46.031480 kernel: iscsi: registered transport (qla4xxx)
May 7 23:56:46.031501 kernel: QLogic iSCSI HBA Driver
May 7 23:56:46.068959 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 7 23:56:46.081529 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 7 23:56:46.097532 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 7 23:56:46.097581 kernel: device-mapper: uevent: version 1.0.3
May 7 23:56:46.097600 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 7 23:56:46.144373 kernel: raid6: neonx8 gen() 15763 MB/s
May 7 23:56:46.161371 kernel: raid6: neonx4 gen() 15801 MB/s
May 7 23:56:46.178365 kernel: raid6: neonx2 gen() 13177 MB/s
May 7 23:56:46.195368 kernel: raid6: neonx1 gen() 10513 MB/s
May 7 23:56:46.212361 kernel: raid6: int64x8 gen() 6780 MB/s
May 7 23:56:46.229368 kernel: raid6: int64x4 gen() 7337 MB/s
May 7 23:56:46.246364 kernel: raid6: int64x2 gen() 6096 MB/s
May 7 23:56:46.263426 kernel: raid6: int64x1 gen() 5053 MB/s
May 7 23:56:46.263443 kernel: raid6: using algorithm neonx4 gen() 15801 MB/s
May 7 23:56:46.281423 kernel: raid6: .... xor() 12404 MB/s, rmw enabled
May 7 23:56:46.281452 kernel: raid6: using neon recovery algorithm
May 7 23:56:46.286529 kernel: xor: measuring software checksum speed
May 7 23:56:46.286558 kernel: 8regs : 21134 MB/sec
May 7 23:56:46.287769 kernel: 32regs : 21693 MB/sec
May 7 23:56:46.287785 kernel: arm64_neon : 27785 MB/sec
May 7 23:56:46.287794 kernel: xor: using function: arm64_neon (27785 MB/sec)
May 7 23:56:46.338373 kernel: Btrfs loaded, zoned=no, fsverity=no
May 7 23:56:46.348060 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 7 23:56:46.359532 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 7 23:56:46.371970 systemd-udevd[465]: Using default interface naming scheme 'v255'.
May 7 23:56:46.375571 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 7 23:56:46.382577 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 7 23:56:46.393854 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
May 7 23:56:46.417305 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 7 23:56:46.424546 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 7 23:56:46.464575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 7 23:56:46.474498 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 7 23:56:46.484607 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 7 23:56:46.486709 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 7 23:56:46.488793 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 7 23:56:46.491648 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 7 23:56:46.500502 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 7 23:56:46.509421 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 7 23:56:46.512671 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 7 23:56:46.524595 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 7 23:56:46.524693 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 7 23:56:46.524710 kernel: GPT:9289727 != 19775487
May 7 23:56:46.524719 kernel: GPT:Alternate GPT header not at the end of the disk.
May 7 23:56:46.524729 kernel: GPT:9289727 != 19775487
May 7 23:56:46.524738 kernel: GPT: Use GNU Parted to correct GPT errors.
May 7 23:56:46.524746 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 7 23:56:46.525102 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 7 23:56:46.525218 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:56:46.528513 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 7 23:56:46.529568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 7 23:56:46.529693 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:56:46.532762 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:56:46.540531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:56:46.551010 kernel: BTRFS: device fsid a4d66dad-2d34-4ed0-87a7-f6519531b08f devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (513)
May 7 23:56:46.551522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:56:46.553394 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (511)
May 7 23:56:46.569782 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 7 23:56:46.577205 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 7 23:56:46.583376 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 7 23:56:46.584520 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 7 23:56:46.593670 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 7 23:56:46.606463 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 7 23:56:46.608104 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 7 23:56:46.611794 disk-uuid[554]: Primary Header is updated.
May 7 23:56:46.611794 disk-uuid[554]: Secondary Entries is updated.
May 7 23:56:46.611794 disk-uuid[554]: Secondary Header is updated.
May 7 23:56:46.615359 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 7 23:56:46.635318 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:56:47.628321 disk-uuid[555]: The operation has completed successfully.
May 7 23:56:47.629583 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 7 23:56:47.653687 systemd[1]: disk-uuid.service: Deactivated successfully.
May 7 23:56:47.653780 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 7 23:56:47.690488 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 7 23:56:47.692915 sh[577]: Success
May 7 23:56:47.707919 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 7 23:56:47.745519 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 7 23:56:47.746670 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 7 23:56:47.749495 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 7 23:56:47.760711 kernel: BTRFS info (device dm-0): first mount of filesystem a4d66dad-2d34-4ed0-87a7-f6519531b08f
May 7 23:56:47.760742 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 7 23:56:47.760752 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 7 23:56:47.762516 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 7 23:56:47.762537 kernel: BTRFS info (device dm-0): using free space tree
May 7 23:56:47.766596 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 7 23:56:47.767863 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 7 23:56:47.780520 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 7 23:56:47.781980 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 7 23:56:47.796193 kernel: BTRFS info (device vda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 7 23:56:47.796234 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 7 23:56:47.796245 kernel: BTRFS info (device vda6): using free space tree
May 7 23:56:47.798379 kernel: BTRFS info (device vda6): auto enabling async discard
May 7 23:56:47.802389 kernel: BTRFS info (device vda6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 7 23:56:47.805021 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 7 23:56:47.810515 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 7 23:56:47.871766 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 7 23:56:47.881545 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 7 23:56:47.906074 ignition[664]: Ignition 2.20.0
May 7 23:56:47.906084 ignition[664]: Stage: fetch-offline
May 7 23:56:47.906124 ignition[664]: no configs at "/usr/lib/ignition/base.d"
May 7 23:56:47.906132 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:56:47.906346 ignition[664]: parsed url from cmdline: ""
May 7 23:56:47.906357 ignition[664]: no config URL provided
May 7 23:56:47.906362 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
May 7 23:56:47.906369 ignition[664]: no config at "/usr/lib/ignition/user.ign"
May 7 23:56:47.906390 ignition[664]: op(1): [started] loading QEMU firmware config module
May 7 23:56:47.906395 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 7 23:56:47.915874 ignition[664]: op(1): [finished] loading QEMU firmware config module
May 7 23:56:47.916060 systemd-networkd[765]: lo: Link UP
May 7 23:56:47.916063 systemd-networkd[765]: lo: Gained carrier
May 7 23:56:47.916859 systemd-networkd[765]: Enumeration completed
May 7 23:56:47.917137 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 7 23:56:47.917244 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 7 23:56:47.917247 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 7 23:56:47.917912 systemd-networkd[765]: eth0: Link UP
May 7 23:56:47.917914 systemd-networkd[765]: eth0: Gained carrier
May 7 23:56:47.917921 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 7 23:56:47.919055 systemd[1]: Reached target network.target - Network.
May 7 23:56:47.934401 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 7 23:56:47.946683 ignition[664]: parsing config with SHA512: 35a6cadc8f917e028e6accff89f387365e791b6d9ab1a8492c0cf0ea3c85cf569cc02275fc408cafb08137b8a9713100d690790df968427c76bdd5658673e32d
May 7 23:56:47.950949 unknown[664]: fetched base config from "system"
May 7 23:56:47.950958 unknown[664]: fetched user config from "qemu"
May 7 23:56:47.951344 ignition[664]: fetch-offline: fetch-offline passed
May 7 23:56:47.953099 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 7 23:56:47.951441 ignition[664]: Ignition finished successfully
May 7 23:56:47.954436 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 7 23:56:47.960488 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 7 23:56:47.972134 ignition[773]: Ignition 2.20.0
May 7 23:56:47.972144 ignition[773]: Stage: kargs
May 7 23:56:47.972298 ignition[773]: no configs at "/usr/lib/ignition/base.d"
May 7 23:56:47.972308 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:56:47.973155 ignition[773]: kargs: kargs passed
May 7 23:56:47.975747 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 7 23:56:47.973193 ignition[773]: Ignition finished successfully
May 7 23:56:47.987547 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 7 23:56:47.996187 ignition[783]: Ignition 2.20.0
May 7 23:56:47.996196 ignition[783]: Stage: disks
May 7 23:56:47.996378 ignition[783]: no configs at "/usr/lib/ignition/base.d"
May 7 23:56:47.998938 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 7 23:56:47.996388 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:56:48.000093 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 7 23:56:47.997153 ignition[783]: disks: disks passed
May 7 23:56:48.002478 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 7 23:56:47.997190 ignition[783]: Ignition finished successfully
May 7 23:56:48.004434 systemd[1]: Reached target local-fs.target - Local File Systems.
May 7 23:56:48.006239 systemd[1]: Reached target sysinit.target - System Initialization.
May 7 23:56:48.007684 systemd[1]: Reached target basic.target - Basic System.
May 7 23:56:48.020524 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 7 23:56:48.029115 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 7 23:56:48.032284 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 7 23:56:48.034310 systemd[1]: Mounting sysroot.mount - /sysroot...
May 7 23:56:48.081371 kernel: EXT4-fs (vda9): mounted filesystem f291ddc8-664e-45dc-bbf9-8344dca1a297 r/w with ordered data mode. Quota mode: none.
May 7 23:56:48.081597 systemd[1]: Mounted sysroot.mount - /sysroot.
May 7 23:56:48.082739 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 7 23:56:48.094425 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 7 23:56:48.096640 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 7 23:56:48.097715 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 7 23:56:48.097756 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 7 23:56:48.097777 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 7 23:56:48.103254 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 7 23:56:48.104975 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 7 23:56:48.110380 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
May 7 23:56:48.112904 kernel: BTRFS info (device vda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 7 23:56:48.112919 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 7 23:56:48.112928 kernel: BTRFS info (device vda6): using free space tree
May 7 23:56:48.115384 kernel: BTRFS info (device vda6): auto enabling async discard
May 7 23:56:48.117168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 7 23:56:48.145166 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
May 7 23:56:48.149268 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
May 7 23:56:48.153173 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
May 7 23:56:48.157082 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
May 7 23:56:48.222780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 7 23:56:48.238472 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 7 23:56:48.240519 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 7 23:56:48.246395 kernel: BTRFS info (device vda6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 7 23:56:48.259271 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 7 23:56:48.262483 ignition[917]: INFO : Ignition 2.20.0
May 7 23:56:48.262483 ignition[917]: INFO : Stage: mount
May 7 23:56:48.263991 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
May 7 23:56:48.263991 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:56:48.263991 ignition[917]: INFO : mount: mount passed
May 7 23:56:48.263991 ignition[917]: INFO : Ignition finished successfully
May 7 23:56:48.266025 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 7 23:56:48.274439 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 7 23:56:48.884191 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 7 23:56:48.896500 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 7 23:56:48.903193 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930)
May 7 23:56:48.903227 kernel: BTRFS info (device vda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 7 23:56:48.903237 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 7 23:56:48.904761 kernel: BTRFS info (device vda6): using free space tree
May 7 23:56:48.907372 kernel: BTRFS info (device vda6): auto enabling async discard
May 7 23:56:48.907924 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 7 23:56:48.930206 ignition[947]: INFO : Ignition 2.20.0
May 7 23:56:48.930206 ignition[947]: INFO : Stage: files
May 7 23:56:48.931787 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
May 7 23:56:48.931787 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:56:48.931787 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
May 7 23:56:48.935100 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 7 23:56:48.935100 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 7 23:56:48.938091 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 7 23:56:48.939434 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 7 23:56:48.939434 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 7 23:56:48.938656 unknown[947]: wrote ssh authorized keys file for user: core
May 7 23:56:48.943168 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 7 23:56:48.943168 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 7 23:56:49.328517 systemd-networkd[765]: eth0: Gained IPv6LL
May 7 23:56:49.610044 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 7 23:56:50.481571 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 7 23:56:50.483871 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 7 23:56:50.816077 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 7 23:56:51.138989 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 7 23:56:51.138989 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 7 23:56:51.142613 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 7 23:56:51.142613 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 7 23:56:51.142613 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 7 23:56:51.142613 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 7 23:56:51.142613 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 7 23:56:51.142613 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 7 23:56:51.142613 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 7 23:56:51.142613 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 7 23:56:51.155972 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 7 23:56:51.158627 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 7 23:56:51.160192 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 7 23:56:51.160192 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 7 23:56:51.160192 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 7 23:56:51.160192 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 7 23:56:51.160192 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 7 23:56:51.160192 ignition[947]: INFO : files: files passed
May 7 23:56:51.160192 ignition[947]: INFO : Ignition finished successfully
May 7 23:56:51.163615 systemd[1]: Finished ignition-files.service - Ignition (files).
May 7 23:56:51.178565 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 7 23:56:51.180788 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 7 23:56:51.182745 systemd[1]: ignition-quench.service: Deactivated successfully.
May 7 23:56:51.182821 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 7 23:56:51.187286 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
May 7 23:56:51.189145 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 7 23:56:51.189145 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 7 23:56:51.192917 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 7 23:56:51.193202 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 7 23:56:51.195679 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 7 23:56:51.205724 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 7 23:56:51.227290 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 7 23:56:51.227401 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 7 23:56:51.229537 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 7 23:56:51.231401 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 7 23:56:51.233169 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 7 23:56:51.233829 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 7 23:56:51.248197 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 7 23:56:51.259536 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 7 23:56:51.266354 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 7 23:56:51.267500 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 7 23:56:51.269476 systemd[1]: Stopped target timers.target - Timer Units. May 7 23:56:51.271261 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 7 23:56:51.271384 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 7 23:56:51.273781 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 7 23:56:51.274793 systemd[1]: Stopped target basic.target - Basic System. May 7 23:56:51.276539 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 7 23:56:51.278275 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 7 23:56:51.280008 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 7 23:56:51.281849 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 7 23:56:51.283646 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 7 23:56:51.285582 systemd[1]: Stopped target sysinit.target - System Initialization. May 7 23:56:51.287269 systemd[1]: Stopped target local-fs.target - Local File Systems. May 7 23:56:51.289345 systemd[1]: Stopped target swap.target - Swaps. May 7 23:56:51.290863 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 7 23:56:51.290975 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 7 23:56:51.293201 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 7 23:56:51.295054 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 7 23:56:51.296901 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 7 23:56:51.301410 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 7 23:56:51.302583 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 7 23:56:51.302688 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 7 23:56:51.305407 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 7 23:56:51.305553 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 7 23:56:51.307426 systemd[1]: Stopped target paths.target - Path Units. May 7 23:56:51.308959 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 7 23:56:51.313428 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 7 23:56:51.314654 systemd[1]: Stopped target slices.target - Slice Units. May 7 23:56:51.316683 systemd[1]: Stopped target sockets.target - Socket Units. May 7 23:56:51.318192 systemd[1]: iscsid.socket: Deactivated successfully. May 7 23:56:51.318276 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 7 23:56:51.319808 systemd[1]: iscsiuio.socket: Deactivated successfully. May 7 23:56:51.319883 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 7 23:56:51.321455 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 7 23:56:51.321553 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 7 23:56:51.323422 systemd[1]: ignition-files.service: Deactivated successfully. May 7 23:56:51.323517 systemd[1]: Stopped ignition-files.service - Ignition (files). 
May 7 23:56:51.340501 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 7 23:56:51.341340 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 7 23:56:51.341475 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 7 23:56:51.346558 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 7 23:56:51.347603 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 7 23:56:51.352078 ignition[1002]: INFO : Ignition 2.20.0 May 7 23:56:51.352078 ignition[1002]: INFO : Stage: umount May 7 23:56:51.352078 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" May 7 23:56:51.352078 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 7 23:56:51.352078 ignition[1002]: INFO : umount: umount passed May 7 23:56:51.352078 ignition[1002]: INFO : Ignition finished successfully May 7 23:56:51.348490 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 7 23:56:51.351261 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 7 23:56:51.351382 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 7 23:56:51.354599 systemd[1]: ignition-mount.service: Deactivated successfully. May 7 23:56:51.354675 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 7 23:56:51.357918 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 7 23:56:51.360201 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 7 23:56:51.360292 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 7 23:56:51.362907 systemd[1]: Stopped target network.target - Network. May 7 23:56:51.365238 systemd[1]: ignition-disks.service: Deactivated successfully. May 7 23:56:51.365312 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 7 23:56:51.367120 systemd[1]: ignition-kargs.service: Deactivated successfully. May 7 23:56:51.367168 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 7 23:56:51.368792 systemd[1]: ignition-setup.service: Deactivated successfully. May 7 23:56:51.368838 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 7 23:56:51.370499 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 7 23:56:51.370543 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 7 23:56:51.372551 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 7 23:56:51.374169 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 7 23:56:51.379299 systemd[1]: systemd-resolved.service: Deactivated successfully. May 7 23:56:51.379435 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 7 23:56:51.382467 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 7 23:56:51.382713 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 7 23:56:51.382750 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 7 23:56:51.386219 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 7 23:56:51.387273 systemd[1]: systemd-networkd.service: Deactivated successfully. May 7 23:56:51.387471 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 7 23:56:51.390149 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
May 7 23:56:51.390291 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 7 23:56:51.390330 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 7 23:56:51.404462 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 7 23:56:51.405304 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 7 23:56:51.405397 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 7 23:56:51.407409 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 7 23:56:51.407454 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 7 23:56:51.411273 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 7 23:56:51.411327 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 7 23:56:51.412677 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 7 23:56:51.415236 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 7 23:56:51.423302 systemd[1]: network-cleanup.service: Deactivated successfully. May 7 23:56:51.423438 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 7 23:56:51.425546 systemd[1]: sysroot-boot.service: Deactivated successfully. May 7 23:56:51.425624 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 7 23:56:51.427527 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 7 23:56:51.427605 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 7 23:56:51.436052 systemd[1]: systemd-udevd.service: Deactivated successfully. May 7 23:56:51.436185 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 7 23:56:51.438283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 7 23:56:51.438329 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 7 23:56:51.440227 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 7 23:56:51.440262 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 7 23:56:51.441990 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 7 23:56:51.442037 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 7 23:56:51.444616 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 7 23:56:51.444660 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 7 23:56:51.447148 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 7 23:56:51.447189 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 7 23:56:51.459574 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 7 23:56:51.460596 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 7 23:56:51.460657 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 7 23:56:51.463620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 7 23:56:51.463663 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 7 23:56:51.466974 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 7 23:56:51.467048 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 7 23:56:51.469224 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
May 7 23:56:51.471402 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 7 23:56:51.479922 systemd[1]: Switching root. May 7 23:56:51.502496 systemd-journald[238]: Journal stopped May 7 23:56:52.228294 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). May 7 23:56:52.228371 kernel: SELinux: policy capability network_peer_controls=1 May 7 23:56:52.228384 kernel: SELinux: policy capability open_perms=1 May 7 23:56:52.228394 kernel: SELinux: policy capability extended_socket_class=1 May 7 23:56:52.228403 kernel: SELinux: policy capability always_check_network=0 May 7 23:56:52.228415 kernel: SELinux: policy capability cgroup_seclabel=1 May 7 23:56:52.228425 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 7 23:56:52.228434 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 7 23:56:52.228443 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 7 23:56:52.228456 kernel: audit: type=1403 audit(1746662211.645:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 7 23:56:52.228466 systemd[1]: Successfully loaded SELinux policy in 34.068ms. May 7 23:56:52.228486 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.775ms. May 7 23:56:52.228497 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 7 23:56:52.228509 systemd[1]: Detected virtualization kvm. May 7 23:56:52.228521 systemd[1]: Detected architecture arm64. May 7 23:56:52.228531 systemd[1]: Detected first boot. May 7 23:56:52.228542 systemd[1]: Initializing machine ID from VM UUID. May 7 23:56:52.228552 zram_generator::config[1049]: No configuration found. May 7 23:56:52.228563 kernel: NET: Registered PF_VSOCK protocol family May 7 23:56:52.228578 systemd[1]: Populated /etc with preset unit settings. May 7 23:56:52.228588 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 7 23:56:52.228598 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 7 23:56:52.228610 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 7 23:56:52.228620 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 7 23:56:52.228632 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 7 23:56:52.228642 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 7 23:56:52.228652 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 7 23:56:52.228662 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 7 23:56:52.228675 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 7 23:56:52.228686 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 7 23:56:52.228696 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 7 23:56:52.228708 systemd[1]: Created slice user.slice - User and Session Slice. May 7 23:56:52.228718 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 7 23:56:52.228728 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
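The kernel lines above list the policy capabilities compiled into the SELinux policy that was just loaded. Once a policy is loaded, the same flags can be read back from selinuxfs; a small sketch, assuming selinuxfs is mounted at /sys/fs/selinux:

    # Sketch: read back the SELinux policy capabilities reported by the kernel above.
    # Assumes selinuxfs is mounted at /sys/fs/selinux (true once a policy is loaded).
    from pathlib import Path

    caps_dir = Path("/sys/fs/selinux/policy_capabilities")
    if caps_dir.is_dir():
        for cap in sorted(caps_dir.iterdir()):
            print(f"{cap.name}={cap.read_text().strip()}")
    else:
        print("selinuxfs not mounted or no policy loaded")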
May 7 23:56:52.228740 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 7 23:56:52.228750 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 7 23:56:52.228760 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 7 23:56:52.228770 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 7 23:56:52.228780 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 7 23:56:52.228791 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 7 23:56:52.228803 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 7 23:56:52.228813 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 7 23:56:52.228823 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 7 23:56:52.228834 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 7 23:56:52.228844 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 7 23:56:52.228854 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 7 23:56:52.228864 systemd[1]: Reached target slices.target - Slice Units. May 7 23:56:52.228874 systemd[1]: Reached target swap.target - Swaps. May 7 23:56:52.228886 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 7 23:56:52.228896 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 7 23:56:52.228906 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 7 23:56:52.228917 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 7 23:56:52.228933 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 7 23:56:52.228943 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 7 23:56:52.228953 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 7 23:56:52.228964 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 7 23:56:52.228974 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 7 23:56:52.228985 systemd[1]: Mounting media.mount - External Media Directory... May 7 23:56:52.228996 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 7 23:56:52.229005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 7 23:56:52.229016 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 7 23:56:52.229026 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 7 23:56:52.229036 systemd[1]: Reached target machines.target - Containers. May 7 23:56:52.229047 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 7 23:56:52.229057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 7 23:56:52.229069 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 7 23:56:52.229079 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 7 23:56:52.229089 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 7 23:56:52.229099 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 7 23:56:52.229109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 7 23:56:52.229119 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 7 23:56:52.229129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 7 23:56:52.229139 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 7 23:56:52.229151 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 7 23:56:52.229160 kernel: fuse: init (API version 7.39) May 7 23:56:52.229170 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 7 23:56:52.229181 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 7 23:56:52.229191 systemd[1]: Stopped systemd-fsck-usr.service. May 7 23:56:52.229202 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 7 23:56:52.229212 kernel: loop: module loaded May 7 23:56:52.229221 kernel: ACPI: bus type drm_connector registered May 7 23:56:52.229230 systemd[1]: Starting systemd-journald.service - Journal Service... May 7 23:56:52.229241 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 7 23:56:52.229252 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 7 23:56:52.229262 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 7 23:56:52.229288 systemd-journald[1124]: Collecting audit messages is disabled. May 7 23:56:52.229310 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 7 23:56:52.229326 systemd-journald[1124]: Journal started May 7 23:56:52.229358 systemd-journald[1124]: Runtime Journal (/run/log/journal/a38765cf09ed44d08598beb1377cf72b) is 5.9M, max 47.3M, 41.4M free. May 7 23:56:52.033267 systemd[1]: Queued start job for default target multi-user.target. May 7 23:56:52.045147 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 7 23:56:52.045554 systemd[1]: systemd-journald.service: Deactivated successfully. May 7 23:56:52.232388 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 7 23:56:52.233948 systemd[1]: verity-setup.service: Deactivated successfully. May 7 23:56:52.233991 systemd[1]: Stopped verity-setup.service. May 7 23:56:52.238840 systemd[1]: Started systemd-journald.service - Journal Service. May 7 23:56:52.239472 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 7 23:56:52.240627 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 7 23:56:52.241851 systemd[1]: Mounted media.mount - External Media Directory. May 7 23:56:52.242960 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 7 23:56:52.244142 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 7 23:56:52.245386 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 7 23:56:52.248380 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 7 23:56:52.249753 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 7 23:56:52.251268 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 7 23:56:52.251468 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 7 23:56:52.252847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 7 23:56:52.253013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 7 23:56:52.254455 systemd[1]: modprobe@drm.service: Deactivated successfully. May 7 23:56:52.254638 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 7 23:56:52.255928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 7 23:56:52.256090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 7 23:56:52.257592 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 7 23:56:52.257748 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 7 23:56:52.259710 systemd[1]: modprobe@loop.service: Deactivated successfully. May 7 23:56:52.259896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 7 23:56:52.263439 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 7 23:56:52.264823 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 7 23:56:52.266405 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 7 23:56:52.267946 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 7 23:56:52.281842 systemd[1]: Reached target network-pre.target - Preparation for Network. May 7 23:56:52.292444 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 7 23:56:52.294464 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 7 23:56:52.295589 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 7 23:56:52.295625 systemd[1]: Reached target local-fs.target - Local File Systems. May 7 23:56:52.297551 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 7 23:56:52.299697 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 7 23:56:52.301708 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 7 23:56:52.302821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 7 23:56:52.304108 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 7 23:56:52.306001 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 7 23:56:52.307293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 7 23:56:52.310515 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 7 23:56:52.311617 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 7 23:56:52.314572 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 7 23:56:52.318108 systemd-journald[1124]: Time spent on flushing to /var/log/journal/a38765cf09ed44d08598beb1377cf72b is 25.635ms for 864 entries. 
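All of the entries in this transcript are systemd-journald output, and the runtime journal sized above is flushed to /var/log/journal shortly after this point. A minimal sketch for re-reading this boot's log with the same precise timestamps, assuming journalctl is on PATH and the caller is allowed to read the system journal:

    # Sketch: re-read this boot's journal with the precise timestamps seen above.
    import subprocess

    subprocess.run(
        ["journalctl", "-b", "-o", "short-precise", "--no-pager"],
        check=True,
    )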
May 7 23:56:52.318108 systemd-journald[1124]: System Journal (/var/log/journal/a38765cf09ed44d08598beb1377cf72b) is 8M, max 195.6M, 187.6M free. May 7 23:56:52.348363 systemd-journald[1124]: Received client request to flush runtime journal. May 7 23:56:52.348399 kernel: loop0: detected capacity change from 0 to 123192 May 7 23:56:52.320543 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 7 23:56:52.323274 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 7 23:56:52.326135 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 7 23:56:52.328783 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 7 23:56:52.330078 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 7 23:56:52.334370 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 7 23:56:52.336743 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 7 23:56:52.344871 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 7 23:56:52.359014 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 7 23:56:52.364174 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 7 23:56:52.367433 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 7 23:56:52.369305 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 7 23:56:52.371879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 7 23:56:52.377025 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 7 23:56:52.387080 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 7 23:56:52.396113 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 7 23:56:52.403380 kernel: loop1: detected capacity change from 0 to 113512 May 7 23:56:52.404663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 7 23:56:52.423094 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. May 7 23:56:52.423113 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. May 7 23:56:52.427089 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 7 23:56:52.427367 kernel: loop2: detected capacity change from 0 to 201592 May 7 23:56:52.472378 kernel: loop3: detected capacity change from 0 to 123192 May 7 23:56:52.478373 kernel: loop4: detected capacity change from 0 to 113512 May 7 23:56:52.484373 kernel: loop5: detected capacity change from 0 to 201592 May 7 23:56:52.489822 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 7 23:56:52.490212 (sd-merge)[1191]: Merged extensions into '/usr'. May 7 23:56:52.493549 systemd[1]: Reload requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... May 7 23:56:52.493570 systemd[1]: Reloading... May 7 23:56:52.545340 zram_generator::config[1216]: No configuration found. May 7 23:56:52.590764 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
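The (sd-merge) lines show systemd-sysext overlaying the extension images that Ignition linked under /etc/extensions (op(9) earlier) onto /usr, which is what makes the containerd, docker and kubernetes payloads visible to the host. A sketch of inspecting and re-merging them, assuming the systemd-sysext CLI shipped with the systemd 256 build reported above:

    # Sketch: inspect and re-merge system extensions, as systemd-sysext did above.
    import subprocess

    subprocess.run(["systemd-sysext", "list"], check=True)     # lists containerd-flatcar, docker-flatcar, kubernetes
    subprocess.run(["systemd-sysext", "refresh"], check=True)  # re-merges images from /etc/extensions and /var/lib/extensions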
May 7 23:56:52.636622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:56:52.685730 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 7 23:56:52.685980 systemd[1]: Reloading finished in 192 ms. May 7 23:56:52.710377 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 7 23:56:52.711786 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 7 23:56:52.728608 systemd[1]: Starting ensure-sysext.service... May 7 23:56:52.731066 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 7 23:56:52.745560 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... May 7 23:56:52.745576 systemd[1]: Reloading... May 7 23:56:52.749025 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 7 23:56:52.749224 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 7 23:56:52.749882 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 7 23:56:52.750087 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. May 7 23:56:52.750138 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. May 7 23:56:52.752655 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. May 7 23:56:52.752669 systemd-tmpfiles[1254]: Skipping /boot May 7 23:56:52.761028 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. May 7 23:56:52.761047 systemd-tmpfiles[1254]: Skipping /boot May 7 23:56:52.789376 zram_generator::config[1284]: No configuration found. May 7 23:56:52.868493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:56:52.917532 systemd[1]: Reloading finished in 171 ms. May 7 23:56:52.932750 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 7 23:56:52.950405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 7 23:56:52.957840 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 7 23:56:52.960305 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 7 23:56:52.962759 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 7 23:56:52.965893 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 7 23:56:52.971614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 7 23:56:52.976113 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 7 23:56:52.979813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 7 23:56:52.981391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 7 23:56:52.984464 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 7 23:56:52.989069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 7 23:56:52.990309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 7 23:56:52.990459 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 7 23:56:52.993736 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 7 23:56:52.995940 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 7 23:56:52.996121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 7 23:56:52.998884 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 7 23:56:53.007744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 7 23:56:53.009694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 7 23:56:53.010065 systemd-udevd[1324]: Using default interface naming scheme 'v255'. May 7 23:56:53.011469 systemd[1]: modprobe@loop.service: Deactivated successfully. May 7 23:56:53.011619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 7 23:56:53.020074 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 7 23:56:53.029783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 7 23:56:53.034002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 7 23:56:53.040087 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 7 23:56:53.041272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 7 23:56:53.041411 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 7 23:56:53.042578 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 7 23:56:53.044852 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 7 23:56:53.049165 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 7 23:56:53.050904 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 7 23:56:53.052538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 7 23:56:53.052693 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 7 23:56:53.054218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 7 23:56:53.054410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 7 23:56:53.064606 systemd[1]: Finished ensure-sysext.service. May 7 23:56:53.072578 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 7 23:56:53.074063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 7 23:56:53.079523 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 7 23:56:53.085384 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 7 23:56:53.086375 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1355) May 7 23:56:53.088109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 7 23:56:53.088160 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 7 23:56:53.091576 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 7 23:56:53.096677 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 7 23:56:53.098257 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 7 23:56:53.098770 systemd[1]: modprobe@loop.service: Deactivated successfully. May 7 23:56:53.100381 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 7 23:56:53.101966 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 7 23:56:53.103465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 7 23:56:53.103624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 7 23:56:53.106272 systemd[1]: modprobe@drm.service: Deactivated successfully. May 7 23:56:53.106472 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 7 23:56:53.108093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 7 23:56:53.108233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 7 23:56:53.112696 augenrules[1388]: No rules May 7 23:56:53.113365 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 7 23:56:53.115901 systemd[1]: audit-rules.service: Deactivated successfully. May 7 23:56:53.117401 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 7 23:56:53.129421 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 7 23:56:53.131297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 7 23:56:53.132005 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 7 23:56:53.151953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 7 23:56:53.161534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 7 23:56:53.183794 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 7 23:56:53.203142 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 7 23:56:53.204782 systemd[1]: Reached target time-set.target - System Time Set. May 7 23:56:53.222143 systemd-networkd[1384]: lo: Link UP May 7 23:56:53.222151 systemd-networkd[1384]: lo: Gained carrier May 7 23:56:53.223293 systemd-networkd[1384]: Enumeration completed May 7 23:56:53.223453 systemd[1]: Started systemd-networkd.service - Network Configuration. May 7 23:56:53.227308 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 7 23:56:53.227325 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 7 23:56:53.229035 systemd-networkd[1384]: eth0: Link UP May 7 23:56:53.229045 systemd-networkd[1384]: eth0: Gained carrier May 7 23:56:53.229059 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 7 23:56:53.232600 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 7 23:56:53.235222 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 7 23:56:53.238699 systemd-resolved[1323]: Positive Trust Anchors: May 7 23:56:53.238712 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 7 23:56:53.238744 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 7 23:56:53.239418 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 7 23:56:53.239936 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. May 7 23:56:53.240454 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 7 23:56:53.240495 systemd-timesyncd[1386]: Initial clock synchronization to Wed 2025-05-07 23:56:53.529571 UTC. May 7 23:56:53.248881 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 7 23:56:53.250555 systemd-resolved[1323]: Defaulting to hostname 'linux'. May 7 23:56:53.258511 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 7 23:56:53.262835 systemd[1]: Reached target network.target - Network. May 7 23:56:53.263878 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 7 23:56:53.273576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 7 23:56:53.282404 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 7 23:56:53.285414 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 7 23:56:53.299686 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 7 23:56:53.319747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 7 23:56:53.335909 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 7 23:56:53.337429 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 7 23:56:53.339456 systemd[1]: Reached target sysinit.target - System Initialization. May 7 23:56:53.340597 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 7 23:56:53.341744 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
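Here eth0 matched the generic /usr/lib/systemd/network/zz-default.network shipped in the image and obtained 10.0.0.101/16 over DHCPv4. An equivalent per-interface unit could instead be dropped into /etc/systemd/network; the sketch below is a hypothetical example (the 10-eth0.network filename and matching by interface name are assumptions, not taken from this log):

    # Sketch: a minimal systemd-networkd unit equivalent to the DHCP setup seen above.
    # The 10-eth0.network path and matching by name are assumptions; this image
    # relied on the generic zz-default.network instead.
    from pathlib import Path

    unit = """\
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    """

    Path("/etc/systemd/network/10-eth0.network").write_text(unit)
    # followed by: networkctl reload  (or restarting systemd-networkd)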
May 7 23:56:53.343082 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 7 23:56:53.344201 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 7 23:56:53.345418 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 7 23:56:53.346573 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 7 23:56:53.346603 systemd[1]: Reached target paths.target - Path Units. May 7 23:56:53.347489 systemd[1]: Reached target timers.target - Timer Units. May 7 23:56:53.349230 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 7 23:56:53.351497 systemd[1]: Starting docker.socket - Docker Socket for the API... May 7 23:56:53.354489 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 7 23:56:53.355827 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 7 23:56:53.357038 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 7 23:56:53.361191 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 7 23:56:53.362679 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 7 23:56:53.364847 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 7 23:56:53.366447 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 7 23:56:53.367548 systemd[1]: Reached target sockets.target - Socket Units. May 7 23:56:53.368464 systemd[1]: Reached target basic.target - Basic System. May 7 23:56:53.369406 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 7 23:56:53.369433 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 7 23:56:53.370273 systemd[1]: Starting containerd.service - containerd container runtime... May 7 23:56:53.371867 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 7 23:56:53.373534 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 7 23:56:53.376260 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 7 23:56:53.380518 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 7 23:56:53.382060 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 7 23:56:53.383000 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 7 23:56:53.385438 jq[1434]: false May 7 23:56:53.386551 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 7 23:56:53.391137 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 7 23:56:53.394411 extend-filesystems[1435]: Found loop3 May 7 23:56:53.394411 extend-filesystems[1435]: Found loop4 May 7 23:56:53.394411 extend-filesystems[1435]: Found loop5 May 7 23:56:53.394411 extend-filesystems[1435]: Found vda May 7 23:56:53.394411 extend-filesystems[1435]: Found vda1 May 7 23:56:53.394411 extend-filesystems[1435]: Found vda2 May 7 23:56:53.394411 extend-filesystems[1435]: Found vda3 May 7 23:56:53.394411 extend-filesystems[1435]: Found usr May 7 23:56:53.394411 extend-filesystems[1435]: Found vda4 May 7 23:56:53.394411 extend-filesystems[1435]: Found vda6 May 7 23:56:53.394411 extend-filesystems[1435]: Found vda7 May 7 23:56:53.394411 extend-filesystems[1435]: Found vda9 May 7 23:56:53.394411 extend-filesystems[1435]: Checking size of /dev/vda9 May 7 23:56:53.394618 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 7 23:56:53.411608 dbus-daemon[1433]: [system] SELinux support is enabled May 7 23:56:53.401430 systemd[1]: Starting systemd-logind.service - User Login Management... May 7 23:56:53.403840 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 7 23:56:53.404239 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 7 23:56:53.405187 systemd[1]: Starting update-engine.service - Update Engine... May 7 23:56:53.419746 jq[1451]: true May 7 23:56:53.408714 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 7 23:56:53.411737 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 7 23:56:53.413719 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 7 23:56:53.420664 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 7 23:56:53.420836 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 7 23:56:53.421141 systemd[1]: motdgen.service: Deactivated successfully. May 7 23:56:53.421347 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 7 23:56:53.426481 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1364) May 7 23:56:53.426064 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 7 23:56:53.426242 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 7 23:56:53.434779 extend-filesystems[1435]: Resized partition /dev/vda9 May 7 23:56:53.440415 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 7 23:56:53.440451 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 7 23:56:53.444509 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 7 23:56:53.444534 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 7 23:56:53.448917 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 7 23:56:53.455479 jq[1458]: true May 7 23:56:53.460897 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024) May 7 23:56:53.468665 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 7 23:56:53.468712 tar[1457]: linux-arm64/LICENSE May 7 23:56:53.468712 tar[1457]: linux-arm64/helm May 7 23:56:53.468925 update_engine[1450]: I20250507 23:56:53.464076 1450 main.cc:92] Flatcar Update Engine starting May 7 23:56:53.470731 systemd[1]: Started update-engine.service - Update Engine. May 7 23:56:53.471642 update_engine[1450]: I20250507 23:56:53.471511 1450 update_check_scheduler.cc:74] Next update check in 7m16s May 7 23:56:53.480654 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 7 23:56:53.496382 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 7 23:56:53.531046 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 7 23:56:53.531046 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1 May 7 23:56:53.531046 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 7 23:56:53.537637 extend-filesystems[1435]: Resized filesystem in /dev/vda9 May 7 23:56:53.532333 systemd[1]: extend-filesystems.service: Deactivated successfully. May 7 23:56:53.532565 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 7 23:56:53.534630 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (Power Button) May 7 23:56:53.535051 systemd-logind[1448]: New seat seat0. May 7 23:56:53.539187 systemd[1]: Started systemd-logind.service - User Login Management. May 7 23:56:53.543856 bash[1487]: Updated "/home/core/.ssh/authorized_keys" May 7 23:56:53.545347 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 7 23:56:53.546742 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 7 23:56:53.547685 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 7 23:56:53.654982 containerd[1467]: time="2025-05-07T23:56:53.654823600Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 7 23:56:53.682295 containerd[1467]: time="2025-05-07T23:56:53.682171320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 7 23:56:53.685374 containerd[1467]: time="2025-05-07T23:56:53.685321440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 7 23:56:53.685374 containerd[1467]: time="2025-05-07T23:56:53.685367600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 7 23:56:53.685453 containerd[1467]: time="2025-05-07T23:56:53.685386680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 7 23:56:53.685569 containerd[1467]: time="2025-05-07T23:56:53.685529600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 May 7 23:56:53.685569 containerd[1467]: time="2025-05-07T23:56:53.685554480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 7 23:56:53.685631 containerd[1467]: time="2025-05-07T23:56:53.685611600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 7 23:56:53.685631 containerd[1467]: time="2025-05-07T23:56:53.685627840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 7 23:56:53.685838 containerd[1467]: time="2025-05-07T23:56:53.685817840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 7 23:56:53.685877 containerd[1467]: time="2025-05-07T23:56:53.685838120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 7 23:56:53.685877 containerd[1467]: time="2025-05-07T23:56:53.685851360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 7 23:56:53.685877 containerd[1467]: time="2025-05-07T23:56:53.685859840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 7 23:56:53.685946 containerd[1467]: time="2025-05-07T23:56:53.685931760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 7 23:56:53.686145 containerd[1467]: time="2025-05-07T23:56:53.686125840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 7 23:56:53.686270 containerd[1467]: time="2025-05-07T23:56:53.686253200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 7 23:56:53.686315 containerd[1467]: time="2025-05-07T23:56:53.686270760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 7 23:56:53.686392 containerd[1467]: time="2025-05-07T23:56:53.686377000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 7 23:56:53.686442 containerd[1467]: time="2025-05-07T23:56:53.686429280Z" level=info msg="metadata content store policy set" policy=shared May 7 23:56:53.689334 containerd[1467]: time="2025-05-07T23:56:53.689289640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 7 23:56:53.689421 containerd[1467]: time="2025-05-07T23:56:53.689345520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 7 23:56:53.689421 containerd[1467]: time="2025-05-07T23:56:53.689379120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 7 23:56:53.689421 containerd[1467]: time="2025-05-07T23:56:53.689395480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 May 7 23:56:53.689421 containerd[1467]: time="2025-05-07T23:56:53.689409040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 7 23:56:53.689558 containerd[1467]: time="2025-05-07T23:56:53.689535840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 7 23:56:53.689754 containerd[1467]: time="2025-05-07T23:56:53.689737440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 7 23:56:53.689878 containerd[1467]: time="2025-05-07T23:56:53.689828000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 7 23:56:53.689878 containerd[1467]: time="2025-05-07T23:56:53.689851760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 7 23:56:53.689878 containerd[1467]: time="2025-05-07T23:56:53.689865400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 7 23:56:53.689878 containerd[1467]: time="2025-05-07T23:56:53.689877560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 7 23:56:53.689957 containerd[1467]: time="2025-05-07T23:56:53.689890240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 7 23:56:53.689957 containerd[1467]: time="2025-05-07T23:56:53.689901720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 7 23:56:53.689957 containerd[1467]: time="2025-05-07T23:56:53.689913960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 7 23:56:53.689957 containerd[1467]: time="2025-05-07T23:56:53.689928600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 7 23:56:53.689957 containerd[1467]: time="2025-05-07T23:56:53.689940840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 7 23:56:53.689957 containerd[1467]: time="2025-05-07T23:56:53.689951640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 7 23:56:53.690056 containerd[1467]: time="2025-05-07T23:56:53.689961000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 7 23:56:53.690056 containerd[1467]: time="2025-05-07T23:56:53.689978760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690056 containerd[1467]: time="2025-05-07T23:56:53.689990800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690056 containerd[1467]: time="2025-05-07T23:56:53.690001840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690056 containerd[1467]: time="2025-05-07T23:56:53.690013480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690056 containerd[1467]: time="2025-05-07T23:56:53.690024800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 May 7 23:56:53.690056 containerd[1467]: time="2025-05-07T23:56:53.690036720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690056 containerd[1467]: time="2025-05-07T23:56:53.690046800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690056 containerd[1467]: time="2025-05-07T23:56:53.690058720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690199 containerd[1467]: time="2025-05-07T23:56:53.690071200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690199 containerd[1467]: time="2025-05-07T23:56:53.690085600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690199 containerd[1467]: time="2025-05-07T23:56:53.690096600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690199 containerd[1467]: time="2025-05-07T23:56:53.690107160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690199 containerd[1467]: time="2025-05-07T23:56:53.690117920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690199 containerd[1467]: time="2025-05-07T23:56:53.690131640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 7 23:56:53.690199 containerd[1467]: time="2025-05-07T23:56:53.690150000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690199 containerd[1467]: time="2025-05-07T23:56:53.690162080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690199 containerd[1467]: time="2025-05-07T23:56:53.690173720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 7 23:56:53.690432 containerd[1467]: time="2025-05-07T23:56:53.690342840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 7 23:56:53.690432 containerd[1467]: time="2025-05-07T23:56:53.690379240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 7 23:56:53.690432 containerd[1467]: time="2025-05-07T23:56:53.690390240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 7 23:56:53.690432 containerd[1467]: time="2025-05-07T23:56:53.690401640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 7 23:56:53.690432 containerd[1467]: time="2025-05-07T23:56:53.690412080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690432 containerd[1467]: time="2025-05-07T23:56:53.690424520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 7 23:56:53.690432 containerd[1467]: time="2025-05-07T23:56:53.690433800Z" level=info msg="NRI interface is disabled by configuration." 
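The plugin-loading entries above show containerd's registry coming up: snapshotters that cannot work on this root filesystem (blockfile, btrfs, devmapper, zfs) are skipped with a stated reason, the overlayfs snapshotter and bolt metadata store load, and the service, GRPC, streaming, and internal plugins register (tracing and NRI stay disabled). The same inventory can be read back over the introspection API; a minimal sketch assuming containerd's v1.7 Go client and the default socket path (exact generated field names can differ between releases):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
)

func main() {
	// Socket path as reported later in this log.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// nil filters: list every registered plugin, loaded or skipped.
	resp, err := client.IntrospectionService().Plugins(context.Background(), nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range resp.Plugins {
		status := "ok"
		if p.InitErr != nil {
			status = "skipped/failed"
		}
		fmt.Printf("%s.%s: %s\n", p.Type, p.ID, status)
	}
}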
May 7 23:56:53.690614 containerd[1467]: time="2025-05-07T23:56:53.690443640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 7 23:56:53.690808 containerd[1467]: time="2025-05-07T23:56:53.690762480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 7 23:56:53.690808 containerd[1467]: time="2025-05-07T23:56:53.690809720Z" level=info msg="Connect containerd service" May 7 23:56:53.690940 containerd[1467]: time="2025-05-07T23:56:53.690838320Z" level=info msg="using legacy CRI server" May 7 23:56:53.690940 containerd[1467]: time="2025-05-07T23:56:53.690845160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 7 23:56:53.691082 containerd[1467]: time="2025-05-07T23:56:53.691065760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 7 23:56:53.691689 containerd[1467]: time="2025-05-07T23:56:53.691661160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 7 23:56:53.693701 containerd[1467]: time="2025-05-07T23:56:53.692099000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 7 23:56:53.693701 containerd[1467]: time="2025-05-07T23:56:53.692139800Z" level=info msg=serving... address=/run/containerd/containerd.sock May 7 23:56:53.693701 containerd[1467]: time="2025-05-07T23:56:53.692212520Z" level=info msg="Start subscribing containerd event" May 7 23:56:53.693701 containerd[1467]: time="2025-05-07T23:56:53.692261840Z" level=info msg="Start recovering state" May 7 23:56:53.693701 containerd[1467]: time="2025-05-07T23:56:53.692334080Z" level=info msg="Start event monitor" May 7 23:56:53.693701 containerd[1467]: time="2025-05-07T23:56:53.692344440Z" level=info msg="Start snapshots syncer" May 7 23:56:53.693701 containerd[1467]: time="2025-05-07T23:56:53.692364560Z" level=info msg="Start cni network conf syncer for default" May 7 23:56:53.693701 containerd[1467]: time="2025-05-07T23:56:53.692373920Z" level=info msg="Start streaming server" May 7 23:56:53.692622 systemd[1]: Started containerd.service - containerd container runtime. May 7 23:56:53.696109 containerd[1467]: time="2025-05-07T23:56:53.694431960Z" level=info msg="containerd successfully booted in 0.042384s" May 7 23:56:53.883185 tar[1457]: linux-arm64/README.md May 7 23:56:53.895022 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 7 23:56:53.897039 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 7 23:56:53.914444 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 7 23:56:53.924560 systemd[1]: Starting issuegen.service - Generate /run/issue... May 7 23:56:53.929502 systemd[1]: issuegen.service: Deactivated successfully. May 7 23:56:53.929699 systemd[1]: Finished issuegen.service - Generate /run/issue. May 7 23:56:53.932183 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 7 23:56:53.942412 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 7 23:56:53.945047 systemd[1]: Started getty@tty1.service - Getty on tty1. May 7 23:56:53.947140 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 7 23:56:53.948504 systemd[1]: Reached target getty.target - Login Prompts. May 7 23:56:54.384618 systemd-networkd[1384]: eth0: Gained IPv6LL May 7 23:56:54.387122 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 7 23:56:54.388980 systemd[1]: Reached target network-online.target - Network is Online. May 7 23:56:54.397620 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 7 23:56:54.400160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:56:54.402407 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 7 23:56:54.416868 systemd[1]: coreos-metadata.service: Deactivated successfully. May 7 23:56:54.417145 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 7 23:56:54.418888 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 7 23:56:54.425473 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 7 23:56:54.926656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:56:54.928316 systemd[1]: Reached target multi-user.target - Multi-User System. 
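The cni load error that closes the containerd startup above is expected on a node that has not joined a cluster yet: the CRI plugin looks for a network configuration under /etc/cni/net.d (binaries under /opt/cni/bin, per the config dump) and finds none, so pod networking stays uninitialized until a network add-on installs its config. Purely as an illustration of what the plugin is waiting for (the file name, network name, and bridge settings below are assumptions, not what any particular add-on writes), a minimal bridge conflist dropped into that directory would satisfy it:

package main

import (
	"log"
	"os"
)

// Hypothetical minimal CNI config; real clusters get this from their
// network add-on (flannel, calico, ...), not from a hand-written file.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	// NetworkPluginMaxConfNum:1 in the CRI config means only the lexically
	// first file in this directory is used.
	if err := os.WriteFile("/etc/cni/net.d/10-example.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}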
May 7 23:56:54.929982 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 7 23:56:54.931308 systemd[1]: Startup finished in 603ms (kernel) + 5.932s (initrd) + 3.322s (userspace) = 9.857s. May 7 23:56:55.336883 kubelet[1549]: E0507 23:56:55.336773 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 7 23:56:55.339556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 7 23:56:55.339697 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 7 23:56:55.340003 systemd[1]: kubelet.service: Consumed 771ms CPU time, 248.3M memory peak. May 7 23:56:58.682659 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 7 23:56:58.683767 systemd[1]: Started sshd@0-10.0.0.101:22-10.0.0.1:37196.service - OpenSSH per-connection server daemon (10.0.0.1:37196). May 7 23:56:58.755472 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 37196 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:56:58.757098 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:56:58.766576 systemd-logind[1448]: New session 1 of user core. May 7 23:56:58.767535 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 7 23:56:58.776622 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 7 23:56:58.784956 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 7 23:56:58.787102 systemd[1]: Starting user@500.service - User Manager for UID 500... May 7 23:56:58.792750 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 7 23:56:58.795012 systemd-logind[1448]: New session c1 of user core. May 7 23:56:58.900486 systemd[1566]: Queued start job for default target default.target. May 7 23:56:58.910599 systemd[1566]: Created slice app.slice - User Application Slice. May 7 23:56:58.910722 systemd[1566]: Reached target paths.target - Paths. May 7 23:56:58.910813 systemd[1566]: Reached target timers.target - Timers. May 7 23:56:58.912220 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... May 7 23:56:58.920952 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 7 23:56:58.921001 systemd[1566]: Reached target sockets.target - Sockets. May 7 23:56:58.921034 systemd[1566]: Reached target basic.target - Basic System. May 7 23:56:58.921074 systemd[1566]: Reached target default.target - Main User Target. May 7 23:56:58.921100 systemd[1566]: Startup finished in 121ms. May 7 23:56:58.921309 systemd[1]: Started user@500.service - User Manager for UID 500. May 7 23:56:58.922606 systemd[1]: Started session-1.scope - Session 1 of User core. May 7 23:56:58.982578 systemd[1]: Started sshd@1-10.0.0.101:22-10.0.0.1:37200.service - OpenSSH per-connection server daemon (10.0.0.1:37200). 
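The kubelet exit above (run.go:72, /var/lib/kubelet/config.yaml missing) is the normal first-boot loop on a node that has not run kubeadm yet: the unit starts, finds no config file, exits with status 1, and systemd keeps rescheduling it until kubeadm init or join writes the file. For illustration only, a sketch of a minimal KubeletConfiguration written from Go; the field values are assumptions (kubeadm derives the real ones from the cluster configuration), though they line up with the systemd cgroup driver and containerd socket seen elsewhere in this log:

package main

import (
	"log"
	"os"
)

// Assumed minimal KubeletConfiguration; kubeadm normally generates this file.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}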
May 7 23:56:59.022718 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 37200 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:56:59.023978 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:56:59.028670 systemd-logind[1448]: New session 2 of user core. May 7 23:56:59.040516 systemd[1]: Started session-2.scope - Session 2 of User core. May 7 23:56:59.090958 sshd[1579]: Connection closed by 10.0.0.1 port 37200 May 7 23:56:59.091344 sshd-session[1577]: pam_unix(sshd:session): session closed for user core May 7 23:56:59.104638 systemd[1]: sshd@1-10.0.0.101:22-10.0.0.1:37200.service: Deactivated successfully. May 7 23:56:59.107785 systemd[1]: session-2.scope: Deactivated successfully. May 7 23:56:59.108465 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. May 7 23:56:59.110509 systemd[1]: Started sshd@2-10.0.0.101:22-10.0.0.1:37212.service - OpenSSH per-connection server daemon (10.0.0.1:37212). May 7 23:56:59.111711 systemd-logind[1448]: Removed session 2. May 7 23:56:59.150792 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 37212 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:56:59.151868 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:56:59.155425 systemd-logind[1448]: New session 3 of user core. May 7 23:56:59.165533 systemd[1]: Started session-3.scope - Session 3 of User core. May 7 23:56:59.212828 sshd[1587]: Connection closed by 10.0.0.1 port 37212 May 7 23:56:59.213099 sshd-session[1584]: pam_unix(sshd:session): session closed for user core May 7 23:56:59.233237 systemd[1]: sshd@2-10.0.0.101:22-10.0.0.1:37212.service: Deactivated successfully. May 7 23:56:59.234552 systemd[1]: session-3.scope: Deactivated successfully. May 7 23:56:59.235711 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. May 7 23:56:59.236729 systemd[1]: Started sshd@3-10.0.0.101:22-10.0.0.1:37218.service - OpenSSH per-connection server daemon (10.0.0.1:37218). May 7 23:56:59.237428 systemd-logind[1448]: Removed session 3. May 7 23:56:59.276221 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 37218 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:56:59.277346 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:56:59.280915 systemd-logind[1448]: New session 4 of user core. May 7 23:56:59.291570 systemd[1]: Started session-4.scope - Session 4 of User core. May 7 23:56:59.343410 sshd[1595]: Connection closed by 10.0.0.1 port 37218 May 7 23:56:59.343277 sshd-session[1592]: pam_unix(sshd:session): session closed for user core May 7 23:56:59.352306 systemd[1]: sshd@3-10.0.0.101:22-10.0.0.1:37218.service: Deactivated successfully. May 7 23:56:59.353711 systemd[1]: session-4.scope: Deactivated successfully. May 7 23:56:59.354384 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. May 7 23:56:59.355987 systemd[1]: Started sshd@4-10.0.0.101:22-10.0.0.1:37220.service - OpenSSH per-connection server daemon (10.0.0.1:37220). May 7 23:56:59.356704 systemd-logind[1448]: Removed session 4. 
May 7 23:56:59.395913 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 37220 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:56:59.397003 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:56:59.401218 systemd-logind[1448]: New session 5 of user core. May 7 23:56:59.412593 systemd[1]: Started session-5.scope - Session 5 of User core. May 7 23:56:59.470428 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 7 23:56:59.470711 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 7 23:56:59.850682 systemd[1]: Starting docker.service - Docker Application Container Engine... May 7 23:56:59.850755 (dockerd)[1625]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 7 23:57:00.106092 dockerd[1625]: time="2025-05-07T23:57:00.105931142Z" level=info msg="Starting up" May 7 23:57:00.294237 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1159685968-merged.mount: Deactivated successfully. May 7 23:57:00.311886 dockerd[1625]: time="2025-05-07T23:57:00.311842715Z" level=info msg="Loading containers: start." May 7 23:57:00.463145 kernel: Initializing XFRM netlink socket May 7 23:57:00.544630 systemd-networkd[1384]: docker0: Link UP May 7 23:57:00.579553 dockerd[1625]: time="2025-05-07T23:57:00.579522367Z" level=info msg="Loading containers: done." May 7 23:57:00.595157 dockerd[1625]: time="2025-05-07T23:57:00.595109565Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 7 23:57:00.595284 dockerd[1625]: time="2025-05-07T23:57:00.595205740Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 7 23:57:00.595445 dockerd[1625]: time="2025-05-07T23:57:00.595428537Z" level=info msg="Daemon has completed initialization" May 7 23:57:00.626905 dockerd[1625]: time="2025-05-07T23:57:00.626850564Z" level=info msg="API listen on /run/docker.sock" May 7 23:57:00.627160 systemd[1]: Started docker.service - Docker Application Container Engine. May 7 23:57:01.364729 containerd[1467]: time="2025-05-07T23:57:01.364674505Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 7 23:57:01.958619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279848333.mount: Deactivated successfully. 
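Once dockerd logs "API listen on /run/docker.sock" above, the daemon can be exercised over its socket. A small check with the Docker Go SDK, assuming the default socket and an SDK recent enough to negotiate an API version with the 27.3.1 daemon in this log:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to the default unix socket when DOCKER_HOST is unset.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	info, err := cli.Info(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// Expected to match the daemon entries above: overlay2 and 27.3.1.
	fmt.Println("storage driver:", info.Driver)
	fmt.Println("server version:", info.ServerVersion)
}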
May 7 23:57:03.449756 containerd[1467]: time="2025-05-07T23:57:03.449681705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:03.450134 containerd[1467]: time="2025-05-07T23:57:03.450082575Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 7 23:57:03.451106 containerd[1467]: time="2025-05-07T23:57:03.451071851Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:03.454182 containerd[1467]: time="2025-05-07T23:57:03.454142545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:03.455376 containerd[1467]: time="2025-05-07T23:57:03.455322512Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.090598664s" May 7 23:57:03.455414 containerd[1467]: time="2025-05-07T23:57:03.455373217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 7 23:57:03.456036 containerd[1467]: time="2025-05-07T23:57:03.456000360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 7 23:57:04.871509 containerd[1467]: time="2025-05-07T23:57:04.871455877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:04.872517 containerd[1467]: time="2025-05-07T23:57:04.872260456Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 7 23:57:04.873198 containerd[1467]: time="2025-05-07T23:57:04.873161141Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:04.876020 containerd[1467]: time="2025-05-07T23:57:04.875987768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:04.877256 containerd[1467]: time="2025-05-07T23:57:04.877221833Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.421184853s" May 7 23:57:04.877375 containerd[1467]: time="2025-05-07T23:57:04.877256156Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 7 23:57:04.878026 containerd[1467]: 
time="2025-05-07T23:57:04.877842197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 7 23:57:05.590127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 7 23:57:05.601038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:57:05.698613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:57:05.702063 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 7 23:57:05.737358 kubelet[1887]: E0507 23:57:05.737295 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 7 23:57:05.740558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 7 23:57:05.740717 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 7 23:57:05.741013 systemd[1]: kubelet.service: Consumed 132ms CPU time, 104.3M memory peak. May 7 23:57:06.315663 containerd[1467]: time="2025-05-07T23:57:06.315615346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:06.316658 containerd[1467]: time="2025-05-07T23:57:06.316376890Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 7 23:57:06.317407 containerd[1467]: time="2025-05-07T23:57:06.317336386Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:06.320385 containerd[1467]: time="2025-05-07T23:57:06.320313787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:06.321747 containerd[1467]: time="2025-05-07T23:57:06.321613864Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.443734257s" May 7 23:57:06.321747 containerd[1467]: time="2025-05-07T23:57:06.321650610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 7 23:57:06.322269 containerd[1467]: time="2025-05-07T23:57:06.322231650Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 7 23:57:07.378648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708730117.mount: Deactivated successfully. 
May 7 23:57:07.609673 containerd[1467]: time="2025-05-07T23:57:07.609631229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:07.610713 containerd[1467]: time="2025-05-07T23:57:07.610672984Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 7 23:57:07.611515 containerd[1467]: time="2025-05-07T23:57:07.611472525Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:07.613765 containerd[1467]: time="2025-05-07T23:57:07.613693867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:07.614445 containerd[1467]: time="2025-05-07T23:57:07.614297690Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.292032846s" May 7 23:57:07.614445 containerd[1467]: time="2025-05-07T23:57:07.614328847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 7 23:57:07.614937 containerd[1467]: time="2025-05-07T23:57:07.614909241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 7 23:57:08.186466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245597840.mount: Deactivated successfully. 
May 7 23:57:09.265920 containerd[1467]: time="2025-05-07T23:57:09.265861331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:09.266761 containerd[1467]: time="2025-05-07T23:57:09.266721104Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 7 23:57:09.267604 containerd[1467]: time="2025-05-07T23:57:09.267570989Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:09.271329 containerd[1467]: time="2025-05-07T23:57:09.271287549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:09.272458 containerd[1467]: time="2025-05-07T23:57:09.272052864Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.657105755s" May 7 23:57:09.272458 containerd[1467]: time="2025-05-07T23:57:09.272083171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 7 23:57:09.272665 containerd[1467]: time="2025-05-07T23:57:09.272623675Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 7 23:57:09.684779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978901447.mount: Deactivated successfully. 
May 7 23:57:09.688900 containerd[1467]: time="2025-05-07T23:57:09.688854158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:09.689620 containerd[1467]: time="2025-05-07T23:57:09.689525457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 7 23:57:09.690554 containerd[1467]: time="2025-05-07T23:57:09.690506338Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:09.692469 containerd[1467]: time="2025-05-07T23:57:09.692432045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:09.693341 containerd[1467]: time="2025-05-07T23:57:09.693306570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 420.6468ms" May 7 23:57:09.693341 containerd[1467]: time="2025-05-07T23:57:09.693337038Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 7 23:57:09.693836 containerd[1467]: time="2025-05-07T23:57:09.693804226Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 7 23:57:10.233016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795697622.mount: Deactivated successfully. May 7 23:57:12.866162 containerd[1467]: time="2025-05-07T23:57:12.865933726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:12.867137 containerd[1467]: time="2025-05-07T23:57:12.866827437Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 7 23:57:12.867985 containerd[1467]: time="2025-05-07T23:57:12.867943592Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:12.871523 containerd[1467]: time="2025-05-07T23:57:12.871495120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:12.872817 containerd[1467]: time="2025-05-07T23:57:12.872789857Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.178948425s" May 7 23:57:12.872817 containerd[1467]: time="2025-05-07T23:57:12.872820998Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 7 23:57:15.897001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
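The etcd pull above is the largest fetch of the sequence: about 67,941,650 bytes in 3.179 s, roughly 20 MiB/s, which is why it dominates the image-pull phase while the pause image completes in well under a second. The figure is easy to confirm from the numbers in the log:

package main

import "fmt"

func main() {
	// Figures taken from the etcd pull entry above.
	const bytes = 67941650.0
	const seconds = 3.178948425
	fmt.Printf("%.1f MiB/s\n", bytes/seconds/(1<<20)) // ~20.4 MiB/s
}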
May 7 23:57:15.906520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:57:16.007030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:57:16.010505 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 7 23:57:16.044309 kubelet[2050]: E0507 23:57:16.042590 2050 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 7 23:57:16.045010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 7 23:57:16.045152 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 7 23:57:16.045444 systemd[1]: kubelet.service: Consumed 124ms CPU time, 104M memory peak. May 7 23:57:17.337041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:57:17.337192 systemd[1]: kubelet.service: Consumed 124ms CPU time, 104M memory peak. May 7 23:57:17.345555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:57:17.365045 systemd[1]: Reload requested from client PID 2065 ('systemctl') (unit session-5.scope)... May 7 23:57:17.365067 systemd[1]: Reloading... May 7 23:57:17.435397 zram_generator::config[2109]: No configuration found. May 7 23:57:17.549606 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:57:17.621424 systemd[1]: Reloading finished in 256 ms. May 7 23:57:17.660749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:57:17.663854 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:57:17.664472 systemd[1]: kubelet.service: Deactivated successfully. May 7 23:57:17.664664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:57:17.664698 systemd[1]: kubelet.service: Consumed 79ms CPU time, 90.2M memory peak. May 7 23:57:17.666073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:57:17.759165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:57:17.763033 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 7 23:57:17.795967 kubelet[2156]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 7 23:57:17.795967 kubelet[2156]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 7 23:57:17.795967 kubelet[2156]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 7 23:57:17.796238 kubelet[2156]: I0507 23:57:17.796023 2156 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 7 23:57:18.218996 kubelet[2156]: I0507 23:57:18.218954 2156 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 7 23:57:18.218996 kubelet[2156]: I0507 23:57:18.218983 2156 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 7 23:57:18.219254 kubelet[2156]: I0507 23:57:18.219230 2156 server.go:954] "Client rotation is on, will bootstrap in background" May 7 23:57:18.275237 kubelet[2156]: I0507 23:57:18.275195 2156 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 7 23:57:18.277728 kubelet[2156]: E0507 23:57:18.277666 2156 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" May 7 23:57:18.281083 kubelet[2156]: E0507 23:57:18.281022 2156 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 7 23:57:18.281083 kubelet[2156]: I0507 23:57:18.281046 2156 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 7 23:57:18.283817 kubelet[2156]: I0507 23:57:18.283745 2156 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 7 23:57:18.284486 kubelet[2156]: I0507 23:57:18.284454 2156 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 7 23:57:18.284644 kubelet[2156]: I0507 23:57:18.284488 2156 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 7 23:57:18.284721 kubelet[2156]: I0507 23:57:18.284716 2156 topology_manager.go:138] "Creating topology manager with none policy" May 7 23:57:18.284747 kubelet[2156]: I0507 23:57:18.284724 2156 container_manager_linux.go:304] "Creating device plugin manager" May 7 23:57:18.284910 kubelet[2156]: I0507 23:57:18.284891 2156 state_mem.go:36] "Initialized new in-memory state store" May 7 23:57:18.287357 kubelet[2156]: I0507 23:57:18.287322 2156 kubelet.go:446] "Attempting to sync node with API server" May 7 23:57:18.287357 kubelet[2156]: I0507 23:57:18.287343 2156 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 7 23:57:18.287445 kubelet[2156]: I0507 23:57:18.287375 2156 kubelet.go:352] "Adding apiserver pod source" May 7 23:57:18.287445 kubelet[2156]: I0507 23:57:18.287387 2156 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 7 23:57:18.293915 kubelet[2156]: I0507 23:57:18.293620 2156 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 7 23:57:18.294384 kubelet[2156]: I0507 23:57:18.294152 2156 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 7 23:57:18.294384 kubelet[2156]: W0507 23:57:18.294267 2156 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 7 23:57:18.294518 kubelet[2156]: W0507 23:57:18.294485 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused May 7 23:57:18.294622 kubelet[2156]: E0507 23:57:18.294604 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" May 7 23:57:18.294808 kubelet[2156]: W0507 23:57:18.294767 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused May 7 23:57:18.294853 kubelet[2156]: E0507 23:57:18.294814 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" May 7 23:57:18.295069 kubelet[2156]: I0507 23:57:18.295042 2156 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 7 23:57:18.295108 kubelet[2156]: I0507 23:57:18.295074 2156 server.go:1287] "Started kubelet" May 7 23:57:18.295512 kubelet[2156]: I0507 23:57:18.295160 2156 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 7 23:57:18.295919 kubelet[2156]: I0507 23:57:18.295855 2156 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 7 23:57:18.296829 kubelet[2156]: I0507 23:57:18.296799 2156 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 7 23:57:18.299769 kubelet[2156]: I0507 23:57:18.298047 2156 server.go:490] "Adding debug handlers to kubelet server" May 7 23:57:18.299769 kubelet[2156]: E0507 23:57:18.298000 2156 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d640d302a5f96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-07 23:57:18.295056278 +0000 UTC m=+0.529218628,LastTimestamp:2025-05-07 23:57:18.295056278 +0000 UTC m=+0.529218628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 7 23:57:18.301232 kubelet[2156]: I0507 23:57:18.301210 2156 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 7 23:57:18.301299 kubelet[2156]: I0507 23:57:18.301290 2156 volume_manager.go:297] "Starting Kubelet Volume Manager" May 7 23:57:18.301410 kubelet[2156]: I0507 23:57:18.301393 2156 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 7 23:57:18.301698 kubelet[2156]: I0507 
23:57:18.301447 2156 reconciler.go:26] "Reconciler: start to sync state" May 7 23:57:18.301752 kubelet[2156]: W0507 23:57:18.301702 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused May 7 23:57:18.301752 kubelet[2156]: I0507 23:57:18.301725 2156 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 7 23:57:18.301752 kubelet[2156]: E0507 23:57:18.301737 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" May 7 23:57:18.302183 kubelet[2156]: E0507 23:57:18.302163 2156 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 7 23:57:18.302774 kubelet[2156]: E0507 23:57:18.302513 2156 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 7 23:57:18.302856 kubelet[2156]: E0507 23:57:18.302596 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="200ms" May 7 23:57:18.302965 kubelet[2156]: I0507 23:57:18.302942 2156 factory.go:221] Registration of the systemd container factory successfully May 7 23:57:18.303025 kubelet[2156]: I0507 23:57:18.303007 2156 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 7 23:57:18.303888 kubelet[2156]: I0507 23:57:18.303870 2156 factory.go:221] Registration of the containerd container factory successfully May 7 23:57:18.313911 kubelet[2156]: I0507 23:57:18.313893 2156 cpu_manager.go:221] "Starting CPU manager" policy="none" May 7 23:57:18.313911 kubelet[2156]: I0507 23:57:18.313906 2156 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 7 23:57:18.314003 kubelet[2156]: I0507 23:57:18.313921 2156 state_mem.go:36] "Initialized new in-memory state store" May 7 23:57:18.314765 kubelet[2156]: I0507 23:57:18.314583 2156 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 7 23:57:18.315728 kubelet[2156]: I0507 23:57:18.315702 2156 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 7 23:57:18.316039 kubelet[2156]: I0507 23:57:18.315756 2156 status_manager.go:227] "Starting to sync pod status with apiserver" May 7 23:57:18.316039 kubelet[2156]: I0507 23:57:18.315774 2156 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
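Every failure in this stretch is the same condition seen from different components: the Node and CSIDriver reflectors, the event post, and the lease controller all dial https://10.0.0.101:6443 and get connection refused, because the kube-apiserver static pod has not started yet; the kubelet simply retries (the lease controller at interval="200ms") until it comes up. The probe below, standard library only, reproduces the check that keeps failing:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The endpoint every retry in the log above is dialing.
	conn, err := net.DialTimeout("tcp", "10.0.0.101:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err) // "connection refused" until the static pod is up
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}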
May 7 23:57:18.316039 kubelet[2156]: I0507 23:57:18.315783 2156 kubelet.go:2388] "Starting kubelet main sync loop" May 7 23:57:18.316039 kubelet[2156]: E0507 23:57:18.315827 2156 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 7 23:57:18.316039 kubelet[2156]: I0507 23:57:18.316026 2156 policy_none.go:49] "None policy: Start" May 7 23:57:18.316039 kubelet[2156]: I0507 23:57:18.316041 2156 memory_manager.go:186] "Starting memorymanager" policy="None" May 7 23:57:18.316150 kubelet[2156]: I0507 23:57:18.316052 2156 state_mem.go:35] "Initializing new in-memory state store" May 7 23:57:18.317113 kubelet[2156]: W0507 23:57:18.317072 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused May 7 23:57:18.317306 kubelet[2156]: E0507 23:57:18.317127 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" May 7 23:57:18.322885 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 7 23:57:18.336739 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 7 23:57:18.339929 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 7 23:57:18.352160 kubelet[2156]: I0507 23:57:18.352005 2156 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 7 23:57:18.352255 kubelet[2156]: I0507 23:57:18.352194 2156 eviction_manager.go:189] "Eviction manager: starting control loop" May 7 23:57:18.352255 kubelet[2156]: I0507 23:57:18.352206 2156 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 7 23:57:18.352948 kubelet[2156]: E0507 23:57:18.352920 2156 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 7 23:57:18.352948 kubelet[2156]: E0507 23:57:18.352955 2156 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 7 23:57:18.352948 kubelet[2156]: I0507 23:57:18.353021 2156 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 7 23:57:18.423469 systemd[1]: Created slice kubepods-burstable-pod407ad8225ef7280e067b2a9bd62815ca.slice - libcontainer container kubepods-burstable-pod407ad8225ef7280e067b2a9bd62815ca.slice. May 7 23:57:18.446703 kubelet[2156]: E0507 23:57:18.446519 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:57:18.448771 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. 
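The kubepods-burstable-pod<uid>.slice units created here correspond to the control-plane static pods the kubelet found under /etc/kubernetes/manifests; the same UIDs (407ad8..., 5386fe..., 2980a8...) reappear below in the hostPath volume and sandbox entries, and the "No need to create a mirror pod" errors are expected while the API server is unreachable. A sketch of inspecting those manifests from Go, decoding each file into a Pod object (assumes kubeadm-style YAML manifests in that directory):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	files, err := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			log.Fatal(err)
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			log.Fatalf("%s: %v", f, err)
		}
		fmt.Printf("%s: pod %s/%s, %d volume(s)\n", filepath.Base(f), pod.Namespace, pod.Name, len(pod.Spec.Volumes))
	}
}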
May 7 23:57:18.450609 kubelet[2156]: E0507 23:57:18.450577 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:57:18.452862 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 7 23:57:18.453980 kubelet[2156]: I0507 23:57:18.453849 2156 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 23:57:18.454692 kubelet[2156]: E0507 23:57:18.454320 2156 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" May 7 23:57:18.455132 kubelet[2156]: E0507 23:57:18.454985 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:57:18.508497 kubelet[2156]: E0507 23:57:18.508393 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="400ms" May 7 23:57:18.602823 kubelet[2156]: I0507 23:57:18.602769 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407ad8225ef7280e067b2a9bd62815ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"407ad8225ef7280e067b2a9bd62815ca\") " pod="kube-system/kube-apiserver-localhost" May 7 23:57:18.602823 kubelet[2156]: I0507 23:57:18.602811 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407ad8225ef7280e067b2a9bd62815ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"407ad8225ef7280e067b2a9bd62815ca\") " pod="kube-system/kube-apiserver-localhost" May 7 23:57:18.602823 kubelet[2156]: I0507 23:57:18.602828 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:57:18.602970 kubelet[2156]: I0507 23:57:18.602845 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:57:18.602970 kubelet[2156]: I0507 23:57:18.602860 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:57:18.602970 kubelet[2156]: I0507 23:57:18.602877 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 7 23:57:18.602970 kubelet[2156]: I0507 23:57:18.602891 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407ad8225ef7280e067b2a9bd62815ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"407ad8225ef7280e067b2a9bd62815ca\") " pod="kube-system/kube-apiserver-localhost" May 7 23:57:18.602970 kubelet[2156]: I0507 23:57:18.602906 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:57:18.603090 kubelet[2156]: I0507 23:57:18.602921 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:57:18.656291 kubelet[2156]: I0507 23:57:18.656259 2156 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 23:57:18.656623 kubelet[2156]: E0507 23:57:18.656582 2156 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" May 7 23:57:18.724484 kubelet[2156]: E0507 23:57:18.724384 2156 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d640d302a5f96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-07 23:57:18.295056278 +0000 UTC m=+0.529218628,LastTimestamp:2025-05-07 23:57:18.295056278 +0000 UTC m=+0.529218628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 7 23:57:18.752319 kubelet[2156]: E0507 23:57:18.752262 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:18.752430 kubelet[2156]: E0507 23:57:18.752277 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:18.756043 kubelet[2156]: E0507 23:57:18.756004 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:18.756474 containerd[1467]: time="2025-05-07T23:57:18.756440385Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 7 23:57:18.756755 containerd[1467]: time="2025-05-07T23:57:18.756437501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:407ad8225ef7280e067b2a9bd62815ca,Namespace:kube-system,Attempt:0,}" May 7 23:57:18.756885 containerd[1467]: time="2025-05-07T23:57:18.756855914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 7 23:57:18.911435 kubelet[2156]: E0507 23:57:18.911398 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="800ms" May 7 23:57:19.058743 kubelet[2156]: I0507 23:57:19.058409 2156 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 23:57:19.058743 kubelet[2156]: E0507 23:57:19.058732 2156 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" May 7 23:57:19.346155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658428857.mount: Deactivated successfully. May 7 23:57:19.349103 containerd[1467]: time="2025-05-07T23:57:19.349067804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:57:19.350437 containerd[1467]: time="2025-05-07T23:57:19.350401313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 7 23:57:19.351779 containerd[1467]: time="2025-05-07T23:57:19.351740069Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:57:19.355967 containerd[1467]: time="2025-05-07T23:57:19.353417899Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:57:19.355967 containerd[1467]: time="2025-05-07T23:57:19.354873564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 7 23:57:19.356426 containerd[1467]: time="2025-05-07T23:57:19.356396356Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:57:19.357699 containerd[1467]: time="2025-05-07T23:57:19.357651164Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 7 23:57:19.358336 containerd[1467]: time="2025-05-07T23:57:19.358304641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:57:19.359925 containerd[1467]: time="2025-05-07T23:57:19.359901488Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with 
image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.383791ms" May 7 23:57:19.360596 containerd[1467]: time="2025-05-07T23:57:19.360529172Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.938728ms" May 7 23:57:19.364411 containerd[1467]: time="2025-05-07T23:57:19.364380227Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 607.469394ms" May 7 23:57:19.390761 kubelet[2156]: W0507 23:57:19.390630 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused May 7 23:57:19.390761 kubelet[2156]: E0507 23:57:19.390694 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" May 7 23:57:19.484491 containerd[1467]: time="2025-05-07T23:57:19.484172626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:57:19.484491 containerd[1467]: time="2025-05-07T23:57:19.484274997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:57:19.484491 containerd[1467]: time="2025-05-07T23:57:19.484291138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:19.484491 containerd[1467]: time="2025-05-07T23:57:19.484385940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:19.485015 containerd[1467]: time="2025-05-07T23:57:19.484939649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:57:19.485898 containerd[1467]: time="2025-05-07T23:57:19.485710397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:57:19.485898 containerd[1467]: time="2025-05-07T23:57:19.485733306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:19.485898 containerd[1467]: time="2025-05-07T23:57:19.485811206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:19.486656 containerd[1467]: time="2025-05-07T23:57:19.486545748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:57:19.486656 containerd[1467]: time="2025-05-07T23:57:19.486602460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:57:19.486656 containerd[1467]: time="2025-05-07T23:57:19.486613594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:19.486796 containerd[1467]: time="2025-05-07T23:57:19.486679639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:19.506548 systemd[1]: Started cri-containerd-b4e3534ef7dd5f682dedae14da3024f39cca4c1061717a5b686339fa85365673.scope - libcontainer container b4e3534ef7dd5f682dedae14da3024f39cca4c1061717a5b686339fa85365673. May 7 23:57:19.510981 systemd[1]: Started cri-containerd-a5b6c7594b07ded56238082ac9b25811d4551437d53c454c00bc32982e98d7b8.scope - libcontainer container a5b6c7594b07ded56238082ac9b25811d4551437d53c454c00bc32982e98d7b8. May 7 23:57:19.513436 systemd[1]: Started cri-containerd-fc4faff4ec8e2c90fe0b33d52ed45fadb97ad630145cec671f68f3509641ae04.scope - libcontainer container fc4faff4ec8e2c90fe0b33d52ed45fadb97ad630145cec671f68f3509641ae04. May 7 23:57:19.532155 kubelet[2156]: W0507 23:57:19.532050 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused May 7 23:57:19.532155 kubelet[2156]: E0507 23:57:19.532090 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" May 7 23:57:19.547669 containerd[1467]: time="2025-05-07T23:57:19.547321875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:407ad8225ef7280e067b2a9bd62815ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5b6c7594b07ded56238082ac9b25811d4551437d53c454c00bc32982e98d7b8\"" May 7 23:57:19.548645 containerd[1467]: time="2025-05-07T23:57:19.548613290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4e3534ef7dd5f682dedae14da3024f39cca4c1061717a5b686339fa85365673\"" May 7 23:57:19.550368 kubelet[2156]: E0507 23:57:19.549820 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:19.550368 kubelet[2156]: E0507 23:57:19.550046 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:19.553734 containerd[1467]: time="2025-05-07T23:57:19.553619305Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc4faff4ec8e2c90fe0b33d52ed45fadb97ad630145cec671f68f3509641ae04\"" May 7 23:57:19.554214 kubelet[2156]: E0507 23:57:19.554194 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:19.554488 containerd[1467]: time="2025-05-07T23:57:19.554451852Z" level=info msg="CreateContainer within sandbox \"a5b6c7594b07ded56238082ac9b25811d4551437d53c454c00bc32982e98d7b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 7 23:57:19.554949 containerd[1467]: time="2025-05-07T23:57:19.554913644Z" level=info msg="CreateContainer within sandbox \"b4e3534ef7dd5f682dedae14da3024f39cca4c1061717a5b686339fa85365673\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 7 23:57:19.556205 containerd[1467]: time="2025-05-07T23:57:19.556103969Z" level=info msg="CreateContainer within sandbox \"fc4faff4ec8e2c90fe0b33d52ed45fadb97ad630145cec671f68f3509641ae04\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 7 23:57:19.566138 kubelet[2156]: W0507 23:57:19.565082 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused May 7 23:57:19.566138 kubelet[2156]: E0507 23:57:19.565141 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" May 7 23:57:19.574429 containerd[1467]: time="2025-05-07T23:57:19.574398134Z" level=info msg="CreateContainer within sandbox \"b4e3534ef7dd5f682dedae14da3024f39cca4c1061717a5b686339fa85365673\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d985e4621c6a53b181766e5ea08645ec7caad09400bab7e4cbe04541f3270b93\"" May 7 23:57:19.574832 containerd[1467]: time="2025-05-07T23:57:19.574790717Z" level=info msg="CreateContainer within sandbox \"fc4faff4ec8e2c90fe0b33d52ed45fadb97ad630145cec671f68f3509641ae04\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"194828e971d748f46dedb01b948a3a393fa0e1ebe26380685c2094a267757a5c\"" May 7 23:57:19.578736 containerd[1467]: time="2025-05-07T23:57:19.578344632Z" level=info msg="StartContainer for \"d985e4621c6a53b181766e5ea08645ec7caad09400bab7e4cbe04541f3270b93\"" May 7 23:57:19.578964 containerd[1467]: time="2025-05-07T23:57:19.578941276Z" level=info msg="StartContainer for \"194828e971d748f46dedb01b948a3a393fa0e1ebe26380685c2094a267757a5c\"" May 7 23:57:19.581983 containerd[1467]: time="2025-05-07T23:57:19.581948971Z" level=info msg="CreateContainer within sandbox \"a5b6c7594b07ded56238082ac9b25811d4551437d53c454c00bc32982e98d7b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"498d600ed5b8d950b92cf0815c9a39bd296412a483b0cb31bc8c35b3a851f55e\"" May 7 23:57:19.582479 containerd[1467]: time="2025-05-07T23:57:19.582436756Z" level=info msg="StartContainer for \"498d600ed5b8d950b92cf0815c9a39bd296412a483b0cb31bc8c35b3a851f55e\"" May 7 
23:57:19.608591 systemd[1]: Started cri-containerd-194828e971d748f46dedb01b948a3a393fa0e1ebe26380685c2094a267757a5c.scope - libcontainer container 194828e971d748f46dedb01b948a3a393fa0e1ebe26380685c2094a267757a5c. May 7 23:57:19.610396 systemd[1]: Started cri-containerd-d985e4621c6a53b181766e5ea08645ec7caad09400bab7e4cbe04541f3270b93.scope - libcontainer container d985e4621c6a53b181766e5ea08645ec7caad09400bab7e4cbe04541f3270b93. May 7 23:57:19.614279 systemd[1]: Started cri-containerd-498d600ed5b8d950b92cf0815c9a39bd296412a483b0cb31bc8c35b3a851f55e.scope - libcontainer container 498d600ed5b8d950b92cf0815c9a39bd296412a483b0cb31bc8c35b3a851f55e. May 7 23:57:19.647410 containerd[1467]: time="2025-05-07T23:57:19.646662263Z" level=info msg="StartContainer for \"194828e971d748f46dedb01b948a3a393fa0e1ebe26380685c2094a267757a5c\" returns successfully" May 7 23:57:19.661302 containerd[1467]: time="2025-05-07T23:57:19.661266499Z" level=info msg="StartContainer for \"d985e4621c6a53b181766e5ea08645ec7caad09400bab7e4cbe04541f3270b93\" returns successfully" May 7 23:57:19.661528 containerd[1467]: time="2025-05-07T23:57:19.661452418Z" level=info msg="StartContainer for \"498d600ed5b8d950b92cf0815c9a39bd296412a483b0cb31bc8c35b3a851f55e\" returns successfully" May 7 23:57:19.706643 kubelet[2156]: W0507 23:57:19.706589 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused May 7 23:57:19.706800 kubelet[2156]: E0507 23:57:19.706779 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" May 7 23:57:19.712471 kubelet[2156]: E0507 23:57:19.712441 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="1.6s" May 7 23:57:19.862514 kubelet[2156]: I0507 23:57:19.860817 2156 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 23:57:20.347743 kubelet[2156]: E0507 23:57:20.347684 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:57:20.350903 kubelet[2156]: E0507 23:57:20.347817 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:20.350903 kubelet[2156]: E0507 23:57:20.349582 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:57:20.350903 kubelet[2156]: E0507 23:57:20.349673 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:20.351454 kubelet[2156]: E0507 23:57:20.351434 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:57:20.351740 
kubelet[2156]: E0507 23:57:20.351697 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:21.353230 kubelet[2156]: E0507 23:57:21.353012 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:57:21.353230 kubelet[2156]: E0507 23:57:21.353137 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:21.354021 kubelet[2156]: E0507 23:57:21.353868 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 7 23:57:21.354021 kubelet[2156]: E0507 23:57:21.353977 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:21.546289 kubelet[2156]: E0507 23:57:21.546244 2156 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 7 23:57:21.669999 kubelet[2156]: I0507 23:57:21.669386 2156 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 7 23:57:21.702737 kubelet[2156]: I0507 23:57:21.702708 2156 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 7 23:57:21.709971 kubelet[2156]: E0507 23:57:21.709932 2156 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 7 23:57:21.709971 kubelet[2156]: I0507 23:57:21.709958 2156 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 7 23:57:21.711614 kubelet[2156]: E0507 23:57:21.711568 2156 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 7 23:57:21.711614 kubelet[2156]: I0507 23:57:21.711591 2156 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 7 23:57:21.712920 kubelet[2156]: E0507 23:57:21.712887 2156 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 7 23:57:22.345090 kubelet[2156]: I0507 23:57:22.345041 2156 apiserver.go:52] "Watching apiserver" May 7 23:57:22.352483 kubelet[2156]: I0507 23:57:22.352465 2156 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 7 23:57:22.354552 kubelet[2156]: E0507 23:57:22.354503 2156 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 7 23:57:22.354829 kubelet[2156]: E0507 23:57:22.354651 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 
23:57:22.402577 kubelet[2156]: I0507 23:57:22.402538 2156 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 7 23:57:23.712319 systemd[1]: Reload requested from client PID 2437 ('systemctl') (unit session-5.scope)... May 7 23:57:23.712342 systemd[1]: Reloading... May 7 23:57:23.787555 zram_generator::config[2481]: No configuration found. May 7 23:57:23.868265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:57:23.954054 systemd[1]: Reloading finished in 240 ms. May 7 23:57:23.975312 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:57:23.985390 systemd[1]: kubelet.service: Deactivated successfully. May 7 23:57:23.985664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:57:23.985718 systemd[1]: kubelet.service: Consumed 905ms CPU time, 123.4M memory peak. May 7 23:57:23.995598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:57:24.109460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:57:24.112711 (kubelet)[2523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 7 23:57:24.161434 kubelet[2523]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 7 23:57:24.161434 kubelet[2523]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 7 23:57:24.161434 kubelet[2523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 7 23:57:24.161784 kubelet[2523]: I0507 23:57:24.161478 2523 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 7 23:57:24.169982 kubelet[2523]: I0507 23:57:24.168569 2523 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 7 23:57:24.169982 kubelet[2523]: I0507 23:57:24.168593 2523 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 7 23:57:24.169982 kubelet[2523]: I0507 23:57:24.168978 2523 server.go:954] "Client rotation is on, will bootstrap in background" May 7 23:57:24.170437 kubelet[2523]: I0507 23:57:24.170408 2523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 7 23:57:24.175382 kubelet[2523]: I0507 23:57:24.175336 2523 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 7 23:57:24.178336 kubelet[2523]: E0507 23:57:24.178277 2523 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 7 23:57:24.178336 kubelet[2523]: I0507 23:57:24.178312 2523 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." May 7 23:57:24.180880 kubelet[2523]: I0507 23:57:24.180832 2523 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 7 23:57:24.181044 kubelet[2523]: I0507 23:57:24.181008 2523 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 7 23:57:24.181187 kubelet[2523]: I0507 23:57:24.181031 2523 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 7 23:57:24.181271 kubelet[2523]: I0507 23:57:24.181190 2523 topology_manager.go:138] "Creating topology manager with none policy" May 7 23:57:24.181271 kubelet[2523]: I0507 23:57:24.181199 2523 container_manager_linux.go:304] "Creating device plugin manager" May 7 23:57:24.181271 kubelet[2523]: I0507 23:57:24.181241 2523 state_mem.go:36] "Initialized new in-memory state store" May 7 23:57:24.181387 kubelet[2523]: I0507 23:57:24.181375 2523 kubelet.go:446] "Attempting to sync node with API server" May 7 23:57:24.181444 kubelet[2523]: I0507 23:57:24.181391 2523 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 7 23:57:24.181444 kubelet[2523]: I0507 23:57:24.181407 2523 kubelet.go:352] "Adding apiserver pod source" May 7 23:57:24.181444 kubelet[2523]: I0507 23:57:24.181415 2523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 7 23:57:24.185154 kubelet[2523]: I0507 23:57:24.183398 2523 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 7 23:57:24.185154 kubelet[2523]: I0507 23:57:24.183868 2523 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 7 23:57:24.185154 kubelet[2523]: I0507 23:57:24.184288 2523 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 7 23:57:24.185154 kubelet[2523]: I0507 23:57:24.184311 2523 server.go:1287] "Started kubelet" May 7 23:57:24.185154 
kubelet[2523]: I0507 23:57:24.184443 2523 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 7 23:57:24.185154 kubelet[2523]: I0507 23:57:24.184559 2523 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 7 23:57:24.185154 kubelet[2523]: I0507 23:57:24.184761 2523 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 7 23:57:24.188628 kubelet[2523]: I0507 23:57:24.188596 2523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 7 23:57:24.193182 kubelet[2523]: E0507 23:57:24.193151 2523 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 7 23:57:24.193182 kubelet[2523]: I0507 23:57:24.193167 2523 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 7 23:57:24.194540 kubelet[2523]: I0507 23:57:24.193479 2523 volume_manager.go:297] "Starting Kubelet Volume Manager" May 7 23:57:24.194540 kubelet[2523]: E0507 23:57:24.193582 2523 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 7 23:57:24.194540 kubelet[2523]: I0507 23:57:24.194095 2523 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 7 23:57:24.194540 kubelet[2523]: I0507 23:57:24.194213 2523 reconciler.go:26] "Reconciler: start to sync state" May 7 23:57:24.201308 kubelet[2523]: I0507 23:57:24.201145 2523 server.go:490] "Adding debug handlers to kubelet server" May 7 23:57:24.202255 kubelet[2523]: I0507 23:57:24.202232 2523 factory.go:221] Registration of the containerd container factory successfully May 7 23:57:24.202255 kubelet[2523]: I0507 23:57:24.202251 2523 factory.go:221] Registration of the systemd container factory successfully May 7 23:57:24.202371 kubelet[2523]: I0507 23:57:24.202314 2523 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 7 23:57:24.207106 kubelet[2523]: I0507 23:57:24.207077 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 7 23:57:24.209271 kubelet[2523]: I0507 23:57:24.209250 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 7 23:57:24.209406 kubelet[2523]: I0507 23:57:24.209395 2523 status_manager.go:227] "Starting to sync pod status with apiserver" May 7 23:57:24.209484 kubelet[2523]: I0507 23:57:24.209472 2523 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
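The container_manager_linux NodeConfig dump above lists the kubelet's hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A minimal Go sketch of how such an Operator/Value pair can be evaluated against an observed signal follows; the struct and function below are illustrative assumptions, not the kubelet eviction manager's actual code.

package main

import "fmt"

// Threshold mirrors the Signal / quantity-or-percentage shape seen in the
// NodeConfig dump above (illustrative only).
type Threshold struct {
	Signal        string
	QuantityBytes int64   // set when the threshold is absolute, e.g. 100Mi
	Percentage    float64 // set when the threshold is a fraction of capacity, e.g. 0.1
}

// crossed reports whether the observed "available" value has fallen below the
// threshold, given the resource's total capacity (assumed evaluation logic).
func crossed(t Threshold, availableBytes, capacityBytes int64) bool {
	limit := t.QuantityBytes
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacityBytes))
	}
	return availableBytes < limit
}

func main() {
	mem := Threshold{Signal: "memory.available", QuantityBytes: 100 << 20} // 100Mi
	fmt.Println(crossed(mem, 80<<20, 2<<30))  // true: 80Mi available is below 100Mi
	fmt.Println(crossed(mem, 500<<20, 2<<30)) // false
}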
May 7 23:57:24.209528 kubelet[2523]: I0507 23:57:24.209520 2523 kubelet.go:2388] "Starting kubelet main sync loop" May 7 23:57:24.209817 kubelet[2523]: E0507 23:57:24.209604 2523 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 7 23:57:24.252540 kubelet[2523]: I0507 23:57:24.252447 2523 cpu_manager.go:221] "Starting CPU manager" policy="none" May 7 23:57:24.252540 kubelet[2523]: I0507 23:57:24.252467 2523 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 7 23:57:24.252540 kubelet[2523]: I0507 23:57:24.252487 2523 state_mem.go:36] "Initialized new in-memory state store" May 7 23:57:24.252693 kubelet[2523]: I0507 23:57:24.252629 2523 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 7 23:57:24.252693 kubelet[2523]: I0507 23:57:24.252641 2523 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 7 23:57:24.252693 kubelet[2523]: I0507 23:57:24.252659 2523 policy_none.go:49] "None policy: Start" May 7 23:57:24.252693 kubelet[2523]: I0507 23:57:24.252667 2523 memory_manager.go:186] "Starting memorymanager" policy="None" May 7 23:57:24.252693 kubelet[2523]: I0507 23:57:24.252676 2523 state_mem.go:35] "Initializing new in-memory state store" May 7 23:57:24.252814 kubelet[2523]: I0507 23:57:24.252770 2523 state_mem.go:75] "Updated machine memory state" May 7 23:57:24.256624 kubelet[2523]: I0507 23:57:24.256591 2523 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 7 23:57:24.256885 kubelet[2523]: I0507 23:57:24.256761 2523 eviction_manager.go:189] "Eviction manager: starting control loop" May 7 23:57:24.256885 kubelet[2523]: I0507 23:57:24.256778 2523 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 7 23:57:24.257100 kubelet[2523]: I0507 23:57:24.256966 2523 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 7 23:57:24.258200 kubelet[2523]: E0507 23:57:24.258173 2523 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 7 23:57:24.311788 kubelet[2523]: I0507 23:57:24.311412 2523 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 7 23:57:24.311788 kubelet[2523]: I0507 23:57:24.311525 2523 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 7 23:57:24.311920 kubelet[2523]: I0507 23:57:24.311854 2523 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 7 23:57:24.361772 kubelet[2523]: I0507 23:57:24.361749 2523 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 7 23:57:24.370539 kubelet[2523]: I0507 23:57:24.370507 2523 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 7 23:57:24.370610 kubelet[2523]: I0507 23:57:24.370589 2523 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 7 23:57:24.494908 kubelet[2523]: I0507 23:57:24.494857 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:57:24.494908 kubelet[2523]: I0507 23:57:24.494899 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:57:24.494908 kubelet[2523]: I0507 23:57:24.494916 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:57:24.495048 kubelet[2523]: I0507 23:57:24.494932 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:57:24.495048 kubelet[2523]: I0507 23:57:24.494952 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407ad8225ef7280e067b2a9bd62815ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"407ad8225ef7280e067b2a9bd62815ca\") " pod="kube-system/kube-apiserver-localhost" May 7 23:57:24.495048 kubelet[2523]: I0507 23:57:24.494965 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407ad8225ef7280e067b2a9bd62815ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"407ad8225ef7280e067b2a9bd62815ca\") " pod="kube-system/kube-apiserver-localhost" May 7 23:57:24.495048 kubelet[2523]: I0507 23:57:24.494980 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/407ad8225ef7280e067b2a9bd62815ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"407ad8225ef7280e067b2a9bd62815ca\") " pod="kube-system/kube-apiserver-localhost" May 7 23:57:24.495048 kubelet[2523]: I0507 23:57:24.494995 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:57:24.495146 kubelet[2523]: I0507 23:57:24.495017 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 7 23:57:24.618445 kubelet[2523]: E0507 23:57:24.617806 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:24.618445 kubelet[2523]: E0507 23:57:24.618221 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:24.618445 kubelet[2523]: E0507 23:57:24.618251 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:25.182534 kubelet[2523]: I0507 23:57:25.182320 2523 apiserver.go:52] "Watching apiserver" May 7 23:57:25.194325 kubelet[2523]: I0507 23:57:25.194278 2523 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 7 23:57:25.234345 kubelet[2523]: I0507 23:57:25.234258 2523 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 7 23:57:25.234589 kubelet[2523]: I0507 23:57:25.234501 2523 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 7 23:57:25.234766 kubelet[2523]: E0507 23:57:25.234749 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:25.245434 kubelet[2523]: E0507 23:57:25.245399 2523 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 7 23:57:25.245598 kubelet[2523]: E0507 23:57:25.245546 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:25.250165 kubelet[2523]: E0507 23:57:25.250105 2523 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 7 23:57:25.250295 kubelet[2523]: E0507 23:57:25.250275 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:25.268604 kubelet[2523]: I0507 23:57:25.268477 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.268461473 podStartE2EDuration="1.268461473s" podCreationTimestamp="2025-05-07 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:57:25.261887889 +0000 UTC m=+1.145730207" watchObservedRunningTime="2025-05-07 23:57:25.268461473 +0000 UTC m=+1.152303791" May 7 23:57:25.276746 kubelet[2523]: I0507 23:57:25.276691 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.276676001 podStartE2EDuration="1.276676001s" podCreationTimestamp="2025-05-07 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:57:25.268922698 +0000 UTC m=+1.152765016" watchObservedRunningTime="2025-05-07 23:57:25.276676001 +0000 UTC m=+1.160518279" May 7 23:57:25.276872 kubelet[2523]: I0507 23:57:25.276787 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.276783423 podStartE2EDuration="1.276783423s" podCreationTimestamp="2025-05-07 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:57:25.27658759 +0000 UTC m=+1.160429948" watchObservedRunningTime="2025-05-07 23:57:25.276783423 +0000 UTC m=+1.160625741" May 7 23:57:25.619649 sudo[1604]: pam_unix(sudo:session): session closed for user root May 7 23:57:25.621398 sshd[1603]: Connection closed by 10.0.0.1 port 37220 May 7 23:57:25.622647 sshd-session[1600]: pam_unix(sshd:session): session closed for user core May 7 23:57:25.625484 systemd[1]: sshd@4-10.0.0.101:22-10.0.0.1:37220.service: Deactivated successfully. May 7 23:57:25.629388 systemd[1]: session-5.scope: Deactivated successfully. May 7 23:57:25.629581 systemd[1]: session-5.scope: Consumed 5.400s CPU time, 226.5M memory peak. May 7 23:57:25.631386 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. May 7 23:57:25.632517 systemd-logind[1448]: Removed session 5. May 7 23:57:26.238439 kubelet[2523]: E0507 23:57:26.237814 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:26.238439 kubelet[2523]: E0507 23:57:26.238061 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:27.239499 kubelet[2523]: E0507 23:57:27.239471 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:29.301944 kubelet[2523]: I0507 23:57:29.301914 2523 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 7 23:57:29.302431 containerd[1467]: time="2025-05-07T23:57:29.302397543Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 7 23:57:29.302656 kubelet[2523]: I0507 23:57:29.302598 2523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 7 23:57:29.575470 kubelet[2523]: E0507 23:57:29.575293 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:57:30.169918 systemd[1]: Created slice kubepods-besteffort-pod46714aa0_2787_4b36_ba3b_5d90461ffc97.slice - libcontainer container kubepods-besteffort-pod46714aa0_2787_4b36_ba3b_5d90461ffc97.slice. May 7 23:57:30.193609 systemd[1]: Created slice kubepods-burstable-poddfdf3d5f_757b_421e_a102_f2c7e0602433.slice - libcontainer container kubepods-burstable-poddfdf3d5f_757b_421e_a102_f2c7e0602433.slice. May 7 23:57:30.232258 kubelet[2523]: I0507 23:57:30.232216 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfdf3d5f-757b-421e-a102-f2c7e0602433-xtables-lock\") pod \"kube-flannel-ds-6gdxx\" (UID: \"dfdf3d5f-757b-421e-a102-f2c7e0602433\") " pod="kube-flannel/kube-flannel-ds-6gdxx" May 7 23:57:30.232258 kubelet[2523]: I0507 23:57:30.232256 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/dfdf3d5f-757b-421e-a102-f2c7e0602433-cni-plugin\") pod \"kube-flannel-ds-6gdxx\" (UID: \"dfdf3d5f-757b-421e-a102-f2c7e0602433\") " pod="kube-flannel/kube-flannel-ds-6gdxx" May 7 23:57:30.232434 kubelet[2523]: I0507 23:57:30.232272 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2mqx\" (UniqueName: \"kubernetes.io/projected/dfdf3d5f-757b-421e-a102-f2c7e0602433-kube-api-access-n2mqx\") pod \"kube-flannel-ds-6gdxx\" (UID: \"dfdf3d5f-757b-421e-a102-f2c7e0602433\") " pod="kube-flannel/kube-flannel-ds-6gdxx" May 7 23:57:30.232434 kubelet[2523]: I0507 23:57:30.232293 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46714aa0-2787-4b36-ba3b-5d90461ffc97-kube-proxy\") pod \"kube-proxy-zfh58\" (UID: \"46714aa0-2787-4b36-ba3b-5d90461ffc97\") " pod="kube-system/kube-proxy-zfh58" May 7 23:57:30.232434 kubelet[2523]: I0507 23:57:30.232309 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95jg7\" (UniqueName: \"kubernetes.io/projected/46714aa0-2787-4b36-ba3b-5d90461ffc97-kube-api-access-95jg7\") pod \"kube-proxy-zfh58\" (UID: \"46714aa0-2787-4b36-ba3b-5d90461ffc97\") " pod="kube-system/kube-proxy-zfh58" May 7 23:57:30.232434 kubelet[2523]: I0507 23:57:30.232323 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dfdf3d5f-757b-421e-a102-f2c7e0602433-run\") pod \"kube-flannel-ds-6gdxx\" (UID: \"dfdf3d5f-757b-421e-a102-f2c7e0602433\") " pod="kube-flannel/kube-flannel-ds-6gdxx" May 7 23:57:30.232434 kubelet[2523]: I0507 23:57:30.232336 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/dfdf3d5f-757b-421e-a102-f2c7e0602433-cni\") pod \"kube-flannel-ds-6gdxx\" (UID: \"dfdf3d5f-757b-421e-a102-f2c7e0602433\") " pod="kube-flannel/kube-flannel-ds-6gdxx" May 7 23:57:30.232537 kubelet[2523]: I0507 23:57:30.232364 
2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/dfdf3d5f-757b-421e-a102-f2c7e0602433-flannel-cfg\") pod \"kube-flannel-ds-6gdxx\" (UID: \"dfdf3d5f-757b-421e-a102-f2c7e0602433\") " pod="kube-flannel/kube-flannel-ds-6gdxx" May 7 23:57:30.232537 kubelet[2523]: I0507 23:57:30.232382 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46714aa0-2787-4b36-ba3b-5d90461ffc97-xtables-lock\") pod \"kube-proxy-zfh58\" (UID: \"46714aa0-2787-4b36-ba3b-5d90461ffc97\") " pod="kube-system/kube-proxy-zfh58" May 7 23:57:30.232537 kubelet[2523]: I0507 23:57:30.232398 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46714aa0-2787-4b36-ba3b-5d90461ffc97-lib-modules\") pod \"kube-proxy-zfh58\" (UID: \"46714aa0-2787-4b36-ba3b-5d90461ffc97\") " pod="kube-system/kube-proxy-zfh58" May 7 23:57:30.489147 containerd[1467]: time="2025-05-07T23:57:30.489016089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfh58,Uid:46714aa0-2787-4b36-ba3b-5d90461ffc97,Namespace:kube-system,Attempt:0,}" May 7 23:57:30.498678 containerd[1467]: time="2025-05-07T23:57:30.498189202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6gdxx,Uid:dfdf3d5f-757b-421e-a102-f2c7e0602433,Namespace:kube-flannel,Attempt:0,}" May 7 23:57:30.513815 containerd[1467]: time="2025-05-07T23:57:30.513735818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:57:30.513973 containerd[1467]: time="2025-05-07T23:57:30.513826274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:57:30.513973 containerd[1467]: time="2025-05-07T23:57:30.513850049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:30.514958 containerd[1467]: time="2025-05-07T23:57:30.514537674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:30.521913 containerd[1467]: time="2025-05-07T23:57:30.521786598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:57:30.522516 containerd[1467]: time="2025-05-07T23:57:30.522434439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:57:30.522516 containerd[1467]: time="2025-05-07T23:57:30.522456332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:30.522653 containerd[1467]: time="2025-05-07T23:57:30.522545868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:30.531510 systemd[1]: Started cri-containerd-de1dc3ccf962961c2b65e92e5025a2f586bbc1d3f88cf5e62a6e873a5938824f.scope - libcontainer container de1dc3ccf962961c2b65e92e5025a2f586bbc1d3f88cf5e62a6e873a5938824f. 
May 7 23:57:30.534628 systemd[1]: Started cri-containerd-628e4d83bd35da077e51290b434858fcfe0e7c926e6afbb2b201c3fbec3c4497.scope - libcontainer container 628e4d83bd35da077e51290b434858fcfe0e7c926e6afbb2b201c3fbec3c4497. May 7 23:57:30.556323 containerd[1467]: time="2025-05-07T23:57:30.556271128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfh58,Uid:46714aa0-2787-4b36-ba3b-5d90461ffc97,Namespace:kube-system,Attempt:0,} returns sandbox id \"de1dc3ccf962961c2b65e92e5025a2f586bbc1d3f88cf5e62a6e873a5938824f\"" May 7 23:57:30.559948 containerd[1467]: time="2025-05-07T23:57:30.559821324Z" level=info msg="CreateContainer within sandbox \"de1dc3ccf962961c2b65e92e5025a2f586bbc1d3f88cf5e62a6e873a5938824f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 7 23:57:30.573183 containerd[1467]: time="2025-05-07T23:57:30.573065595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6gdxx,Uid:dfdf3d5f-757b-421e-a102-f2c7e0602433,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"628e4d83bd35da077e51290b434858fcfe0e7c926e6afbb2b201c3fbec3c4497\"" May 7 23:57:30.574736 containerd[1467]: time="2025-05-07T23:57:30.574605068Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 7 23:57:30.579681 containerd[1467]: time="2025-05-07T23:57:30.579648907Z" level=info msg="CreateContainer within sandbox \"de1dc3ccf962961c2b65e92e5025a2f586bbc1d3f88cf5e62a6e873a5938824f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fa3dd13ef5c9a8fa386ff7ae740abebc9a72a5cc636ecc02e1c22ee17d6771fe\"" May 7 23:57:30.580246 containerd[1467]: time="2025-05-07T23:57:30.580220701Z" level=info msg="StartContainer for \"fa3dd13ef5c9a8fa386ff7ae740abebc9a72a5cc636ecc02e1c22ee17d6771fe\"" May 7 23:57:30.609542 systemd[1]: Started cri-containerd-fa3dd13ef5c9a8fa386ff7ae740abebc9a72a5cc636ecc02e1c22ee17d6771fe.scope - libcontainer container fa3dd13ef5c9a8fa386ff7ae740abebc9a72a5cc636ecc02e1c22ee17d6771fe. May 7 23:57:30.634637 containerd[1467]: time="2025-05-07T23:57:30.634594653Z" level=info msg="StartContainer for \"fa3dd13ef5c9a8fa386ff7ae740abebc9a72a5cc636ecc02e1c22ee17d6771fe\" returns successfully" May 7 23:57:31.722131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042468886.mount: Deactivated successfully. 
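The lines above show the CRI ordering used for these pods: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, then StartContainer. A simplified Go sketch of that ordering follows; the interface below is a deliberately reduced stand-in for illustration, not the actual k8s.io/cri-api signatures.

package main

import "fmt"

// runtimeService is a simplified stand-in for a CRI runtime client.
type runtimeService interface {
	RunPodSandbox(podName string) (string, error)
	CreateContainer(sandboxID, name, image string) (string, error)
	StartContainer(containerID string) error
}

// fakeRuntime fabricates ids so the ordering can be demonstrated.
type fakeRuntime struct{ count int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.count++
	return fmt.Sprintf("sandbox-%d", f.count), nil
}

func (f *fakeRuntime) CreateContainer(sandboxID, name, image string) (string, error) {
	f.count++
	return fmt.Sprintf("ctr-%d", f.count), nil
}

func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

// startPod mirrors the ordering in the log: sandbox first, then create and
// start the container inside it.
func startPod(rs runtimeService, pod, name, image string) error {
	sb, err := rs.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("run sandbox: %w", err)
	}
	ctr, err := rs.CreateContainer(sb, name, image)
	if err != nil {
		return fmt.Errorf("create container: %w", err)
	}
	return rs.StartContainer(ctr)
}

func main() {
	_ = startPod(&fakeRuntime{}, "kube-proxy-zfh58", "kube-proxy", "registry.k8s.io/kube-proxy")
}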
May 7 23:57:31.748972 containerd[1467]: time="2025-05-07T23:57:31.748933528Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:31.749740 containerd[1467]: time="2025-05-07T23:57:31.749390475Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" May 7 23:57:31.751401 containerd[1467]: time="2025-05-07T23:57:31.750339711Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:31.754950 containerd[1467]: time="2025-05-07T23:57:31.753696475Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:31.754950 containerd[1467]: time="2025-05-07T23:57:31.754520518Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.17988255s" May 7 23:57:31.754950 containerd[1467]: time="2025-05-07T23:57:31.754544452Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 7 23:57:31.757151 containerd[1467]: time="2025-05-07T23:57:31.757115516Z" level=info msg="CreateContainer within sandbox \"628e4d83bd35da077e51290b434858fcfe0e7c926e6afbb2b201c3fbec3c4497\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 7 23:57:31.766388 containerd[1467]: time="2025-05-07T23:57:31.766317982Z" level=info msg="CreateContainer within sandbox \"628e4d83bd35da077e51290b434858fcfe0e7c926e6afbb2b201c3fbec3c4497\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"a9724fe9428a25c82e082a357760d65e94c6958deafcbb432e12bf8948634b15\"" May 7 23:57:31.766746 containerd[1467]: time="2025-05-07T23:57:31.766710492Z" level=info msg="StartContainer for \"a9724fe9428a25c82e082a357760d65e94c6958deafcbb432e12bf8948634b15\"" May 7 23:57:31.768526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217429645.mount: Deactivated successfully. May 7 23:57:31.793497 systemd[1]: Started cri-containerd-a9724fe9428a25c82e082a357760d65e94c6958deafcbb432e12bf8948634b15.scope - libcontainer container a9724fe9428a25c82e082a357760d65e94c6958deafcbb432e12bf8948634b15. May 7 23:57:31.818178 containerd[1467]: time="2025-05-07T23:57:31.818131227Z" level=info msg="StartContainer for \"a9724fe9428a25c82e082a357760d65e94c6958deafcbb432e12bf8948634b15\" returns successfully" May 7 23:57:31.818548 systemd[1]: cri-containerd-a9724fe9428a25c82e082a357760d65e94c6958deafcbb432e12bf8948634b15.scope: Deactivated successfully. 
May 7 23:57:31.853822 containerd[1467]: time="2025-05-07T23:57:31.853756958Z" level=info msg="shim disconnected" id=a9724fe9428a25c82e082a357760d65e94c6958deafcbb432e12bf8948634b15 namespace=k8s.io May 7 23:57:31.853822 containerd[1467]: time="2025-05-07T23:57:31.853819274Z" level=warning msg="cleaning up after shim disconnected" id=a9724fe9428a25c82e082a357760d65e94c6958deafcbb432e12bf8948634b15 namespace=k8s.io May 7 23:57:31.853822 containerd[1467]: time="2025-05-07T23:57:31.853829040Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:57:32.255391 containerd[1467]: time="2025-05-07T23:57:32.255346180Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 7 23:57:32.263806 kubelet[2523]: I0507 23:57:32.263727 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zfh58" podStartSLOduration=2.263710134 podStartE2EDuration="2.263710134s" podCreationTimestamp="2025-05-07 23:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:57:31.258829083 +0000 UTC m=+7.142671361" watchObservedRunningTime="2025-05-07 23:57:32.263710134 +0000 UTC m=+8.147552412" May 7 23:57:33.469596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775624822.mount: Deactivated successfully. May 7 23:57:34.915749 containerd[1467]: time="2025-05-07T23:57:34.915706274Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:34.916622 containerd[1467]: time="2025-05-07T23:57:34.916179429Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" May 7 23:57:34.917475 containerd[1467]: time="2025-05-07T23:57:34.917440296Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:34.923414 containerd[1467]: time="2025-05-07T23:57:34.922437702Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:57:34.923487 containerd[1467]: time="2025-05-07T23:57:34.923429876Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.667837841s" May 7 23:57:34.923487 containerd[1467]: time="2025-05-07T23:57:34.923472217Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 7 23:57:34.925806 containerd[1467]: time="2025-05-07T23:57:34.925779325Z" level=info msg="CreateContainer within sandbox \"628e4d83bd35da077e51290b434858fcfe0e7c926e6afbb2b201c3fbec3c4497\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 7 23:57:34.937182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189731215.mount: Deactivated successfully. 
May 7 23:57:34.938173 containerd[1467]: time="2025-05-07T23:57:34.937875462Z" level=info msg="CreateContainer within sandbox \"628e4d83bd35da077e51290b434858fcfe0e7c926e6afbb2b201c3fbec3c4497\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bbfe4efab4d7335150dd4645a4551c0f17f7bd191d64632dd4ae92079be04002\"" May 7 23:57:34.938547 containerd[1467]: time="2025-05-07T23:57:34.938482124Z" level=info msg="StartContainer for \"bbfe4efab4d7335150dd4645a4551c0f17f7bd191d64632dd4ae92079be04002\"" May 7 23:57:34.969504 systemd[1]: Started cri-containerd-bbfe4efab4d7335150dd4645a4551c0f17f7bd191d64632dd4ae92079be04002.scope - libcontainer container bbfe4efab4d7335150dd4645a4551c0f17f7bd191d64632dd4ae92079be04002. May 7 23:57:34.992029 containerd[1467]: time="2025-05-07T23:57:34.991989942Z" level=info msg="StartContainer for \"bbfe4efab4d7335150dd4645a4551c0f17f7bd191d64632dd4ae92079be04002\" returns successfully" May 7 23:57:34.993659 systemd[1]: cri-containerd-bbfe4efab4d7335150dd4645a4551c0f17f7bd191d64632dd4ae92079be04002.scope: Deactivated successfully. May 7 23:57:35.008863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbfe4efab4d7335150dd4645a4551c0f17f7bd191d64632dd4ae92079be04002-rootfs.mount: Deactivated successfully. May 7 23:57:35.053132 containerd[1467]: time="2025-05-07T23:57:35.053075774Z" level=info msg="shim disconnected" id=bbfe4efab4d7335150dd4645a4551c0f17f7bd191d64632dd4ae92079be04002 namespace=k8s.io May 7 23:57:35.053297 containerd[1467]: time="2025-05-07T23:57:35.053152491Z" level=warning msg="cleaning up after shim disconnected" id=bbfe4efab4d7335150dd4645a4551c0f17f7bd191d64632dd4ae92079be04002 namespace=k8s.io May 7 23:57:35.053297 containerd[1467]: time="2025-05-07T23:57:35.053162015Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:57:35.058726 kubelet[2523]: I0507 23:57:35.058689 2523 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 7 23:57:35.090554 systemd[1]: Created slice kubepods-burstable-pod0832e7f7_d5bd_4607_a2de_f745ec5ab07b.slice - libcontainer container kubepods-burstable-pod0832e7f7_d5bd_4607_a2de_f745ec5ab07b.slice. May 7 23:57:35.098606 systemd[1]: Created slice kubepods-burstable-pod4353b013_30db_493d_b378_33ba0ae5e23d.slice - libcontainer container kubepods-burstable-pod4353b013_30db_493d_b378_33ba0ae5e23d.slice. 
May 7 23:57:35.166281 kubelet[2523]: I0507 23:57:35.166120 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbsl6\" (UniqueName: \"kubernetes.io/projected/0832e7f7-d5bd-4607-a2de-f745ec5ab07b-kube-api-access-jbsl6\") pod \"coredns-668d6bf9bc-qvk5w\" (UID: \"0832e7f7-d5bd-4607-a2de-f745ec5ab07b\") " pod="kube-system/coredns-668d6bf9bc-qvk5w" May 7 23:57:35.166281 kubelet[2523]: I0507 23:57:35.166177 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0832e7f7-d5bd-4607-a2de-f745ec5ab07b-config-volume\") pod \"coredns-668d6bf9bc-qvk5w\" (UID: \"0832e7f7-d5bd-4607-a2de-f745ec5ab07b\") " pod="kube-system/coredns-668d6bf9bc-qvk5w" May 7 23:57:35.166281 kubelet[2523]: I0507 23:57:35.166196 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4353b013-30db-493d-b378-33ba0ae5e23d-config-volume\") pod \"coredns-668d6bf9bc-v9g6g\" (UID: \"4353b013-30db-493d-b378-33ba0ae5e23d\") " pod="kube-system/coredns-668d6bf9bc-v9g6g" May 7 23:57:35.166281 kubelet[2523]: I0507 23:57:35.166216 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42wgt\" (UniqueName: \"kubernetes.io/projected/4353b013-30db-493d-b378-33ba0ae5e23d-kube-api-access-42wgt\") pod \"coredns-668d6bf9bc-v9g6g\" (UID: \"4353b013-30db-493d-b378-33ba0ae5e23d\") " pod="kube-system/coredns-668d6bf9bc-v9g6g" May 7 23:57:35.262896 containerd[1467]: time="2025-05-07T23:57:35.262752813Z" level=info msg="CreateContainer within sandbox \"628e4d83bd35da077e51290b434858fcfe0e7c926e6afbb2b201c3fbec3c4497\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 7 23:57:35.281222 containerd[1467]: time="2025-05-07T23:57:35.281172462Z" level=info msg="CreateContainer within sandbox \"628e4d83bd35da077e51290b434858fcfe0e7c926e6afbb2b201c3fbec3c4497\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"ecc53f066957e9e8eb6d47f1a428efbc4e20554ea936dbc93aaf7bf86ae8e2a3\"" May 7 23:57:35.281830 containerd[1467]: time="2025-05-07T23:57:35.281805561Z" level=info msg="StartContainer for \"ecc53f066957e9e8eb6d47f1a428efbc4e20554ea936dbc93aaf7bf86ae8e2a3\"" May 7 23:57:35.318519 systemd[1]: Started cri-containerd-ecc53f066957e9e8eb6d47f1a428efbc4e20554ea936dbc93aaf7bf86ae8e2a3.scope - libcontainer container ecc53f066957e9e8eb6d47f1a428efbc4e20554ea936dbc93aaf7bf86ae8e2a3. 
May 7 23:57:35.340995 containerd[1467]: time="2025-05-07T23:57:35.340866944Z" level=info msg="StartContainer for \"ecc53f066957e9e8eb6d47f1a428efbc4e20554ea936dbc93aaf7bf86ae8e2a3\" returns successfully" May 7 23:57:35.396779 containerd[1467]: time="2025-05-07T23:57:35.396740063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvk5w,Uid:0832e7f7-d5bd-4607-a2de-f745ec5ab07b,Namespace:kube-system,Attempt:0,}" May 7 23:57:35.403743 containerd[1467]: time="2025-05-07T23:57:35.403458632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9g6g,Uid:4353b013-30db-493d-b378-33ba0ae5e23d,Namespace:kube-system,Attempt:0,}" May 7 23:57:35.468182 containerd[1467]: time="2025-05-07T23:57:35.468073876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvk5w,Uid:0832e7f7-d5bd-4607-a2de-f745ec5ab07b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df4019ea110629baff16d073717ed4c9026fbb78e694484e3adb1edc5df65842\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 7 23:57:35.469257 kubelet[2523]: E0507 23:57:35.468856 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4019ea110629baff16d073717ed4c9026fbb78e694484e3adb1edc5df65842\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 7 23:57:35.469257 kubelet[2523]: E0507 23:57:35.468935 2523 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4019ea110629baff16d073717ed4c9026fbb78e694484e3adb1edc5df65842\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-qvk5w" May 7 23:57:35.469257 kubelet[2523]: E0507 23:57:35.468957 2523 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4019ea110629baff16d073717ed4c9026fbb78e694484e3adb1edc5df65842\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-qvk5w" May 7 23:57:35.469538 kubelet[2523]: E0507 23:57:35.469494 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qvk5w_kube-system(0832e7f7-d5bd-4607-a2de-f745ec5ab07b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qvk5w_kube-system(0832e7f7-d5bd-4607-a2de-f745ec5ab07b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df4019ea110629baff16d073717ed4c9026fbb78e694484e3adb1edc5df65842\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-qvk5w" podUID="0832e7f7-d5bd-4607-a2de-f745ec5ab07b" May 7 23:57:35.470412 containerd[1467]: time="2025-05-07T23:57:35.470381244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9g6g,Uid:4353b013-30db-493d-b378-33ba0ae5e23d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f91da9d138871906b59ad561d627c0110da6b2b955c0ee9b7e42d89f3ecb035\": plugin 
type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 7 23:57:35.470752 kubelet[2523]: E0507 23:57:35.470725 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f91da9d138871906b59ad561d627c0110da6b2b955c0ee9b7e42d89f3ecb035\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 7 23:57:35.470815 kubelet[2523]: E0507 23:57:35.470764 2523 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f91da9d138871906b59ad561d627c0110da6b2b955c0ee9b7e42d89f3ecb035\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-v9g6g" May 7 23:57:35.470815 kubelet[2523]: E0507 23:57:35.470784 2523 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f91da9d138871906b59ad561d627c0110da6b2b955c0ee9b7e42d89f3ecb035\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-v9g6g" May 7 23:57:35.470869 kubelet[2523]: E0507 23:57:35.470814 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v9g6g_kube-system(4353b013-30db-493d-b378-33ba0ae5e23d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v9g6g_kube-system(4353b013-30db-493d-b378-33ba0ae5e23d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f91da9d138871906b59ad561d627c0110da6b2b955c0ee9b7e42d89f3ecb035\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-v9g6g" podUID="4353b013-30db-493d-b378-33ba0ae5e23d" May 7 23:57:36.274763 kubelet[2523]: I0507 23:57:36.274483 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-6gdxx" podStartSLOduration=1.9240494350000001 podStartE2EDuration="6.274466686s" podCreationTimestamp="2025-05-07 23:57:30 +0000 UTC" firstStartedPulling="2025-05-07 23:57:30.574186249 +0000 UTC m=+6.458028527" lastFinishedPulling="2025-05-07 23:57:34.92460346 +0000 UTC m=+10.808445778" observedRunningTime="2025-05-07 23:57:36.274208811 +0000 UTC m=+12.158051209" watchObservedRunningTime="2025-05-07 23:57:36.274466686 +0000 UTC m=+12.158309004" May 7 23:57:36.440736 systemd-networkd[1384]: flannel.1: Link UP May 7 23:57:36.440744 systemd-networkd[1384]: flannel.1: Gained carrier May 7 23:57:37.648480 systemd-networkd[1384]: flannel.1: Gained IPv6LL May 7 23:57:39.076407 update_engine[1450]: I20250507 23:57:39.076239 1450 update_attempter.cc:509] Updating boot flags... 
May 7 23:57:39.096442 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3182) May 7 23:57:39.133377 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3185) May 7 23:57:39.159402 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3185) May 7 23:57:47.211527 containerd[1467]: time="2025-05-07T23:57:47.211472618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvk5w,Uid:0832e7f7-d5bd-4607-a2de-f745ec5ab07b,Namespace:kube-system,Attempt:0,}" May 7 23:57:47.238240 systemd-networkd[1384]: cni0: Link UP May 7 23:57:47.238247 systemd-networkd[1384]: cni0: Gained carrier May 7 23:57:47.241201 systemd-networkd[1384]: cni0: Lost carrier May 7 23:57:47.243560 systemd-networkd[1384]: veth8bd916c9: Link UP May 7 23:57:47.247627 kernel: cni0: port 1(veth8bd916c9) entered blocking state May 7 23:57:47.247711 kernel: cni0: port 1(veth8bd916c9) entered disabled state May 7 23:57:47.247729 kernel: veth8bd916c9: entered allmulticast mode May 7 23:57:47.249384 kernel: veth8bd916c9: entered promiscuous mode May 7 23:57:47.250952 kernel: cni0: port 1(veth8bd916c9) entered blocking state May 7 23:57:47.251006 kernel: cni0: port 1(veth8bd916c9) entered forwarding state May 7 23:57:47.253377 kernel: cni0: port 1(veth8bd916c9) entered disabled state May 7 23:57:47.264070 kernel: cni0: port 1(veth8bd916c9) entered blocking state May 7 23:57:47.264820 kernel: cni0: port 1(veth8bd916c9) entered forwarding state May 7 23:57:47.264043 systemd-networkd[1384]: veth8bd916c9: Gained carrier May 7 23:57:47.264283 systemd-networkd[1384]: cni0: Gained carrier May 7 23:57:47.266259 containerd[1467]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"} May 7 23:57:47.266259 containerd[1467]: delegateAdd: netconf sent to delegate plugin: May 7 23:57:47.286704 containerd[1467]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-07T23:57:47.286272082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:57:47.286841 containerd[1467]: time="2025-05-07T23:57:47.286715719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:57:47.286841 containerd[1467]: time="2025-05-07T23:57:47.286733404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:47.286841 containerd[1467]: time="2025-05-07T23:57:47.286814305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
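The earlier RunPodSandbox failures and the successful retry above both hinge on /run/flannel/subnet.env: the flannel CNI plugin reads that key=value file (written by flanneld once it is running) and uses it to build the delegated bridge config shown in the log. A minimal Go sketch of that style of parsing follows; it is an illustrative helper, not the actual flannel plugin source, and the example file contents in the comment are inferred from the logged delegate config (192.168.0.0/24 node subnet, 192.168.0.0/17 network route, MTU 1450, ipMasq false) rather than read from the node.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// loadSubnetEnv reads a flannel-style key=value env file such as
// /run/flannel/subnet.env. Illustrative only, not the real flannel code.
func loadSubnetEnv(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		// The "no such file or directory" case seen in the sandbox errors above.
		return nil, err
	}
	defer f.Close()

	env := make(map[string]string)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env, sc.Err()
}

func main() {
	// Hypothetical contents once flanneld has written the file; the real
	// values on this node may differ:
	//
	//   FLANNEL_NETWORK=192.168.0.0/17
	//   FLANNEL_SUBNET=192.168.0.1/24
	//   FLANNEL_MTU=1450
	//   FLANNEL_IPMASQ=false
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Fprintln(os.Stderr, "loadSubnetEnv failed:", err)
		os.Exit(1)
	}
	fmt.Printf("node subnet %s, MTU %s\n", env["FLANNEL_SUBNET"], env["FLANNEL_MTU"])
}
```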
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:47.301523 systemd[1]: Started cri-containerd-5075702d615b77c85d95c1b5406ba3eb27b5523874a118fb96a797b5f39d7411.scope - libcontainer container 5075702d615b77c85d95c1b5406ba3eb27b5523874a118fb96a797b5f39d7411. May 7 23:57:47.312405 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 7 23:57:47.329513 containerd[1467]: time="2025-05-07T23:57:47.329462763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvk5w,Uid:0832e7f7-d5bd-4607-a2de-f745ec5ab07b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5075702d615b77c85d95c1b5406ba3eb27b5523874a118fb96a797b5f39d7411\"" May 7 23:57:47.332832 containerd[1467]: time="2025-05-07T23:57:47.332796763Z" level=info msg="CreateContainer within sandbox \"5075702d615b77c85d95c1b5406ba3eb27b5523874a118fb96a797b5f39d7411\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 7 23:57:47.396813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3841923507.mount: Deactivated successfully. May 7 23:57:47.415180 containerd[1467]: time="2025-05-07T23:57:47.415132176Z" level=info msg="CreateContainer within sandbox \"5075702d615b77c85d95c1b5406ba3eb27b5523874a118fb96a797b5f39d7411\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec220c9b06f1ea41192aca4de976717e85d12ed4d8382f6762bf9b634eadcd30\"" May 7 23:57:47.415816 containerd[1467]: time="2025-05-07T23:57:47.415783708Z" level=info msg="StartContainer for \"ec220c9b06f1ea41192aca4de976717e85d12ed4d8382f6762bf9b634eadcd30\"" May 7 23:57:47.437518 systemd[1]: Started cri-containerd-ec220c9b06f1ea41192aca4de976717e85d12ed4d8382f6762bf9b634eadcd30.scope - libcontainer container ec220c9b06f1ea41192aca4de976717e85d12ed4d8382f6762bf9b634eadcd30. 
May 7 23:57:47.459737 containerd[1467]: time="2025-05-07T23:57:47.459642485Z" level=info msg="StartContainer for \"ec220c9b06f1ea41192aca4de976717e85d12ed4d8382f6762bf9b634eadcd30\" returns successfully" May 7 23:57:48.295935 kubelet[2523]: I0507 23:57:48.295816 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qvk5w" podStartSLOduration=18.295800698 podStartE2EDuration="18.295800698s" podCreationTimestamp="2025-05-07 23:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:57:48.295297571 +0000 UTC m=+24.179139889" watchObservedRunningTime="2025-05-07 23:57:48.295800698 +0000 UTC m=+24.179643016" May 7 23:57:48.400641 systemd-networkd[1384]: cni0: Gained IPv6LL May 7 23:57:48.720559 systemd-networkd[1384]: veth8bd916c9: Gained IPv6LL May 7 23:57:49.210998 containerd[1467]: time="2025-05-07T23:57:49.210956823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9g6g,Uid:4353b013-30db-493d-b378-33ba0ae5e23d,Namespace:kube-system,Attempt:0,}" May 7 23:57:49.241377 systemd-networkd[1384]: vethe127c7b3: Link UP May 7 23:57:49.244732 kernel: cni0: port 2(vethe127c7b3) entered blocking state May 7 23:57:49.244795 kernel: cni0: port 2(vethe127c7b3) entered disabled state May 7 23:57:49.244812 kernel: vethe127c7b3: entered allmulticast mode May 7 23:57:49.248380 kernel: vethe127c7b3: entered promiscuous mode May 7 23:57:49.264147 kernel: cni0: port 2(vethe127c7b3) entered blocking state May 7 23:57:49.264235 kernel: cni0: port 2(vethe127c7b3) entered forwarding state May 7 23:57:49.264281 systemd-networkd[1384]: vethe127c7b3: Gained carrier May 7 23:57:49.267903 containerd[1467]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} May 7 23:57:49.267903 containerd[1467]: delegateAdd: netconf sent to delegate plugin: May 7 23:57:49.292440 containerd[1467]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-07T23:57:49.292261094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:57:49.292440 containerd[1467]: time="2025-05-07T23:57:49.292323909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:57:49.292440 containerd[1467]: time="2025-05-07T23:57:49.292347955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:49.292929 containerd[1467]: time="2025-05-07T23:57:49.292828311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:57:49.319608 systemd[1]: Started cri-containerd-efd7daf46d8971604aef4dd9626f45f874ab6462951963f7cefbe60c0c213d0d.scope - libcontainer container efd7daf46d8971604aef4dd9626f45f874ab6462951963f7cefbe60c0c213d0d. May 7 23:57:49.343738 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 7 23:57:49.359501 containerd[1467]: time="2025-05-07T23:57:49.359453824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9g6g,Uid:4353b013-30db-493d-b378-33ba0ae5e23d,Namespace:kube-system,Attempt:0,} returns sandbox id \"efd7daf46d8971604aef4dd9626f45f874ab6462951963f7cefbe60c0c213d0d\"" May 7 23:57:49.378297 containerd[1467]: time="2025-05-07T23:57:49.378158999Z" level=info msg="CreateContainer within sandbox \"efd7daf46d8971604aef4dd9626f45f874ab6462951963f7cefbe60c0c213d0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 7 23:57:49.388813 containerd[1467]: time="2025-05-07T23:57:49.388782334Z" level=info msg="CreateContainer within sandbox \"efd7daf46d8971604aef4dd9626f45f874ab6462951963f7cefbe60c0c213d0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"573588b9b3291a3def101a5057116aa98360768c424583499d6e58b66c3883d5\"" May 7 23:57:49.390101 containerd[1467]: time="2025-05-07T23:57:49.389392802Z" level=info msg="StartContainer for \"573588b9b3291a3def101a5057116aa98360768c424583499d6e58b66c3883d5\"" May 7 23:57:49.421489 systemd[1]: Started cri-containerd-573588b9b3291a3def101a5057116aa98360768c424583499d6e58b66c3883d5.scope - libcontainer container 573588b9b3291a3def101a5057116aa98360768c424583499d6e58b66c3883d5. May 7 23:57:49.442457 containerd[1467]: time="2025-05-07T23:57:49.442417377Z" level=info msg="StartContainer for \"573588b9b3291a3def101a5057116aa98360768c424583499d6e58b66c3883d5\" returns successfully" May 7 23:57:49.738443 systemd[1]: Started sshd@5-10.0.0.101:22-10.0.0.1:34482.service - OpenSSH per-connection server daemon (10.0.0.1:34482). May 7 23:57:49.785754 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 34482 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:57:49.787289 sshd-session[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:57:49.803473 systemd-logind[1448]: New session 6 of user core. May 7 23:57:49.814513 systemd[1]: Started session-6.scope - Session 6 of User core. May 7 23:57:49.939388 sshd[3478]: Connection closed by 10.0.0.1 port 34482 May 7 23:57:49.940584 sshd-session[3476]: pam_unix(sshd:session): session closed for user core May 7 23:57:49.943789 systemd[1]: sshd@5-10.0.0.101:22-10.0.0.1:34482.service: Deactivated successfully. May 7 23:57:49.945751 systemd[1]: session-6.scope: Deactivated successfully. May 7 23:57:49.946598 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. May 7 23:57:49.947583 systemd-logind[1448]: Removed session 6. 
May 7 23:57:50.313066 kubelet[2523]: I0507 23:57:50.312837 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v9g6g" podStartSLOduration=20.312819504 podStartE2EDuration="20.312819504s" podCreationTimestamp="2025-05-07 23:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:57:50.312692595 +0000 UTC m=+26.196534913" watchObservedRunningTime="2025-05-07 23:57:50.312819504 +0000 UTC m=+26.196661822" May 7 23:57:50.768498 systemd-networkd[1384]: vethe127c7b3: Gained IPv6LL May 7 23:57:54.952158 systemd[1]: Started sshd@6-10.0.0.101:22-10.0.0.1:44730.service - OpenSSH per-connection server daemon (10.0.0.1:44730). May 7 23:57:54.992834 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 44730 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:57:54.994056 sshd-session[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:57:54.997741 systemd-logind[1448]: New session 7 of user core. May 7 23:57:55.007518 systemd[1]: Started session-7.scope - Session 7 of User core. May 7 23:57:55.117267 sshd[3522]: Connection closed by 10.0.0.1 port 44730 May 7 23:57:55.117627 sshd-session[3520]: pam_unix(sshd:session): session closed for user core May 7 23:57:55.120941 systemd[1]: sshd@6-10.0.0.101:22-10.0.0.1:44730.service: Deactivated successfully. May 7 23:57:55.124089 systemd[1]: session-7.scope: Deactivated successfully. May 7 23:57:55.124821 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. May 7 23:57:55.126723 systemd-logind[1448]: Removed session 7. May 7 23:58:00.133949 systemd[1]: Started sshd@7-10.0.0.101:22-10.0.0.1:44744.service - OpenSSH per-connection server daemon (10.0.0.1:44744). May 7 23:58:00.174438 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 44744 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:00.175141 sshd-session[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:00.181417 systemd-logind[1448]: New session 8 of user core. May 7 23:58:00.187568 systemd[1]: Started session-8.scope - Session 8 of User core. May 7 23:58:00.296947 sshd[3559]: Connection closed by 10.0.0.1 port 44744 May 7 23:58:00.297257 sshd-session[3557]: pam_unix(sshd:session): session closed for user core May 7 23:58:00.312707 systemd[1]: sshd@7-10.0.0.101:22-10.0.0.1:44744.service: Deactivated successfully. May 7 23:58:00.314282 systemd[1]: session-8.scope: Deactivated successfully. May 7 23:58:00.315681 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. May 7 23:58:00.316741 systemd[1]: Started sshd@8-10.0.0.101:22-10.0.0.1:44758.service - OpenSSH per-connection server daemon (10.0.0.1:44758). May 7 23:58:00.318065 systemd-logind[1448]: Removed session 8. May 7 23:58:00.357257 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 44758 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:00.358022 sshd-session[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:00.362781 systemd-logind[1448]: New session 9 of user core. May 7 23:58:00.371482 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 7 23:58:00.511857 sshd[3576]: Connection closed by 10.0.0.1 port 44758 May 7 23:58:00.512221 sshd-session[3573]: pam_unix(sshd:session): session closed for user core May 7 23:58:00.527243 systemd[1]: sshd@8-10.0.0.101:22-10.0.0.1:44758.service: Deactivated successfully. May 7 23:58:00.530487 systemd[1]: session-9.scope: Deactivated successfully. May 7 23:58:00.531394 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. May 7 23:58:00.538637 systemd[1]: Started sshd@9-10.0.0.101:22-10.0.0.1:44764.service - OpenSSH per-connection server daemon (10.0.0.1:44764). May 7 23:58:00.542063 systemd-logind[1448]: Removed session 9. May 7 23:58:00.577740 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 44764 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:00.578880 sshd-session[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:00.582563 systemd-logind[1448]: New session 10 of user core. May 7 23:58:00.592488 systemd[1]: Started session-10.scope - Session 10 of User core. May 7 23:58:00.705816 sshd[3589]: Connection closed by 10.0.0.1 port 44764 May 7 23:58:00.706332 sshd-session[3586]: pam_unix(sshd:session): session closed for user core May 7 23:58:00.709488 systemd[1]: sshd@9-10.0.0.101:22-10.0.0.1:44764.service: Deactivated successfully. May 7 23:58:00.711235 systemd[1]: session-10.scope: Deactivated successfully. May 7 23:58:00.711884 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. May 7 23:58:00.712718 systemd-logind[1448]: Removed session 10. May 7 23:58:05.718556 systemd[1]: Started sshd@10-10.0.0.101:22-10.0.0.1:60474.service - OpenSSH per-connection server daemon (10.0.0.1:60474). May 7 23:58:05.758790 sshd[3626]: Accepted publickey for core from 10.0.0.1 port 60474 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:05.759807 sshd-session[3626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:05.763212 systemd-logind[1448]: New session 11 of user core. May 7 23:58:05.780497 systemd[1]: Started session-11.scope - Session 11 of User core. May 7 23:58:05.889382 sshd[3628]: Connection closed by 10.0.0.1 port 60474 May 7 23:58:05.889900 sshd-session[3626]: pam_unix(sshd:session): session closed for user core May 7 23:58:05.904606 systemd[1]: sshd@10-10.0.0.101:22-10.0.0.1:60474.service: Deactivated successfully. May 7 23:58:05.906149 systemd[1]: session-11.scope: Deactivated successfully. May 7 23:58:05.906800 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. May 7 23:58:05.916713 systemd[1]: Started sshd@11-10.0.0.101:22-10.0.0.1:60490.service - OpenSSH per-connection server daemon (10.0.0.1:60490). May 7 23:58:05.920209 systemd-logind[1448]: Removed session 11. May 7 23:58:05.954662 sshd[3640]: Accepted publickey for core from 10.0.0.1 port 60490 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:05.955675 sshd-session[3640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:05.959065 systemd-logind[1448]: New session 12 of user core. May 7 23:58:05.973537 systemd[1]: Started session-12.scope - Session 12 of User core. May 7 23:58:06.156385 sshd[3643]: Connection closed by 10.0.0.1 port 60490 May 7 23:58:06.156888 sshd-session[3640]: pam_unix(sshd:session): session closed for user core May 7 23:58:06.176883 systemd[1]: sshd@11-10.0.0.101:22-10.0.0.1:60490.service: Deactivated successfully. 
May 7 23:58:06.180068 systemd[1]: session-12.scope: Deactivated successfully. May 7 23:58:06.181057 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. May 7 23:58:06.191736 systemd[1]: Started sshd@12-10.0.0.101:22-10.0.0.1:60502.service - OpenSSH per-connection server daemon (10.0.0.1:60502). May 7 23:58:06.192784 systemd-logind[1448]: Removed session 12. May 7 23:58:06.231588 sshd[3653]: Accepted publickey for core from 10.0.0.1 port 60502 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:06.232792 sshd-session[3653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:06.236491 systemd-logind[1448]: New session 13 of user core. May 7 23:58:06.246558 systemd[1]: Started session-13.scope - Session 13 of User core. May 7 23:58:06.926822 sshd[3656]: Connection closed by 10.0.0.1 port 60502 May 7 23:58:06.927243 sshd-session[3653]: pam_unix(sshd:session): session closed for user core May 7 23:58:06.938677 systemd[1]: sshd@12-10.0.0.101:22-10.0.0.1:60502.service: Deactivated successfully. May 7 23:58:06.941119 systemd[1]: session-13.scope: Deactivated successfully. May 7 23:58:06.944142 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. May 7 23:58:06.952667 systemd[1]: Started sshd@13-10.0.0.101:22-10.0.0.1:60512.service - OpenSSH per-connection server daemon (10.0.0.1:60512). May 7 23:58:06.954788 systemd-logind[1448]: Removed session 13. May 7 23:58:06.990261 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 60512 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:06.991415 sshd-session[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:06.995407 systemd-logind[1448]: New session 14 of user core. May 7 23:58:07.003566 systemd[1]: Started session-14.scope - Session 14 of User core. May 7 23:58:07.213640 sshd[3698]: Connection closed by 10.0.0.1 port 60512 May 7 23:58:07.214230 sshd-session[3695]: pam_unix(sshd:session): session closed for user core May 7 23:58:07.225825 systemd[1]: sshd@13-10.0.0.101:22-10.0.0.1:60512.service: Deactivated successfully. May 7 23:58:07.227292 systemd[1]: session-14.scope: Deactivated successfully. May 7 23:58:07.229674 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. May 7 23:58:07.234805 systemd[1]: Started sshd@14-10.0.0.101:22-10.0.0.1:60524.service - OpenSSH per-connection server daemon (10.0.0.1:60524). May 7 23:58:07.237737 systemd-logind[1448]: Removed session 14. May 7 23:58:07.271942 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 60524 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:07.273127 sshd-session[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:07.278238 systemd-logind[1448]: New session 15 of user core. May 7 23:58:07.287549 systemd[1]: Started session-15.scope - Session 15 of User core. May 7 23:58:07.392733 sshd[3712]: Connection closed by 10.0.0.1 port 60524 May 7 23:58:07.393462 sshd-session[3709]: pam_unix(sshd:session): session closed for user core May 7 23:58:07.396709 systemd[1]: sshd@14-10.0.0.101:22-10.0.0.1:60524.service: Deactivated successfully. May 7 23:58:07.398663 systemd[1]: session-15.scope: Deactivated successfully. May 7 23:58:07.399263 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. May 7 23:58:07.400048 systemd-logind[1448]: Removed session 15. 
May 7 23:58:12.408601 systemd[1]: Started sshd@15-10.0.0.101:22-10.0.0.1:60530.service - OpenSSH per-connection server daemon (10.0.0.1:60530). May 7 23:58:12.453839 sshd[3748]: Accepted publickey for core from 10.0.0.1 port 60530 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:12.455199 sshd-session[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:12.458793 systemd-logind[1448]: New session 16 of user core. May 7 23:58:12.463503 systemd[1]: Started session-16.scope - Session 16 of User core. May 7 23:58:12.571646 sshd[3750]: Connection closed by 10.0.0.1 port 60530 May 7 23:58:12.571503 sshd-session[3748]: pam_unix(sshd:session): session closed for user core May 7 23:58:12.574970 systemd[1]: sshd@15-10.0.0.101:22-10.0.0.1:60530.service: Deactivated successfully. May 7 23:58:12.576710 systemd[1]: session-16.scope: Deactivated successfully. May 7 23:58:12.577287 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. May 7 23:58:12.578025 systemd-logind[1448]: Removed session 16. May 7 23:58:17.583508 systemd[1]: Started sshd@16-10.0.0.101:22-10.0.0.1:46500.service - OpenSSH per-connection server daemon (10.0.0.1:46500). May 7 23:58:17.623496 sshd[3785]: Accepted publickey for core from 10.0.0.1 port 46500 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:17.624512 sshd-session[3785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:17.627862 systemd-logind[1448]: New session 17 of user core. May 7 23:58:17.644494 systemd[1]: Started session-17.scope - Session 17 of User core. May 7 23:58:17.750965 sshd[3787]: Connection closed by 10.0.0.1 port 46500 May 7 23:58:17.751277 sshd-session[3785]: pam_unix(sshd:session): session closed for user core May 7 23:58:17.754548 systemd[1]: sshd@16-10.0.0.101:22-10.0.0.1:46500.service: Deactivated successfully. May 7 23:58:17.756480 systemd[1]: session-17.scope: Deactivated successfully. May 7 23:58:17.757224 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. May 7 23:58:17.757975 systemd-logind[1448]: Removed session 17. May 7 23:58:22.765509 systemd[1]: Started sshd@17-10.0.0.101:22-10.0.0.1:40440.service - OpenSSH per-connection server daemon (10.0.0.1:40440). May 7 23:58:22.809390 sshd[3821]: Accepted publickey for core from 10.0.0.1 port 40440 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:58:22.810425 sshd-session[3821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:58:22.813885 systemd-logind[1448]: New session 18 of user core. May 7 23:58:22.826604 systemd[1]: Started session-18.scope - Session 18 of User core. May 7 23:58:22.935404 sshd[3823]: Connection closed by 10.0.0.1 port 40440 May 7 23:58:22.935073 sshd-session[3821]: pam_unix(sshd:session): session closed for user core May 7 23:58:22.938232 systemd[1]: sshd@17-10.0.0.101:22-10.0.0.1:40440.service: Deactivated successfully. May 7 23:58:22.940435 systemd[1]: session-18.scope: Deactivated successfully. May 7 23:58:22.941123 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. May 7 23:58:22.941913 systemd-logind[1448]: Removed session 18.