May 8 00:33:03.913610 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 8 00:33:03.913632 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 7 22:57:52 -00 2025 May 8 00:33:03.913642 kernel: KASLR enabled May 8 00:33:03.913647 kernel: efi: EFI v2.7 by EDK II May 8 00:33:03.913653 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 May 8 00:33:03.913659 kernel: random: crng init done May 8 00:33:03.913666 kernel: ACPI: Early table checksum verification disabled May 8 00:33:03.913671 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) May 8 00:33:03.913677 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) May 8 00:33:03.913685 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:33:03.913691 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:33:03.913697 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:33:03.913703 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:33:03.913709 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:33:03.913716 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:33:03.913724 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:33:03.913731 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:33:03.913737 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:33:03.913743 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 8 00:33:03.913749 kernel: NUMA: Failed to initialise from firmware May 8 00:33:03.913756 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 8 00:33:03.913762 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] May 8 00:33:03.913768 kernel: Zone ranges: May 8 00:33:03.913774 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 8 00:33:03.913780 kernel: DMA32 empty May 8 00:33:03.913788 kernel: Normal empty May 8 00:33:03.913794 kernel: Movable zone start for each node May 8 00:33:03.913800 kernel: Early memory node ranges May 8 00:33:03.913806 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 8 00:33:03.913812 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 8 00:33:03.913819 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 8 00:33:03.913825 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 8 00:33:03.913831 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 8 00:33:03.913837 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 8 00:33:03.913844 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 8 00:33:03.913850 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 8 00:33:03.913856 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 8 00:33:03.913863 kernel: psci: probing for conduit method from ACPI. May 8 00:33:03.913870 kernel: psci: PSCIv1.1 detected in firmware. 
May 8 00:33:03.913876 kernel: psci: Using standard PSCI v0.2 function IDs May 8 00:33:03.913885 kernel: psci: Trusted OS migration not required May 8 00:33:03.913891 kernel: psci: SMC Calling Convention v1.1 May 8 00:33:03.913898 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 8 00:33:03.913907 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 May 8 00:33:03.913913 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 May 8 00:33:03.913920 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 8 00:33:03.913927 kernel: Detected PIPT I-cache on CPU0 May 8 00:33:03.913933 kernel: CPU features: detected: GIC system register CPU interface May 8 00:33:03.913940 kernel: CPU features: detected: Hardware dirty bit management May 8 00:33:03.913947 kernel: CPU features: detected: Spectre-v4 May 8 00:33:03.913953 kernel: CPU features: detected: Spectre-BHB May 8 00:33:03.913960 kernel: CPU features: kernel page table isolation forced ON by KASLR May 8 00:33:03.913967 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 8 00:33:03.913975 kernel: CPU features: detected: ARM erratum 1418040 May 8 00:33:03.913981 kernel: CPU features: detected: SSBS not fully self-synchronizing May 8 00:33:03.913988 kernel: alternatives: applying boot alternatives May 8 00:33:03.913996 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf May 8 00:33:03.914003 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:33:03.914010 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:33:03.914016 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:33:03.914023 kernel: Fallback order for Node 0: 0 May 8 00:33:03.914030 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 8 00:33:03.914036 kernel: Policy zone: DMA May 8 00:33:03.914043 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:33:03.914050 kernel: software IO TLB: area num 4. May 8 00:33:03.914057 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 8 00:33:03.914064 kernel: Memory: 2386468K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185820K reserved, 0K cma-reserved) May 8 00:33:03.914071 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:33:03.914078 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:33:03.914085 kernel: rcu: RCU event tracing is enabled. May 8 00:33:03.914092 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:33:03.914099 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:33:03.914105 kernel: Tracing variant of Tasks RCU enabled. May 8 00:33:03.914112 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 8 00:33:03.914119 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:33:03.914131 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 8 00:33:03.914141 kernel: GICv3: 256 SPIs implemented May 8 00:33:03.914147 kernel: GICv3: 0 Extended SPIs implemented May 8 00:33:03.914154 kernel: Root IRQ handler: gic_handle_irq May 8 00:33:03.914161 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 8 00:33:03.914168 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 8 00:33:03.914175 kernel: ITS [mem 0x08080000-0x0809ffff] May 8 00:33:03.914181 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 8 00:33:03.914189 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 8 00:33:03.914195 kernel: GICv3: using LPI property table @0x00000000400f0000 May 8 00:33:03.914202 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 8 00:33:03.914221 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 8 00:33:03.914229 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:33:03.914236 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 8 00:33:03.914244 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 8 00:33:03.914250 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 8 00:33:03.914257 kernel: arm-pv: using stolen time PV May 8 00:33:03.914264 kernel: Console: colour dummy device 80x25 May 8 00:33:03.914271 kernel: ACPI: Core revision 20230628 May 8 00:33:03.914278 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 8 00:33:03.914285 kernel: pid_max: default: 32768 minimum: 301 May 8 00:33:03.914292 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:33:03.914301 kernel: landlock: Up and running. May 8 00:33:03.914308 kernel: SELinux: Initializing. May 8 00:33:03.914315 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:33:03.914322 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:33:03.914329 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:33:03.914336 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:33:03.914343 kernel: rcu: Hierarchical SRCU implementation. May 8 00:33:03.914350 kernel: rcu: Max phase no-delay instances is 400. May 8 00:33:03.914357 kernel: Platform MSI: ITS@0x8080000 domain created May 8 00:33:03.914365 kernel: PCI/MSI: ITS@0x8080000 domain created May 8 00:33:03.914372 kernel: Remapping and enabling EFI services. May 8 00:33:03.914379 kernel: smp: Bringing up secondary CPUs ... 
May 8 00:33:03.914386 kernel: Detected PIPT I-cache on CPU1 May 8 00:33:03.914393 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 8 00:33:03.914400 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 8 00:33:03.914406 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:33:03.914413 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 8 00:33:03.914420 kernel: Detected PIPT I-cache on CPU2 May 8 00:33:03.914427 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 8 00:33:03.914435 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 8 00:33:03.914442 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:33:03.914454 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 8 00:33:03.914462 kernel: Detected PIPT I-cache on CPU3 May 8 00:33:03.914470 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 8 00:33:03.914484 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 8 00:33:03.914492 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:33:03.914499 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 8 00:33:03.914506 kernel: smp: Brought up 1 node, 4 CPUs May 8 00:33:03.914516 kernel: SMP: Total of 4 processors activated. May 8 00:33:03.914523 kernel: CPU features: detected: 32-bit EL0 Support May 8 00:33:03.914530 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 8 00:33:03.914538 kernel: CPU features: detected: Common not Private translations May 8 00:33:03.914545 kernel: CPU features: detected: CRC32 instructions May 8 00:33:03.914552 kernel: CPU features: detected: Enhanced Virtualization Traps May 8 00:33:03.914559 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 8 00:33:03.914566 kernel: CPU features: detected: LSE atomic instructions May 8 00:33:03.914575 kernel: CPU features: detected: Privileged Access Never May 8 00:33:03.914582 kernel: CPU features: detected: RAS Extension Support May 8 00:33:03.914589 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 8 00:33:03.914597 kernel: CPU: All CPU(s) started at EL1 May 8 00:33:03.914604 kernel: alternatives: applying system-wide alternatives May 8 00:33:03.914611 kernel: devtmpfs: initialized May 8 00:33:03.914618 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:33:03.914626 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 00:33:03.914633 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:33:03.914641 kernel: SMBIOS 3.0.0 present. 
May 8 00:33:03.914648 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 May 8 00:33:03.914655 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:33:03.914663 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 8 00:33:03.914670 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 8 00:33:03.914677 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 8 00:33:03.914684 kernel: audit: initializing netlink subsys (disabled) May 8 00:33:03.914692 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 May 8 00:33:03.914699 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:33:03.914708 kernel: cpuidle: using governor menu May 8 00:33:03.914715 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 8 00:33:03.914722 kernel: ASID allocator initialised with 32768 entries May 8 00:33:03.914729 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:33:03.914737 kernel: Serial: AMBA PL011 UART driver May 8 00:33:03.914744 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 8 00:33:03.914751 kernel: Modules: 0 pages in range for non-PLT usage May 8 00:33:03.914758 kernel: Modules: 509024 pages in range for PLT usage May 8 00:33:03.914765 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:33:03.914774 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:33:03.914781 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 8 00:33:03.914788 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 8 00:33:03.914795 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:33:03.914803 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:33:03.914810 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 8 00:33:03.914817 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 8 00:33:03.914824 kernel: ACPI: Added _OSI(Module Device) May 8 00:33:03.914831 kernel: ACPI: Added _OSI(Processor Device) May 8 00:33:03.914840 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:33:03.914847 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:33:03.914854 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:33:03.914861 kernel: ACPI: Interpreter enabled May 8 00:33:03.914868 kernel: ACPI: Using GIC for interrupt routing May 8 00:33:03.914875 kernel: ACPI: MCFG table detected, 1 entries May 8 00:33:03.914883 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 8 00:33:03.914890 kernel: printk: console [ttyAMA0] enabled May 8 00:33:03.914897 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:33:03.915040 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:33:03.915114 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 8 00:33:03.915187 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 8 00:33:03.915251 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 8 00:33:03.915317 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 8 00:33:03.915326 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 8 00:33:03.915334 kernel: PCI host bridge to bus 
0000:00 May 8 00:33:03.915406 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 8 00:33:03.915466 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 8 00:33:03.915606 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 8 00:33:03.915668 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:33:03.915751 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 8 00:33:03.915827 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 8 00:33:03.915898 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 8 00:33:03.915966 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 8 00:33:03.916032 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 8 00:33:03.916099 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 8 00:33:03.916174 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 8 00:33:03.916242 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 8 00:33:03.916302 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 8 00:33:03.916363 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 8 00:33:03.916422 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 8 00:33:03.916431 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 8 00:33:03.916439 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 8 00:33:03.916446 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 8 00:33:03.916453 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 8 00:33:03.916460 kernel: iommu: Default domain type: Translated May 8 00:33:03.916468 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 8 00:33:03.916475 kernel: efivars: Registered efivars operations May 8 00:33:03.916493 kernel: vgaarb: loaded May 8 00:33:03.916500 kernel: clocksource: Switched to clocksource arch_sys_counter May 8 00:33:03.916507 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:33:03.916515 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:33:03.916522 kernel: pnp: PnP ACPI init May 8 00:33:03.916605 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 8 00:33:03.916616 kernel: pnp: PnP ACPI: found 1 devices May 8 00:33:03.916624 kernel: NET: Registered PF_INET protocol family May 8 00:33:03.916634 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:33:03.916642 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:33:03.916649 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:33:03.916657 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:33:03.916664 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 8 00:33:03.916671 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:33:03.916679 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:33:03.916686 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:33:03.916693 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:33:03.916702 kernel: PCI: CLS 0 bytes, default 64 May 8 00:33:03.916709 kernel: kvm [1]: HYP mode not available May 8 00:33:03.916716 kernel: Initialise system trusted keyrings May 8 
00:33:03.916724 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 00:33:03.916731 kernel: Key type asymmetric registered May 8 00:33:03.916738 kernel: Asymmetric key parser 'x509' registered May 8 00:33:03.916745 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 8 00:33:03.916753 kernel: io scheduler mq-deadline registered May 8 00:33:03.916760 kernel: io scheduler kyber registered May 8 00:33:03.916768 kernel: io scheduler bfq registered May 8 00:33:03.916776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 8 00:33:03.916783 kernel: ACPI: button: Power Button [PWRB] May 8 00:33:03.916791 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 8 00:33:03.916858 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 8 00:33:03.916868 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:33:03.916876 kernel: thunder_xcv, ver 1.0 May 8 00:33:03.916883 kernel: thunder_bgx, ver 1.0 May 8 00:33:03.916890 kernel: nicpf, ver 1.0 May 8 00:33:03.916899 kernel: nicvf, ver 1.0 May 8 00:33:03.916989 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 8 00:33:03.917055 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:33:03 UTC (1746664383) May 8 00:33:03.917065 kernel: hid: raw HID events driver (C) Jiri Kosina May 8 00:33:03.917073 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 8 00:33:03.917080 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 8 00:33:03.917087 kernel: watchdog: Hard watchdog permanently disabled May 8 00:33:03.917095 kernel: NET: Registered PF_INET6 protocol family May 8 00:33:03.917105 kernel: Segment Routing with IPv6 May 8 00:33:03.917112 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:33:03.917119 kernel: NET: Registered PF_PACKET protocol family May 8 00:33:03.917131 kernel: Key type dns_resolver registered May 8 00:33:03.917139 kernel: registered taskstats version 1 May 8 00:33:03.917146 kernel: Loading compiled-in X.509 certificates May 8 00:33:03.917153 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e350a514a19a92525be490be8fe368f9972240ea' May 8 00:33:03.917161 kernel: Key type .fscrypt registered May 8 00:33:03.917168 kernel: Key type fscrypt-provisioning registered May 8 00:33:03.917177 kernel: ima: No TPM chip found, activating TPM-bypass! May 8 00:33:03.917184 kernel: ima: Allocated hash algorithm: sha1 May 8 00:33:03.917191 kernel: ima: No architecture policies found May 8 00:33:03.917199 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 8 00:33:03.917206 kernel: clk: Disabling unused clocks May 8 00:33:03.917213 kernel: Freeing unused kernel memory: 39424K May 8 00:33:03.917220 kernel: Run /init as init process May 8 00:33:03.917227 kernel: with arguments: May 8 00:33:03.917234 kernel: /init May 8 00:33:03.917243 kernel: with environment: May 8 00:33:03.917250 kernel: HOME=/ May 8 00:33:03.917257 kernel: TERM=linux May 8 00:33:03.917264 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:33:03.917273 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:33:03.917282 systemd[1]: Detected virtualization kvm. 
May 8 00:33:03.917290 systemd[1]: Detected architecture arm64. May 8 00:33:03.917298 systemd[1]: Running in initrd. May 8 00:33:03.917307 systemd[1]: No hostname configured, using default hostname. May 8 00:33:03.917314 systemd[1]: Hostname set to . May 8 00:33:03.917322 systemd[1]: Initializing machine ID from VM UUID. May 8 00:33:03.917330 systemd[1]: Queued start job for default target initrd.target. May 8 00:33:03.917338 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:33:03.917346 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:33:03.917354 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:33:03.917361 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:33:03.917371 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:33:03.917379 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:33:03.917388 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:33:03.917396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:33:03.917404 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:33:03.917412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:33:03.917421 systemd[1]: Reached target paths.target - Path Units. May 8 00:33:03.917429 systemd[1]: Reached target slices.target - Slice Units. May 8 00:33:03.917436 systemd[1]: Reached target swap.target - Swaps. May 8 00:33:03.917444 systemd[1]: Reached target timers.target - Timer Units. May 8 00:33:03.917452 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:33:03.917459 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:33:03.917467 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:33:03.917475 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 8 00:33:03.917507 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:33:03.917518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:33:03.917526 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:33:03.917534 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:33:03.917542 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:33:03.917550 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:33:03.917557 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:33:03.917565 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:33:03.917573 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:33:03.917581 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:33:03.917591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:33:03.917598 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:33:03.917606 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 8 00:33:03.917614 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:33:03.917623 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:33:03.917652 systemd-journald[237]: Collecting audit messages is disabled. May 8 00:33:03.917671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:33:03.917680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:33:03.917690 systemd-journald[237]: Journal started May 8 00:33:03.917708 systemd-journald[237]: Runtime Journal (/run/log/journal/aba024810bf641b192d7cdbb281c5c0d) is 5.9M, max 47.3M, 41.4M free. May 8 00:33:03.909987 systemd-modules-load[238]: Inserted module 'overlay' May 8 00:33:03.920528 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:33:03.921754 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:33:03.926392 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:33:03.926412 kernel: Bridge firewalling registered May 8 00:33:03.926259 systemd-modules-load[238]: Inserted module 'br_netfilter' May 8 00:33:03.927208 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:33:03.929544 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:33:03.930831 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:33:03.934619 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:33:03.939312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:33:03.944689 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:33:03.947383 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:33:03.948776 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:33:03.962642 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:33:03.964957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:33:03.972037 dracut-cmdline[277]: dracut-dracut-053 May 8 00:33:03.974511 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf May 8 00:33:03.991408 systemd-resolved[279]: Positive Trust Anchors: May 8 00:33:03.991426 systemd-resolved[279]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:33:03.991457 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:33:03.996083 systemd-resolved[279]: Defaulting to hostname 'linux'. May 8 00:33:03.998868 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:33:04.000614 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:33:04.036504 kernel: SCSI subsystem initialized May 8 00:33:04.040494 kernel: Loading iSCSI transport class v2.0-870. May 8 00:33:04.048518 kernel: iscsi: registered transport (tcp) May 8 00:33:04.061599 kernel: iscsi: registered transport (qla4xxx) May 8 00:33:04.061660 kernel: QLogic iSCSI HBA Driver May 8 00:33:04.102323 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:33:04.113637 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:33:04.128808 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:33:04.128869 kernel: device-mapper: uevent: version 1.0.3 May 8 00:33:04.130109 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:33:04.178545 kernel: raid6: neonx8 gen() 11925 MB/s May 8 00:33:04.195511 kernel: raid6: neonx4 gen() 15638 MB/s May 8 00:33:04.212505 kernel: raid6: neonx2 gen() 13215 MB/s May 8 00:33:04.229502 kernel: raid6: neonx1 gen() 10489 MB/s May 8 00:33:04.246500 kernel: raid6: int64x8 gen() 6955 MB/s May 8 00:33:04.263500 kernel: raid6: int64x4 gen() 7349 MB/s May 8 00:33:04.280502 kernel: raid6: int64x2 gen() 6128 MB/s May 8 00:33:04.297673 kernel: raid6: int64x1 gen() 5059 MB/s May 8 00:33:04.297691 kernel: raid6: using algorithm neonx4 gen() 15638 MB/s May 8 00:33:04.315700 kernel: raid6: .... xor() 12097 MB/s, rmw enabled May 8 00:33:04.315716 kernel: raid6: using neon recovery algorithm May 8 00:33:04.321916 kernel: xor: measuring software checksum speed May 8 00:33:04.321932 kernel: 8regs : 19783 MB/sec May 8 00:33:04.321942 kernel: 32regs : 19679 MB/sec May 8 00:33:04.322549 kernel: arm64_neon : 26892 MB/sec May 8 00:33:04.322561 kernel: xor: using function: arm64_neon (26892 MB/sec) May 8 00:33:04.374503 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:33:04.387766 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:33:04.395640 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:33:04.408087 systemd-udevd[462]: Using default interface naming scheme 'v255'. May 8 00:33:04.411333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:33:04.435645 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:33:04.447283 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation May 8 00:33:04.476339 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 8 00:33:04.484647 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:33:04.527849 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:33:04.536681 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:33:04.549583 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:33:04.551183 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:33:04.552877 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:33:04.555234 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:33:04.561615 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:33:04.573787 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:33:04.579510 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 8 00:33:04.591728 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 00:33:04.591830 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:33:04.591841 kernel: GPT:9289727 != 19775487 May 8 00:33:04.591850 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:33:04.591859 kernel: GPT:9289727 != 19775487 May 8 00:33:04.591868 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:33:04.591884 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:33:04.585889 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:33:04.585994 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:33:04.590546 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:33:04.592018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:33:04.592161 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:33:04.593369 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:33:04.601723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:33:04.612911 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (515) May 8 00:33:04.615967 kernel: BTRFS: device fsid 0be52225-f929-4b89-9354-df54a643ece0 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (518) May 8 00:33:04.615220 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 8 00:33:04.617800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:33:04.629042 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 8 00:33:04.633961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:33:04.638105 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 8 00:33:04.639306 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 8 00:33:04.652621 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:33:04.654405 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:33:04.660193 disk-uuid[555]: Primary Header is updated. 
May 8 00:33:04.660193 disk-uuid[555]: Secondary Entries is updated. May 8 00:33:04.660193 disk-uuid[555]: Secondary Header is updated. May 8 00:33:04.663502 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:33:04.678205 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:33:05.676654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:33:05.676761 disk-uuid[556]: The operation has completed successfully. May 8 00:33:05.702731 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:33:05.702831 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:33:05.726631 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:33:05.729687 sh[579]: Success May 8 00:33:05.743512 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 8 00:33:05.771357 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:33:05.783782 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:33:05.785534 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 00:33:05.794512 kernel: BTRFS info (device dm-0): first mount of filesystem 0be52225-f929-4b89-9354-df54a643ece0 May 8 00:33:05.794548 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 8 00:33:05.794559 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:33:05.796973 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:33:05.796989 kernel: BTRFS info (device dm-0): using free space tree May 8 00:33:05.800087 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:33:05.801414 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 00:33:05.814668 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:33:05.816235 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:33:05.825205 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8 May 8 00:33:05.825252 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 00:33:05.825980 kernel: BTRFS info (device vda6): using free space tree May 8 00:33:05.828495 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:33:05.835817 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:33:05.837517 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8 May 8 00:33:05.843003 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:33:05.849650 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:33:05.909224 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:33:05.919641 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:33:05.945322 systemd-networkd[771]: lo: Link UP May 8 00:33:05.945334 systemd-networkd[771]: lo: Gained carrier May 8 00:33:05.946023 systemd-networkd[771]: Enumeration completed May 8 00:33:05.946299 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 8 00:33:05.946474 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:33:05.946513 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:33:05.947570 systemd-networkd[771]: eth0: Link UP May 8 00:33:05.947573 systemd-networkd[771]: eth0: Gained carrier May 8 00:33:05.947581 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:33:05.949291 systemd[1]: Reached target network.target - Network. May 8 00:33:05.960092 ignition[674]: Ignition 2.19.0 May 8 00:33:05.960101 ignition[674]: Stage: fetch-offline May 8 00:33:05.960140 ignition[674]: no configs at "/usr/lib/ignition/base.d" May 8 00:33:05.960149 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:33:05.962040 ignition[674]: parsed url from cmdline: "" May 8 00:33:05.962044 ignition[674]: no config URL provided May 8 00:33:05.962048 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:33:05.962055 ignition[674]: no config at "/usr/lib/ignition/user.ign" May 8 00:33:05.962077 ignition[674]: op(1): [started] loading QEMU firmware config module May 8 00:33:05.962082 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 00:33:05.967551 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:33:05.970786 ignition[674]: op(1): [finished] loading QEMU firmware config module May 8 00:33:05.991059 ignition[674]: parsing config with SHA512: 86e6a4cee6521f725c66fb86231a3751ae583b5c48eac3bc956da98c4066337fcaf11bb17b3d34f64bc6c48cf3aed65361d05e5c1df6ae46d9db865779019027 May 8 00:33:05.994970 unknown[674]: fetched base config from "system" May 8 00:33:05.994979 unknown[674]: fetched user config from "qemu" May 8 00:33:05.996564 ignition[674]: fetch-offline: fetch-offline passed May 8 00:33:05.996669 ignition[674]: Ignition finished successfully May 8 00:33:05.998993 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:33:06.000308 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:33:06.016659 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:33:06.026712 ignition[777]: Ignition 2.19.0 May 8 00:33:06.026722 ignition[777]: Stage: kargs May 8 00:33:06.026885 ignition[777]: no configs at "/usr/lib/ignition/base.d" May 8 00:33:06.026893 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:33:06.027735 ignition[777]: kargs: kargs passed May 8 00:33:06.031065 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:33:06.027780 ignition[777]: Ignition finished successfully May 8 00:33:06.046676 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:33:06.056270 ignition[785]: Ignition 2.19.0 May 8 00:33:06.056279 ignition[785]: Stage: disks May 8 00:33:06.056442 ignition[785]: no configs at "/usr/lib/ignition/base.d" May 8 00:33:06.059199 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:33:06.056451 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:33:06.060513 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
May 8 00:33:06.057298 ignition[785]: disks: disks passed May 8 00:33:06.062221 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:33:06.057341 ignition[785]: Ignition finished successfully May 8 00:33:06.064288 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:33:06.066164 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:33:06.067605 systemd[1]: Reached target basic.target - Basic System. May 8 00:33:06.079710 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:33:06.089737 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 00:33:06.095547 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:33:06.112608 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:33:06.161310 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:33:06.162834 kernel: EXT4-fs (vda9): mounted filesystem f1546e2a-34df-485a-a644-37e10cd925e0 r/w with ordered data mode. Quota mode: none. May 8 00:33:06.162591 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:33:06.182596 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:33:06.185440 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:33:06.186421 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:33:06.186459 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:33:06.193625 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804) May 8 00:33:06.186492 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:33:06.192915 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:33:06.200433 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8 May 8 00:33:06.200456 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 00:33:06.200466 kernel: BTRFS info (device vda6): using free space tree May 8 00:33:06.195182 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:33:06.202367 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:33:06.204080 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:33:06.247067 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:33:06.251811 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory May 8 00:33:06.255497 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:33:06.258406 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:33:06.339892 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:33:06.348616 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:33:06.351192 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 8 00:33:06.356517 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8 May 8 00:33:06.375104 ignition[917]: INFO : Ignition 2.19.0 May 8 00:33:06.375104 ignition[917]: INFO : Stage: mount May 8 00:33:06.376788 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:33:06.376788 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:33:06.376788 ignition[917]: INFO : mount: mount passed May 8 00:33:06.376788 ignition[917]: INFO : Ignition finished successfully May 8 00:33:06.376533 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:33:06.378739 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:33:06.388591 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:33:06.793540 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:33:06.806733 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:33:06.813552 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930) May 8 00:33:06.813595 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8 May 8 00:33:06.813616 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 00:33:06.815068 kernel: BTRFS info (device vda6): using free space tree May 8 00:33:06.817493 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:33:06.818398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:33:06.834497 ignition[947]: INFO : Ignition 2.19.0 May 8 00:33:06.834497 ignition[947]: INFO : Stage: files May 8 00:33:06.836225 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:33:06.836225 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:33:06.836225 ignition[947]: DEBUG : files: compiled without relabeling support, skipping May 8 00:33:06.839707 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:33:06.839707 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:33:06.839707 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:33:06.839707 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:33:06.839707 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:33:06.838773 unknown[947]: wrote ssh authorized keys file for user: core May 8 00:33:06.847310 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 8 00:33:06.847310 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 8 00:33:06.887475 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:33:07.081605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 8 00:33:07.081605 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:33:07.085374 ignition[947]: INFO : 
files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 00:33:07.085374 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 8 00:33:07.410309 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 8 00:33:07.902773 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 00:33:07.902773 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 8 00:33:07.906509 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:33:07.906509 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:33:07.906509 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 8 00:33:07.906509 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 8 00:33:07.906509 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:33:07.906509 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:33:07.906509 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 8 00:33:07.906509 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 8 
00:33:07.931603 systemd-networkd[771]: eth0: Gained IPv6LL May 8 00:33:07.939230 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:33:07.944719 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:33:07.947402 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:33:07.947402 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 8 00:33:07.947402 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:33:07.947402 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:33:07.947402 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:33:07.947402 ignition[947]: INFO : files: files passed May 8 00:33:07.947402 ignition[947]: INFO : Ignition finished successfully May 8 00:33:07.947997 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:33:07.958659 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:33:07.961401 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:33:07.962866 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:33:07.962952 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:33:07.969689 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory May 8 00:33:07.972657 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:33:07.972657 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:33:07.976449 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:33:07.975940 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:33:07.978181 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:33:07.989646 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:33:08.019420 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:33:08.019559 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:33:08.021877 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:33:08.023710 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:33:08.025585 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:33:08.026416 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:33:08.043653 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:33:08.056661 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:33:08.064444 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:33:08.065691 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 8 00:33:08.067767 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:33:08.069552 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:33:08.069685 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:33:08.072345 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:33:08.074396 systemd[1]: Stopped target basic.target - Basic System. May 8 00:33:08.076148 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:33:08.078068 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:33:08.080069 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:33:08.082158 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:33:08.083974 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:33:08.085901 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:33:08.087943 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:33:08.089765 systemd[1]: Stopped target swap.target - Swaps. May 8 00:33:08.091320 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:33:08.091454 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:33:08.093946 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:33:08.095894 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:33:08.097926 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:33:08.101545 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:33:08.102863 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:33:08.102989 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:33:08.105983 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:33:08.106112 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:33:08.108120 systemd[1]: Stopped target paths.target - Path Units. May 8 00:33:08.109660 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:33:08.110571 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:33:08.111853 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:33:08.113410 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:33:08.115302 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:33:08.115436 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:33:08.117535 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:33:08.117663 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:33:08.119256 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:33:08.119413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:33:08.121261 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:33:08.121412 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:33:08.137741 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:33:08.139626 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
May 8 00:33:08.139839 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:33:08.145744 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:33:08.146616 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:33:08.146820 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:33:08.151587 ignition[1002]: INFO : Ignition 2.19.0 May 8 00:33:08.151587 ignition[1002]: INFO : Stage: umount May 8 00:33:08.151587 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:33:08.151587 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:33:08.151587 ignition[1002]: INFO : umount: umount passed May 8 00:33:08.151587 ignition[1002]: INFO : Ignition finished successfully May 8 00:33:08.148713 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:33:08.149464 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:33:08.153224 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:33:08.153338 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:33:08.156928 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:33:08.158006 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:33:08.159941 systemd[1]: Stopped target network.target - Network. May 8 00:33:08.161317 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:33:08.161387 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:33:08.163302 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:33:08.163354 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:33:08.165305 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:33:08.165356 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:33:08.167445 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:33:08.167517 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:33:08.169513 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:33:08.171550 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:33:08.174544 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:33:08.183554 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:33:08.183644 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:33:08.184580 systemd-networkd[771]: eth0: DHCPv6 lease lost May 8 00:33:08.186501 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:33:08.188513 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:33:08.190806 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:33:08.190922 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:33:08.194225 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:33:08.194288 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:33:08.195698 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:33:08.195750 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:33:08.212621 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:33:08.213534 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 8 00:33:08.213601 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:33:08.215643 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:33:08.215695 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:33:08.217622 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:33:08.217673 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:33:08.219969 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:33:08.220022 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:33:08.222141 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:33:08.231350 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:33:08.231447 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:33:08.233468 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:33:08.233595 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:33:08.235851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:33:08.235901 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:33:08.237048 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:33:08.237081 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:33:08.239053 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:33:08.239096 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:33:08.241773 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:33:08.241816 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:33:08.244567 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:33:08.244614 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:33:08.258608 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:33:08.259616 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:33:08.259670 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:33:08.261757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:33:08.261799 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:33:08.263981 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:33:08.264088 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:33:08.266148 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:33:08.268277 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:33:08.277848 systemd[1]: Switching root. May 8 00:33:08.307706 systemd-journald[237]: Journal stopped May 8 00:33:09.064114 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
May 8 00:33:09.064172 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:33:09.064185 kernel: SELinux: policy capability open_perms=1 May 8 00:33:09.064195 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:33:09.064208 kernel: SELinux: policy capability always_check_network=0 May 8 00:33:09.064221 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:33:09.064232 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:33:09.064241 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:33:09.064251 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:33:09.064263 kernel: audit: type=1403 audit(1746664388.456:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:33:09.064274 systemd[1]: Successfully loaded SELinux policy in 33.917ms. May 8 00:33:09.064294 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.234ms. May 8 00:33:09.064306 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:33:09.064319 systemd[1]: Detected virtualization kvm. May 8 00:33:09.064330 systemd[1]: Detected architecture arm64. May 8 00:33:09.064340 systemd[1]: Detected first boot. May 8 00:33:09.064351 systemd[1]: Initializing machine ID from VM UUID. May 8 00:33:09.064362 zram_generator::config[1045]: No configuration found. May 8 00:33:09.064377 systemd[1]: Populated /etc with preset unit settings. May 8 00:33:09.064388 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:33:09.064400 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:33:09.064412 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:33:09.064424 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:33:09.064435 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:33:09.064446 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:33:09.064457 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:33:09.064467 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:33:09.064498 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:33:09.064510 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:33:09.064521 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:33:09.064534 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:33:09.064546 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:33:09.064556 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:33:09.064567 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:33:09.064579 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:33:09.064590 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
May 8 00:33:09.064601 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 00:33:09.064612 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:33:09.064622 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:33:09.064635 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:33:09.064646 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:33:09.064657 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:33:09.064668 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:33:09.064679 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:33:09.064690 systemd[1]: Reached target slices.target - Slice Units. May 8 00:33:09.064701 systemd[1]: Reached target swap.target - Swaps. May 8 00:33:09.064712 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:33:09.064724 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:33:09.064735 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:33:09.064747 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:33:09.064757 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:33:09.064768 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:33:09.064780 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:33:09.064790 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:33:09.064801 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:33:09.064813 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:33:09.064825 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:33:09.064836 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:33:09.064847 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:33:09.064858 systemd[1]: Reached target machines.target - Containers. May 8 00:33:09.064869 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:33:09.064880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:33:09.064890 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:33:09.064901 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:33:09.064913 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:33:09.064924 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:33:09.064935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:33:09.064945 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:33:09.064956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:33:09.064967 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 8 00:33:09.064978 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:33:09.064988 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:33:09.065000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:33:09.065011 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:33:09.065021 kernel: fuse: init (API version 7.39) May 8 00:33:09.065032 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:33:09.065042 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:33:09.065053 kernel: loop: module loaded May 8 00:33:09.065063 kernel: ACPI: bus type drm_connector registered May 8 00:33:09.065073 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:33:09.065084 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:33:09.065095 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:33:09.065114 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:33:09.065124 systemd[1]: Stopped verity-setup.service. May 8 00:33:09.065135 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:33:09.065145 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:33:09.065156 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:33:09.065167 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:33:09.065178 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:33:09.065190 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:33:09.065201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:33:09.065229 systemd-journald[1109]: Collecting audit messages is disabled. May 8 00:33:09.065251 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:33:09.065263 systemd-journald[1109]: Journal started May 8 00:33:09.065287 systemd-journald[1109]: Runtime Journal (/run/log/journal/aba024810bf641b192d7cdbb281c5c0d) is 5.9M, max 47.3M, 41.4M free. May 8 00:33:08.837509 systemd[1]: Queued start job for default target multi-user.target. May 8 00:33:08.857520 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:33:08.857880 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:33:09.067163 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:33:09.070541 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:33:09.071238 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:33:09.072550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:33:09.073960 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:33:09.074113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:33:09.075498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:33:09.075637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:33:09.077140 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:33:09.079807 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:33:09.079953 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
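[Annotation] The recurring modprobe@dm_mod/drm/efi_pstore/fuse/loop units above are instances of systemd's modprobe@.service template, which loads the kernel module named by the instance string. The stock template looks approximately like the sketch below (paraphrased from memory, not quoted from this system):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # %i expands to the instance name, e.g. "fuse" for modprobe@fuse.service
    ExecStart=-/usr/sbin/modprobe -abq %i

The kernel lines that follow each unit ("fuse: init", "loop: module loaded", "ACPI: bus type drm_connector registered") are the corresponding modules initializing.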
May 8 00:33:09.081292 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:33:09.081422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:33:09.082838 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:33:09.084314 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:33:09.085844 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:33:09.097747 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:33:09.107576 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:33:09.109630 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:33:09.110756 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:33:09.110799 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:33:09.112875 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:33:09.115109 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:33:09.117271 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:33:09.118395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:33:09.119920 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:33:09.121880 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:33:09.123174 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:33:09.126652 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:33:09.128172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:33:09.130730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:33:09.133346 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:33:09.135685 systemd-journald[1109]: Time spent on flushing to /var/log/journal/aba024810bf641b192d7cdbb281c5c0d is 21.110ms for 853 entries. May 8 00:33:09.135685 systemd-journald[1109]: System Journal (/var/log/journal/aba024810bf641b192d7cdbb281c5c0d) is 8.0M, max 195.6M, 187.6M free. May 8 00:33:09.163364 systemd-journald[1109]: Received client request to flush runtime journal. May 8 00:33:09.163403 kernel: loop0: detected capacity change from 0 to 114328 May 8 00:33:09.137639 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:33:09.140190 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:33:09.146044 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:33:09.147727 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:33:09.149863 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:33:09.153788 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
May 8 00:33:09.158908 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:33:09.168764 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:33:09.171222 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:33:09.175460 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:33:09.178703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:33:09.188506 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:33:09.194984 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:33:09.199073 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:33:09.199764 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 00:33:09.207882 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:33:09.224628 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:33:09.229500 kernel: loop1: detected capacity change from 0 to 201592 May 8 00:33:09.243231 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. May 8 00:33:09.243250 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. May 8 00:33:09.247520 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:33:09.269453 kernel: loop2: detected capacity change from 0 to 114432 May 8 00:33:09.302503 kernel: loop3: detected capacity change from 0 to 114328 May 8 00:33:09.307509 kernel: loop4: detected capacity change from 0 to 201592 May 8 00:33:09.312498 kernel: loop5: detected capacity change from 0 to 114432 May 8 00:33:09.315708 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:33:09.316074 (sd-merge)[1182]: Merged extensions into '/usr'. May 8 00:33:09.322168 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:33:09.322324 systemd[1]: Reloading... May 8 00:33:09.382543 zram_generator::config[1206]: No configuration found. May 8 00:33:09.435860 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:33:09.473459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:33:09.509608 systemd[1]: Reloading finished in 186 ms. May 8 00:33:09.538980 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:33:09.540675 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:33:09.558714 systemd[1]: Starting ensure-sysext.service... May 8 00:33:09.561151 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:33:09.572949 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... May 8 00:33:09.572964 systemd[1]: Reloading... May 8 00:33:09.585052 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
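[Annotation] The (sd-merge) messages above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' images onto /usr, which is what makes the loop0-loop5 capacity changes appear. For an image to be merged it has to carry an extension-release file that matches the host OS; a plausible layout for the kubernetes image written by Ignition earlier (the exact key values are an assumption, not taken from the log):

    # inside kubernetes-v1.32.0-arm64.raw:
    # usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0
    ARCHITECTURE=arm64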
May 8 00:33:09.585355 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:33:09.586074 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:33:09.586313 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. May 8 00:33:09.586371 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. May 8 00:33:09.588821 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:33:09.588834 systemd-tmpfiles[1243]: Skipping /boot May 8 00:33:09.595779 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:33:09.595797 systemd-tmpfiles[1243]: Skipping /boot May 8 00:33:09.621173 zram_generator::config[1273]: No configuration found. May 8 00:33:09.705814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:33:09.741420 systemd[1]: Reloading finished in 168 ms. May 8 00:33:09.755237 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:33:09.763954 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:33:09.772521 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:33:09.775508 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:33:09.777955 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:33:09.783846 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:33:09.790942 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:33:09.795965 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:33:09.801073 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:33:09.806195 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:33:09.807781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:33:09.812972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:33:09.817063 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:33:09.818334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:33:09.820385 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:33:09.825377 systemd-udevd[1312]: Using default interface naming scheme 'v255'. May 8 00:33:09.826012 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:33:09.828511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:33:09.828670 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:33:09.830595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:33:09.830741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:33:09.838712 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 8 00:33:09.838843 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:33:09.842693 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:33:09.850221 augenrules[1333]: No rules May 8 00:33:09.850134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:33:09.854797 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:33:09.857157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:33:09.859915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:33:09.862462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:33:09.863281 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:33:09.865418 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:33:09.867125 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:33:09.870787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:33:09.870922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:33:09.872669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:33:09.872777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:33:09.875356 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:33:09.877514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:33:09.882694 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:33:09.887367 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:33:09.913624 systemd[1]: Finished ensure-sysext.service. May 8 00:33:09.918783 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 8 00:33:09.919457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:33:09.923739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:33:09.926696 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:33:09.929524 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1357) May 8 00:33:09.930680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:33:09.934531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:33:09.935894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:33:09.940657 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:33:09.947667 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:33:09.948926 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:33:09.949407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:33:09.949584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 8 00:33:09.951162 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:33:09.954691 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:33:09.956265 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:33:09.956393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:33:09.958282 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:33:09.958416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:33:09.973459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:33:09.973531 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:33:09.995601 systemd-resolved[1311]: Positive Trust Anchors: May 8 00:33:09.995620 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:33:09.995652 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:33:10.017360 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:33:10.026553 systemd-resolved[1311]: Defaulting to hostname 'linux'. May 8 00:33:10.027124 systemd-networkd[1386]: lo: Link UP May 8 00:33:10.027137 systemd-networkd[1386]: lo: Gained carrier May 8 00:33:10.027799 systemd-networkd[1386]: Enumeration completed May 8 00:33:10.032836 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:33:10.035296 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:33:10.036167 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:33:10.036178 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:33:10.036770 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:33:10.037112 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:33:10.037151 systemd-networkd[1386]: eth0: Link UP May 8 00:33:10.037154 systemd-networkd[1386]: eth0: Gained carrier May 8 00:33:10.037162 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:33:10.038257 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:33:10.039914 systemd[1]: Reached target network.target - Network. May 8 00:33:10.040970 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:33:10.042245 systemd[1]: Reached target time-set.target - System Time Set. 
May 8 00:33:10.044944 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:33:10.047072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:33:10.048705 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:33:10.050575 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:33:10.052157 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. May 8 00:33:10.053904 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:33:10.053957 systemd-timesyncd[1387]: Initial clock synchronization to Thu 2025-05-08 00:33:10.421227 UTC. May 8 00:33:10.062638 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:33:10.078706 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:33:10.089076 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:33:10.093026 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:33:10.120575 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:33:10.122107 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:33:10.123307 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:33:10.124592 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:33:10.125881 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:33:10.127282 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:33:10.128523 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:33:10.129746 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:33:10.130946 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:33:10.130985 systemd[1]: Reached target paths.target - Path Units. May 8 00:33:10.131872 systemd[1]: Reached target timers.target - Timer Units. May 8 00:33:10.133577 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:33:10.135901 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:33:10.144413 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:33:10.146697 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:33:10.148267 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:33:10.149463 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:33:10.150429 systemd[1]: Reached target basic.target - Basic System. May 8 00:33:10.151436 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:33:10.151470 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:33:10.152693 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:33:10.154816 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
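[Annotation] eth0 above is configured by /usr/lib/systemd/network/zz-default.network, whose contents the log does not reproduce. A minimal .network file that would yield the logged behaviour (DHCPv4 address 10.0.0.109/16 plus the earlier IPv6LL/DHCPv6 lease) is sketched below; the [Match] pattern is an assumption:

    [Match]
    Name=eth*

    [Network]
    DHCP=yes

systemd-timesyncd then reaches its time server at 10.0.0.1:123, presumably the NTP server handed out with the same DHCP lease.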
May 8 00:33:10.156467 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:33:10.157595 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:33:10.159684 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:33:10.160830 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:33:10.163683 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:33:10.166303 jq[1415]: false May 8 00:33:10.167581 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:33:10.171755 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:33:10.174841 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:33:10.180669 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:33:10.186166 extend-filesystems[1416]: Found loop3 May 8 00:33:10.187174 extend-filesystems[1416]: Found loop4 May 8 00:33:10.187174 extend-filesystems[1416]: Found loop5 May 8 00:33:10.187174 extend-filesystems[1416]: Found vda May 8 00:33:10.187174 extend-filesystems[1416]: Found vda1 May 8 00:33:10.187174 extend-filesystems[1416]: Found vda2 May 8 00:33:10.187174 extend-filesystems[1416]: Found vda3 May 8 00:33:10.187174 extend-filesystems[1416]: Found usr May 8 00:33:10.187174 extend-filesystems[1416]: Found vda4 May 8 00:33:10.187174 extend-filesystems[1416]: Found vda6 May 8 00:33:10.187174 extend-filesystems[1416]: Found vda7 May 8 00:33:10.187174 extend-filesystems[1416]: Found vda9 May 8 00:33:10.187174 extend-filesystems[1416]: Checking size of /dev/vda9 May 8 00:33:10.190643 dbus-daemon[1414]: [system] SELinux support is enabled May 8 00:33:10.188862 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:33:10.196124 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:33:10.198212 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:33:10.200111 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:33:10.204173 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:33:10.207005 jq[1434]: true May 8 00:33:10.207139 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:33:10.215080 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:33:10.215323 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:33:10.216243 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:33:10.216787 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:33:10.217728 extend-filesystems[1416]: Resized partition /dev/vda9 May 8 00:33:10.220075 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:33:10.221529 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
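[Annotation] prepare-helm.service, installed and preset-enabled by Ignition earlier and started above, is a user-defined oneshot whose actual contents are not shown in the log. A hypothetical unit of roughly this shape would match the tar[1439] output seen further down unpacking linux-arm64/helm into /opt/bin:

    [Unit]
    Description=Unpack helm to /opt/bin
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=true
    # hypothetical commands; the real download/unpack steps are not in the log
    ExecStart=/usr/bin/mkdir -p /opt/bin
    ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /tmp/helm-linux-arm64.tar.gz linux-arm64/helm

    [Install]
    WantedBy=multi-user.target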
May 8 00:33:10.227638 extend-filesystems[1438]: resize2fs 1.47.1 (20-May-2024) May 8 00:33:10.231549 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1347) May 8 00:33:10.233543 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:33:10.241363 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:33:10.241398 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:33:10.242802 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:33:10.242829 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:33:10.246473 jq[1440]: true May 8 00:33:10.247829 (ntainerd)[1441]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:33:10.264533 tar[1439]: linux-arm64/LICENSE May 8 00:33:10.264533 tar[1439]: linux-arm64/helm May 8 00:33:10.263670 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:33:10.264073 systemd-logind[1424]: New seat seat0. May 8 00:33:10.268685 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:33:10.269447 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:33:10.275341 update_engine[1433]: I20250508 00:33:10.275002 1433 main.cc:92] Flatcar Update Engine starting May 8 00:33:10.279226 extend-filesystems[1438]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:33:10.279226 extend-filesystems[1438]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:33:10.279226 extend-filesystems[1438]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:33:10.289032 extend-filesystems[1416]: Resized filesystem in /dev/vda9 May 8 00:33:10.290089 update_engine[1433]: I20250508 00:33:10.281287 1433 update_check_scheduler.cc:74] Next update check in 7m50s May 8 00:33:10.280842 systemd[1]: Started update-engine.service - Update Engine. May 8 00:33:10.283655 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:33:10.283845 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:33:10.292456 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:33:10.332788 bash[1468]: Updated "/home/core/.ssh/authorized_keys" May 8 00:33:10.334299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:33:10.336291 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:33:10.339397 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:33:10.457524 containerd[1441]: time="2025-05-08T00:33:10.457357440Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:33:10.486521 containerd[1441]: time="2025-05-08T00:33:10.486410040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 8 00:33:10.487895 containerd[1441]: time="2025-05-08T00:33:10.487835160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:33:10.487895 containerd[1441]: time="2025-05-08T00:33:10.487868640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:33:10.487895 containerd[1441]: time="2025-05-08T00:33:10.487884000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:33:10.488079 containerd[1441]: time="2025-05-08T00:33:10.488047680Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:33:10.488079 containerd[1441]: time="2025-05-08T00:33:10.488073760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:33:10.488152 containerd[1441]: time="2025-05-08T00:33:10.488133760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:33:10.488172 containerd[1441]: time="2025-05-08T00:33:10.488152240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:33:10.488343 containerd[1441]: time="2025-05-08T00:33:10.488316320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:33:10.488343 containerd[1441]: time="2025-05-08T00:33:10.488337640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:33:10.488385 containerd[1441]: time="2025-05-08T00:33:10.488351200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:33:10.488385 containerd[1441]: time="2025-05-08T00:33:10.488361440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:33:10.488443 containerd[1441]: time="2025-05-08T00:33:10.488429360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:33:10.488672 containerd[1441]: time="2025-05-08T00:33:10.488637240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:33:10.488755 containerd[1441]: time="2025-05-08T00:33:10.488738200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:33:10.488776 containerd[1441]: time="2025-05-08T00:33:10.488757040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:33:10.488840 containerd[1441]: time="2025-05-08T00:33:10.488827960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 8 00:33:10.488881 containerd[1441]: time="2025-05-08T00:33:10.488870640Z" level=info msg="metadata content store policy set" policy=shared May 8 00:33:10.493806 containerd[1441]: time="2025-05-08T00:33:10.493776040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:33:10.493871 containerd[1441]: time="2025-05-08T00:33:10.493823680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:33:10.493871 containerd[1441]: time="2025-05-08T00:33:10.493840640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:33:10.493871 containerd[1441]: time="2025-05-08T00:33:10.493855800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:33:10.493871 containerd[1441]: time="2025-05-08T00:33:10.493869560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:33:10.494066 containerd[1441]: time="2025-05-08T00:33:10.494011760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494289400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494439600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494458000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494471320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494510680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494525440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494537520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494551880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494565560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494578000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494590200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494600960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494621440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495037 containerd[1441]: time="2025-05-08T00:33:10.494635200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494647760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494659680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494670720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494683320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494695400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494707680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494721400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494735600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494746480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494758000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494769760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494784840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494812040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494824560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:33:10.495325 containerd[1441]: time="2025-05-08T00:33:10.494839320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:33:10.496442 containerd[1441]: time="2025-05-08T00:33:10.496415440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:33:10.496631 containerd[1441]: time="2025-05-08T00:33:10.496554080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:33:10.496698 containerd[1441]: time="2025-05-08T00:33:10.496684240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:33:10.496813 containerd[1441]: time="2025-05-08T00:33:10.496796680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:33:10.496949 containerd[1441]: time="2025-05-08T00:33:10.496858880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:33:10.496949 containerd[1441]: time="2025-05-08T00:33:10.496878160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:33:10.496949 containerd[1441]: time="2025-05-08T00:33:10.496889840Z" level=info msg="NRI interface is disabled by configuration." May 8 00:33:10.497026 containerd[1441]: time="2025-05-08T00:33:10.496900480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:33:10.498196 containerd[1441]: time="2025-05-08T00:33:10.497584240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:33:10.498196 containerd[1441]: time="2025-05-08T00:33:10.497714920Z" level=info msg="Connect containerd service" May 8 00:33:10.498196 containerd[1441]: time="2025-05-08T00:33:10.497748600Z" level=info msg="using legacy CRI server" May 8 00:33:10.498196 containerd[1441]: time="2025-05-08T00:33:10.497754960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:33:10.498196 containerd[1441]: time="2025-05-08T00:33:10.497850800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:33:10.500702 containerd[1441]: time="2025-05-08T00:33:10.498814600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:33:10.500702 containerd[1441]: time="2025-05-08T00:33:10.499059680Z" level=info msg="Start subscribing containerd event" May 8 00:33:10.500702 containerd[1441]: time="2025-05-08T00:33:10.499117520Z" level=info msg="Start recovering state" May 8 00:33:10.500702 containerd[1441]: time="2025-05-08T00:33:10.499181800Z" level=info msg="Start event monitor" May 8 00:33:10.500702 containerd[1441]: time="2025-05-08T00:33:10.499192480Z" level=info msg="Start snapshots syncer" May 8 00:33:10.500702 containerd[1441]: time="2025-05-08T00:33:10.499206080Z" level=info msg="Start cni network conf syncer for default" May 8 00:33:10.500702 containerd[1441]: time="2025-05-08T00:33:10.499216000Z" level=info msg="Start streaming server" May 8 00:33:10.500702 containerd[1441]: time="2025-05-08T00:33:10.499742400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:33:10.500702 containerd[1441]: time="2025-05-08T00:33:10.499906000Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:33:10.500123 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:33:10.501760 containerd[1441]: time="2025-05-08T00:33:10.501046120Z" level=info msg="containerd successfully booted in 0.045428s" May 8 00:33:10.643741 tar[1439]: linux-arm64/README.md May 8 00:33:10.657031 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:33:11.180086 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:33:11.199592 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:33:11.211813 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:33:11.219201 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:33:11.219430 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:33:11.222291 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:33:11.236693 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:33:11.250858 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:33:11.253384 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 00:33:11.254900 systemd[1]: Reached target getty.target - Login Prompts. 
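The "Start cri plugin with config {...}" dump above is the effective CRI configuration: overlayfs snapshotter, runc driven through io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, and CNI directories /opt/cni/bin and /etc/cni/net.d. A minimal sketch for reading those same settings back out of a containerd TOML config; the path /etc/containerd/config.toml is an assumption (Flatcar may ship the file elsewhere or rely on drop-ins), and tomllib needs Python 3.11+.

import tomllib  # stdlib TOML reader, Python 3.11+

# Assumed location of the containerd config; adjust for this host if needed.
with open("/etc/containerd/config.toml", "rb") as f:
    cfg = tomllib.load(f)

cri = cfg.get("plugins", {}).get("io.containerd.grpc.v1.cri", {})
runc = cri.get("containerd", {}).get("runtimes", {}).get("runc", {})
print("snapshotter:  ", cri.get("containerd", {}).get("snapshotter"))   # overlayfs above
print("runtime_type: ", runc.get("runtime_type"))                       # io.containerd.runc.v2
print("SystemdCgroup:", runc.get("options", {}).get("SystemdCgroup"))   # true above
print("sandbox_image:", cri.get("sandbox_image"))                       # registry.k8s.io/pause:3.8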
May 8 00:33:11.836022 systemd-networkd[1386]: eth0: Gained IPv6LL May 8 00:33:11.838450 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:33:11.840420 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:33:11.852798 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:33:11.855400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:33:11.857645 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:33:11.874357 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:33:11.874699 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:33:11.876997 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:33:11.880159 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:33:12.403691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:33:12.405323 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:33:12.407885 systemd[1]: Startup finished in 579ms (kernel) + 4.741s (initrd) + 3.987s (userspace) = 9.308s. May 8 00:33:12.410181 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:33:12.852214 kubelet[1526]: E0508 00:33:12.852091 1526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:33:12.854450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:33:12.854638 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:33:17.040219 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:33:17.041318 systemd[1]: Started sshd@0-10.0.0.109:22-10.0.0.1:37702.service - OpenSSH per-connection server daemon (10.0.0.1:37702). May 8 00:33:17.089622 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 37702 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:33:17.091364 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:33:17.112971 systemd-logind[1424]: New session 1 of user core. May 8 00:33:17.114017 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:33:17.124762 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:33:17.134578 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:33:17.138784 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:33:17.143772 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:33:17.218792 systemd[1544]: Queued start job for default target default.target. May 8 00:33:17.233461 systemd[1544]: Created slice app.slice - User Application Slice. May 8 00:33:17.233492 systemd[1544]: Reached target paths.target - Paths. May 8 00:33:17.233524 systemd[1544]: Reached target timers.target - Timers. May 8 00:33:17.234820 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... 
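The "Startup finished" entry above splits boot time into kernel, initrd and userspace phases; because each term is rounded individually, their sum (0.579 + 4.741 + 3.987 = 9.307) can differ from the reported total of 9.308s by a millisecond. A small parsing sketch using only that line as input:

import re

line = ("Startup finished in 579ms (kernel) + 4.741s (initrd) "
        "+ 3.987s (userspace) = 9.308s.")

def to_seconds(value: str) -> float:
    # systemd prints either "579ms" or "4.741s"
    return float(value[:-2]) / 1000 if value.endswith("ms") else float(value[:-1])

pairs = re.findall(r"([\d.]+m?s) \((\w+)\)", line)          # [('579ms', 'kernel'), ...]
breakdown = {phase: to_seconds(value) for value, phase in pairs}
reported_total = float(re.search(r"= ([\d.]+)s", line).group(1))
print(breakdown)                                            # per-phase times in seconds
print(round(sum(breakdown.values()), 3), "vs", reported_total)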
May 8 00:33:17.244310 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:33:17.244372 systemd[1544]: Reached target sockets.target - Sockets. May 8 00:33:17.244385 systemd[1544]: Reached target basic.target - Basic System. May 8 00:33:17.244418 systemd[1544]: Reached target default.target - Main User Target. May 8 00:33:17.244443 systemd[1544]: Startup finished in 95ms. May 8 00:33:17.244807 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:33:17.245980 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:33:17.306135 systemd[1]: Started sshd@1-10.0.0.109:22-10.0.0.1:37718.service - OpenSSH per-connection server daemon (10.0.0.1:37718). May 8 00:33:17.341850 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 37718 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:33:17.343112 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:33:17.347461 systemd-logind[1424]: New session 2 of user core. May 8 00:33:17.362724 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:33:17.414761 sshd[1555]: pam_unix(sshd:session): session closed for user core May 8 00:33:17.423825 systemd[1]: sshd@1-10.0.0.109:22-10.0.0.1:37718.service: Deactivated successfully. May 8 00:33:17.425073 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:33:17.426343 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. May 8 00:33:17.427464 systemd[1]: Started sshd@2-10.0.0.109:22-10.0.0.1:37728.service - OpenSSH per-connection server daemon (10.0.0.1:37728). May 8 00:33:17.428154 systemd-logind[1424]: Removed session 2. May 8 00:33:17.462747 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 37728 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:33:17.463935 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:33:17.467125 systemd-logind[1424]: New session 3 of user core. May 8 00:33:17.479632 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:33:17.527438 sshd[1562]: pam_unix(sshd:session): session closed for user core May 8 00:33:17.540710 systemd[1]: sshd@2-10.0.0.109:22-10.0.0.1:37728.service: Deactivated successfully. May 8 00:33:17.543639 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:33:17.545671 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. May 8 00:33:17.555743 systemd[1]: Started sshd@3-10.0.0.109:22-10.0.0.1:37736.service - OpenSSH per-connection server daemon (10.0.0.1:37736). May 8 00:33:17.556732 systemd-logind[1424]: Removed session 3. May 8 00:33:17.587494 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 37736 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:33:17.588632 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:33:17.592028 systemd-logind[1424]: New session 4 of user core. May 8 00:33:17.598642 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:33:17.650217 sshd[1569]: pam_unix(sshd:session): session closed for user core May 8 00:33:17.658691 systemd[1]: sshd@3-10.0.0.109:22-10.0.0.1:37736.service: Deactivated successfully. May 8 00:33:17.660054 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:33:17.662629 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. 
May 8 00:33:17.663673 systemd[1]: Started sshd@4-10.0.0.109:22-10.0.0.1:37738.service - OpenSSH per-connection server daemon (10.0.0.1:37738). May 8 00:33:17.664360 systemd-logind[1424]: Removed session 4. May 8 00:33:17.698315 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 37738 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:33:17.699460 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:33:17.702952 systemd-logind[1424]: New session 5 of user core. May 8 00:33:17.718650 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:33:17.786133 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:33:17.786403 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:33:18.098848 (dockerd)[1597]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:33:18.099218 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:33:18.357032 dockerd[1597]: time="2025-05-08T00:33:18.355848167Z" level=info msg="Starting up" May 8 00:33:18.502427 dockerd[1597]: time="2025-05-08T00:33:18.502374368Z" level=info msg="Loading containers: start." May 8 00:33:18.596542 kernel: Initializing XFRM netlink socket May 8 00:33:18.659225 systemd-networkd[1386]: docker0: Link UP May 8 00:33:18.687759 dockerd[1597]: time="2025-05-08T00:33:18.687706020Z" level=info msg="Loading containers: done." May 8 00:33:18.700490 dockerd[1597]: time="2025-05-08T00:33:18.700431373Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:33:18.700653 dockerd[1597]: time="2025-05-08T00:33:18.700563596Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 00:33:18.700681 dockerd[1597]: time="2025-05-08T00:33:18.700669227Z" level=info msg="Daemon has completed initialization" May 8 00:33:18.728648 dockerd[1597]: time="2025-05-08T00:33:18.728428186Z" level=info msg="API listen on /run/docker.sock" May 8 00:33:18.728660 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:33:19.426892 containerd[1441]: time="2025-05-08T00:33:19.426852168Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 8 00:33:19.483370 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1547772221-merged.mount: Deactivated successfully. May 8 00:33:20.170670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168563350.mount: Deactivated successfully. 
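With dockerd reporting "API listen on /run/docker.sock" above, the Engine API can be queried straight over the unix socket using only the standard library. A rough sketch: it needs permission to open the socket, and it sends a plain HTTP/1.0 request so the reply can simply be read until the daemon closes the connection; the Version field in the response should match the version=26.1.0 reported above.

import json
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")                      # socket path taken from the log above
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    raw = b""
    while chunk := s.recv(4096):                       # daemon closes after an HTTP/1.0 reply
        raw += chunk

_headers, _, body = raw.partition(b"\r\n\r\n")
info = json.loads(body)
print(info.get("Version"), info.get("ApiVersion"))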
May 8 00:33:21.731853 containerd[1441]: time="2025-05-08T00:33:21.731801577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:21.733371 containerd[1441]: time="2025-05-08T00:33:21.733275249Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 8 00:33:21.734784 containerd[1441]: time="2025-05-08T00:33:21.734301784Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:21.736903 containerd[1441]: time="2025-05-08T00:33:21.736849720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:21.738188 containerd[1441]: time="2025-05-08T00:33:21.737991875Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.311093953s" May 8 00:33:21.738188 containerd[1441]: time="2025-05-08T00:33:21.738024059Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 8 00:33:21.738953 containerd[1441]: time="2025-05-08T00:33:21.738933799Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 8 00:33:23.105463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:33:23.116662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:33:23.222132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:33:23.225881 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:33:23.269165 kubelet[1807]: E0508 00:33:23.269108 1807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:33:23.272215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:33:23.272477 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
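This is the second time the kubelet exits because /var/lib/kubelet/config.yaml does not exist; on a kubeadm-style node that file is normally written by kubeadm init or kubeadm join, and until then systemd keeps restarting the unit, as the restart-counter entries show. Purely to illustrate the file's shape (hypothetical values, not this node's eventual configuration, although the containerd socket and static pod path mirror entries elsewhere in this log):

# Illustration only: the real file is generated by kubeadm, not written by hand.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches SystemdCgroup:true in the CRI config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests   # where the control-plane manifests appear later
"""
print("shape of /var/lib/kubelet/config.yaml:\n" + MINIMAL_KUBELET_CONFIG)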
May 8 00:33:23.539335 containerd[1441]: time="2025-05-08T00:33:23.538967138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:23.540048 containerd[1441]: time="2025-05-08T00:33:23.540012156Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 8 00:33:23.541251 containerd[1441]: time="2025-05-08T00:33:23.541189951Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:23.544424 containerd[1441]: time="2025-05-08T00:33:23.544384309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:23.545473 containerd[1441]: time="2025-05-08T00:33:23.545430336Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.80639159s" May 8 00:33:23.545473 containerd[1441]: time="2025-05-08T00:33:23.545465943Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 8 00:33:23.545900 containerd[1441]: time="2025-05-08T00:33:23.545874568Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 8 00:33:24.897081 containerd[1441]: time="2025-05-08T00:33:24.897021655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:24.898096 containerd[1441]: time="2025-05-08T00:33:24.897835912Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 8 00:33:24.898906 containerd[1441]: time="2025-05-08T00:33:24.898871951Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:24.901761 containerd[1441]: time="2025-05-08T00:33:24.901724897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:24.903013 containerd[1441]: time="2025-05-08T00:33:24.902979812Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.357076635s" May 8 00:33:24.903060 containerd[1441]: time="2025-05-08T00:33:24.903013442Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 8 00:33:24.903637 containerd[1441]: 
time="2025-05-08T00:33:24.903431197Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 00:33:25.972463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2492275643.mount: Deactivated successfully. May 8 00:33:26.208873 containerd[1441]: time="2025-05-08T00:33:26.208818177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:26.209568 containerd[1441]: time="2025-05-08T00:33:26.209529156Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 8 00:33:26.210145 containerd[1441]: time="2025-05-08T00:33:26.210113152Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:26.212007 containerd[1441]: time="2025-05-08T00:33:26.211973288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:26.212627 containerd[1441]: time="2025-05-08T00:33:26.212593065Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.309130678s" May 8 00:33:26.212671 containerd[1441]: time="2025-05-08T00:33:26.212629771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 8 00:33:26.213103 containerd[1441]: time="2025-05-08T00:33:26.213062517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 8 00:33:26.705478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435957597.mount: Deactivated successfully. 
May 8 00:33:27.804452 containerd[1441]: time="2025-05-08T00:33:27.804396770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:27.805026 containerd[1441]: time="2025-05-08T00:33:27.804984942Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 8 00:33:27.805822 containerd[1441]: time="2025-05-08T00:33:27.805772590Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:27.809352 containerd[1441]: time="2025-05-08T00:33:27.809307494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:27.810502 containerd[1441]: time="2025-05-08T00:33:27.810447723Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.59726926s" May 8 00:33:27.810541 containerd[1441]: time="2025-05-08T00:33:27.810519028Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 8 00:33:27.811541 containerd[1441]: time="2025-05-08T00:33:27.811513712Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:33:28.321959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102489709.mount: Deactivated successfully. 
May 8 00:33:28.325766 containerd[1441]: time="2025-05-08T00:33:28.325723563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:28.326494 containerd[1441]: time="2025-05-08T00:33:28.326438780Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 8 00:33:28.327104 containerd[1441]: time="2025-05-08T00:33:28.327080973Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:28.329439 containerd[1441]: time="2025-05-08T00:33:28.329404786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:28.330495 containerd[1441]: time="2025-05-08T00:33:28.330450002Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 518.904202ms" May 8 00:33:28.330542 containerd[1441]: time="2025-05-08T00:33:28.330505182Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 8 00:33:28.330945 containerd[1441]: time="2025-05-08T00:33:28.330870628Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 8 00:33:28.905350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount885128273.mount: Deactivated successfully. May 8 00:33:31.476640 containerd[1441]: time="2025-05-08T00:33:31.476570859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:31.478262 containerd[1441]: time="2025-05-08T00:33:31.478227789Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 8 00:33:31.479728 containerd[1441]: time="2025-05-08T00:33:31.479526785Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:31.482301 containerd[1441]: time="2025-05-08T00:33:31.482245800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:31.483684 containerd[1441]: time="2025-05-08T00:33:31.483655105Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.152751217s" May 8 00:33:31.483864 containerd[1441]: time="2025-05-08T00:33:31.483757509Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 8 00:33:33.524724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 8 00:33:33.533414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:33:33.640005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:33:33.644648 (kubelet)[1972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:33:33.682807 kubelet[1972]: E0508 00:33:33.682759 1972 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:33:33.684904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:33:33.685038 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:33:37.077242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:33:37.087700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:33:37.105569 systemd[1]: Reloading requested from client PID 1989 ('systemctl') (unit session-5.scope)... May 8 00:33:37.105585 systemd[1]: Reloading... May 8 00:33:37.170541 zram_generator::config[2031]: No configuration found. May 8 00:33:37.286577 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:33:37.338829 systemd[1]: Reloading finished in 232 ms. May 8 00:33:37.376325 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:33:37.379820 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:33:37.380005 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:33:37.381369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:33:37.476727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:33:37.480869 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:33:37.514686 kubelet[2075]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:33:37.514686 kubelet[2075]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:33:37.514686 kubelet[2075]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
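All three deprecation warnings above point at the kubelet config file. As a migration reference, the mapping below reflects the editor's understanding of the corresponding kubelet.config.k8s.io/v1beta1 fields; the field names themselves are not stated in this log.

# Deprecated flags seen above -> assumed KubeletConfiguration fields.
FLAG_TO_CONFIG_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir":          "volumePluginDir",
    # No replacement field: per the warning, the flag is removed in 1.35 and the image
    # garbage collector takes the sandbox image from the CRI runtime instead.
    "--pod-infra-container-image":  None,
}
for flag, field in FLAG_TO_CONFIG_FIELD.items():
    print(f"{flag:30s} -> {field or 'no replacement field'}")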
May 8 00:33:37.515043 kubelet[2075]: I0508 00:33:37.514748 2075 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:33:38.528519 kubelet[2075]: I0508 00:33:38.527680 2075 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:33:38.528519 kubelet[2075]: I0508 00:33:38.527715 2075 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:33:38.528519 kubelet[2075]: I0508 00:33:38.528025 2075 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:33:38.565662 kubelet[2075]: E0508 00:33:38.565617 2075 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:33:38.566524 kubelet[2075]: I0508 00:33:38.565924 2075 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:33:38.574955 kubelet[2075]: E0508 00:33:38.574912 2075 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:33:38.574955 kubelet[2075]: I0508 00:33:38.574948 2075 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:33:38.577707 kubelet[2075]: I0508 00:33:38.577685 2075 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:33:38.578383 kubelet[2075]: I0508 00:33:38.578320 2075 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:33:38.578581 kubelet[2075]: I0508 00:33:38.578374 2075 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:33:38.578660 kubelet[2075]: I0508 00:33:38.578649 2075 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:33:38.578660 kubelet[2075]: I0508 00:33:38.578659 2075 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:33:38.578907 kubelet[2075]: I0508 00:33:38.578881 2075 state_mem.go:36] "Initialized new in-memory state store" May 8 00:33:38.582497 kubelet[2075]: I0508 00:33:38.582453 2075 kubelet.go:446] "Attempting to sync node with API server" May 8 00:33:38.582497 kubelet[2075]: I0508 00:33:38.582499 2075 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:33:38.582571 kubelet[2075]: I0508 00:33:38.582521 2075 kubelet.go:352] "Adding apiserver pod source" May 8 00:33:38.582571 kubelet[2075]: I0508 00:33:38.582538 2075 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:33:38.585802 kubelet[2075]: I0508 00:33:38.585676 2075 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:33:38.586495 kubelet[2075]: I0508 00:33:38.586466 2075 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:33:38.586630 kubelet[2075]: W0508 00:33:38.586606 2075 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 8 00:33:38.587506 kubelet[2075]: I0508 00:33:38.587434 2075 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:33:38.587506 kubelet[2075]: I0508 00:33:38.587474 2075 server.go:1287] "Started kubelet" May 8 00:33:38.587936 kubelet[2075]: W0508 00:33:38.587891 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 8 00:33:38.588035 kubelet[2075]: E0508 00:33:38.588017 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:33:38.594512 kubelet[2075]: W0508 00:33:38.594429 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 8 00:33:38.594954 kubelet[2075]: E0508 00:33:38.594518 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:33:38.594954 kubelet[2075]: I0508 00:33:38.594716 2075 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:33:38.595374 kubelet[2075]: I0508 00:33:38.595315 2075 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:33:38.595515 kubelet[2075]: I0508 00:33:38.595356 2075 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:33:38.595822 kubelet[2075]: I0508 00:33:38.595739 2075 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:33:38.595947 kubelet[2075]: I0508 00:33:38.595931 2075 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:33:38.596412 kubelet[2075]: I0508 00:33:38.596395 2075 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:33:38.596796 kubelet[2075]: I0508 00:33:38.596685 2075 reconciler.go:26] "Reconciler: start to sync state" May 8 00:33:38.597276 kubelet[2075]: E0508 00:33:38.597143 2075 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:33:38.597338 kubelet[2075]: E0508 00:33:38.597282 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="200ms" May 8 00:33:38.597819 kubelet[2075]: I0508 00:33:38.597781 2075 factory.go:221] Registration of the systemd container factory successfully May 8 00:33:38.598288 kubelet[2075]: I0508 00:33:38.597891 2075 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory May 8 00:33:38.598288 kubelet[2075]: W0508 00:33:38.597987 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 8 00:33:38.598288 kubelet[2075]: E0508 00:33:38.598042 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:33:38.599015 kubelet[2075]: E0508 00:33:38.598471 2075 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d6608d3b823db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:33:38.587452379 +0000 UTC m=+1.103429833,LastTimestamp:2025-05-08 00:33:38.587452379 +0000 UTC m=+1.103429833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:33:38.599262 kubelet[2075]: I0508 00:33:38.595185 2075 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:33:38.599869 kubelet[2075]: I0508 00:33:38.599840 2075 factory.go:221] Registration of the containerd container factory successfully May 8 00:33:38.600948 kubelet[2075]: E0508 00:33:38.600908 2075 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:33:38.601673 kubelet[2075]: I0508 00:33:38.601640 2075 server.go:490] "Adding debug handlers to kubelet server" May 8 00:33:38.611775 kubelet[2075]: I0508 00:33:38.611751 2075 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:33:38.611775 kubelet[2075]: I0508 00:33:38.611768 2075 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:33:38.611920 kubelet[2075]: I0508 00:33:38.611817 2075 state_mem.go:36] "Initialized new in-memory state store" May 8 00:33:38.680697 kubelet[2075]: I0508 00:33:38.680657 2075 policy_none.go:49] "None policy: Start" May 8 00:33:38.680697 kubelet[2075]: I0508 00:33:38.680696 2075 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:33:38.680697 kubelet[2075]: I0508 00:33:38.680710 2075 state_mem.go:35] "Initializing new in-memory state store" May 8 00:33:38.683743 kubelet[2075]: I0508 00:33:38.683687 2075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:33:38.684937 kubelet[2075]: I0508 00:33:38.684897 2075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:33:38.684937 kubelet[2075]: I0508 00:33:38.684930 2075 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:33:38.685371 kubelet[2075]: I0508 00:33:38.684955 2075 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
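Every failed reflector and event call above ends in "dial tcp 10.0.0.109:6443: connect: connection refused", meaning the address is reachable but nothing is listening, which is expected while the apiserver static pod has not started yet. A quick probe that separates that case from routing or firewall problems (host and port copied from the log):

import socket

HOST, PORT = "10.0.0.109", 6443   # the endpoint every failed call above dials

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print("listening: something now answers on 6443")
except ConnectionRefusedError:
    print("connection refused: reachable, but no apiserver listening yet (as in the log)")
except OSError as exc:
    print(f"different failure (routing, firewall, timeout): {exc}")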
May 8 00:33:38.685371 kubelet[2075]: I0508 00:33:38.684962 2075 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:33:38.685371 kubelet[2075]: E0508 00:33:38.685008 2075 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:33:38.685466 kubelet[2075]: W0508 00:33:38.685405 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 8 00:33:38.685466 kubelet[2075]: E0508 00:33:38.685438 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:33:38.687102 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:33:38.697641 kubelet[2075]: E0508 00:33:38.697606 2075 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:33:38.700170 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:33:38.703074 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:33:38.717649 kubelet[2075]: I0508 00:33:38.717393 2075 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:33:38.717649 kubelet[2075]: I0508 00:33:38.717641 2075 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:33:38.717795 kubelet[2075]: I0508 00:33:38.717654 2075 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:33:38.718410 kubelet[2075]: I0508 00:33:38.718293 2075 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:33:38.718934 kubelet[2075]: E0508 00:33:38.718911 2075 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 00:33:38.719030 kubelet[2075]: E0508 00:33:38.718958 2075 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:33:38.792409 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 8 00:33:38.798122 kubelet[2075]: E0508 00:33:38.798070 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="400ms" May 8 00:33:38.810019 kubelet[2075]: E0508 00:33:38.809971 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:33:38.813930 systemd[1]: Created slice kubepods-burstable-pod121348b3bca70f36b59e2cc792a3f05d.slice - libcontainer container kubepods-burstable-pod121348b3bca70f36b59e2cc792a3f05d.slice. 
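The slices created above follow the pattern kubepods-<qos>-pod<uid>.slice for the static pods. A tiny helper that reproduces the names seen in this log; the dash-to-underscore step is the editor's understanding of the systemd cgroup driver and is a no-op for these dash-free static-pod UIDs.

def pod_slice(qos: str, pod_uid: str) -> str:
    # e.g. kubepods-burstable-pod<uid>.slice, as in the "Created slice" entries above
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice("burstable", "2980a8ab51edc665be10a02e33130e15"))
# -> kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice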
May 8 00:33:38.815300 kubelet[2075]: E0508 00:33:38.815274 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:33:38.820278 kubelet[2075]: I0508 00:33:38.820245 2075 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:33:38.820767 kubelet[2075]: E0508 00:33:38.820741 2075 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 8 00:33:38.827201 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 8 00:33:38.828704 kubelet[2075]: E0508 00:33:38.828678 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:33:38.898354 kubelet[2075]: I0508 00:33:38.898109 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/121348b3bca70f36b59e2cc792a3f05d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"121348b3bca70f36b59e2cc792a3f05d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:33:38.898354 kubelet[2075]: I0508 00:33:38.898143 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:33:38.898354 kubelet[2075]: I0508 00:33:38.898163 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:33:38.898354 kubelet[2075]: I0508 00:33:38.898186 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:33:38.898354 kubelet[2075]: I0508 00:33:38.898234 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:33:38.898611 kubelet[2075]: I0508 00:33:38.898268 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/121348b3bca70f36b59e2cc792a3f05d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"121348b3bca70f36b59e2cc792a3f05d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:33:38.898611 kubelet[2075]: I0508 00:33:38.898286 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/121348b3bca70f36b59e2cc792a3f05d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"121348b3bca70f36b59e2cc792a3f05d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:33:38.898611 kubelet[2075]: I0508 00:33:38.898302 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:33:38.898611 kubelet[2075]: I0508 00:33:38.898319 2075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:33:39.022155 kubelet[2075]: I0508 00:33:39.022125 2075 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:33:39.022455 kubelet[2075]: E0508 00:33:39.022429 2075 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 8 00:33:39.110707 kubelet[2075]: E0508 00:33:39.110593 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:39.111435 containerd[1441]: time="2025-05-08T00:33:39.111389210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 8 00:33:39.116185 kubelet[2075]: E0508 00:33:39.116140 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:39.116720 containerd[1441]: time="2025-05-08T00:33:39.116672574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:121348b3bca70f36b59e2cc792a3f05d,Namespace:kube-system,Attempt:0,}" May 8 00:33:39.130093 kubelet[2075]: E0508 00:33:39.130052 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:39.130584 containerd[1441]: time="2025-05-08T00:33:39.130545469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 8 00:33:39.199399 kubelet[2075]: E0508 00:33:39.199350 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="800ms" May 8 00:33:39.423818 kubelet[2075]: I0508 00:33:39.423701 2075 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:33:39.424306 kubelet[2075]: E0508 00:33:39.424261 2075 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: 
connection refused" node="localhost" May 8 00:33:39.503291 kubelet[2075]: W0508 00:33:39.503226 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 8 00:33:39.503373 kubelet[2075]: E0508 00:33:39.503296 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:33:39.527037 kubelet[2075]: W0508 00:33:39.526994 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 8 00:33:39.527037 kubelet[2075]: E0508 00:33:39.527044 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:33:39.628477 kubelet[2075]: W0508 00:33:39.628390 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 8 00:33:39.628477 kubelet[2075]: E0508 00:33:39.628440 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:33:39.668133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount600243877.mount: Deactivated successfully. 
May 8 00:33:39.673706 containerd[1441]: time="2025-05-08T00:33:39.673641042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:33:39.674775 containerd[1441]: time="2025-05-08T00:33:39.674371839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:33:39.675934 containerd[1441]: time="2025-05-08T00:33:39.675898505Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:33:39.676537 containerd[1441]: time="2025-05-08T00:33:39.676464442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:33:39.677452 containerd[1441]: time="2025-05-08T00:33:39.677424850Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:33:39.678604 containerd[1441]: time="2025-05-08T00:33:39.678544712Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:33:39.679273 containerd[1441]: time="2025-05-08T00:33:39.679126586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 8 00:33:39.682470 containerd[1441]: time="2025-05-08T00:33:39.682372968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:33:39.684470 containerd[1441]: time="2025-05-08T00:33:39.683979120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.211282ms" May 8 00:33:39.685006 containerd[1441]: time="2025-05-08T00:33:39.684972524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.340882ms" May 8 00:33:39.687223 containerd[1441]: time="2025-05-08T00:33:39.687183216Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.707751ms" May 8 00:33:39.827946 containerd[1441]: time="2025-05-08T00:33:39.827854322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:33:39.827946 containerd[1441]: time="2025-05-08T00:33:39.827914668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:33:39.827946 containerd[1441]: time="2025-05-08T00:33:39.827929724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:33:39.828191 containerd[1441]: time="2025-05-08T00:33:39.828011814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:33:39.828191 containerd[1441]: time="2025-05-08T00:33:39.828142316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:33:39.828246 containerd[1441]: time="2025-05-08T00:33:39.828194373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:33:39.828246 containerd[1441]: time="2025-05-08T00:33:39.828209990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:33:39.828412 containerd[1441]: time="2025-05-08T00:33:39.828283710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:33:39.829196 containerd[1441]: time="2025-05-08T00:33:39.829062920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:33:39.829196 containerd[1441]: time="2025-05-08T00:33:39.829135760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:33:39.829196 containerd[1441]: time="2025-05-08T00:33:39.829175123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:33:39.830168 containerd[1441]: time="2025-05-08T00:33:39.829744344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:33:39.850769 systemd[1]: Started cri-containerd-676793f946009763fd5596a2e7cc26b3b528a77956938f40355a447bba279b99.scope - libcontainer container 676793f946009763fd5596a2e7cc26b3b528a77956938f40355a447bba279b99. May 8 00:33:39.851962 systemd[1]: Started cri-containerd-71f05fea5a85e001414d55b83600065e89080e796e9e030c844d5c6d7c84dee7.scope - libcontainer container 71f05fea5a85e001414d55b83600065e89080e796e9e030c844d5c6d7c84dee7. May 8 00:33:39.855054 systemd[1]: Started cri-containerd-a9f58fb01ec0173c7177125703867f0073f554dbdd22840abef3d8dcec209cde.scope - libcontainer container a9f58fb01ec0173c7177125703867f0073f554dbdd22840abef3d8dcec209cde. 
May 8 00:33:39.888077 containerd[1441]: time="2025-05-08T00:33:39.888008948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"676793f946009763fd5596a2e7cc26b3b528a77956938f40355a447bba279b99\"" May 8 00:33:39.889130 kubelet[2075]: E0508 00:33:39.889101 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:39.892226 containerd[1441]: time="2025-05-08T00:33:39.892143578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:121348b3bca70f36b59e2cc792a3f05d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9f58fb01ec0173c7177125703867f0073f554dbdd22840abef3d8dcec209cde\"" May 8 00:33:39.892856 containerd[1441]: time="2025-05-08T00:33:39.892730058Z" level=info msg="CreateContainer within sandbox \"676793f946009763fd5596a2e7cc26b3b528a77956938f40355a447bba279b99\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:33:39.893438 kubelet[2075]: E0508 00:33:39.893405 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:39.894497 containerd[1441]: time="2025-05-08T00:33:39.894305897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"71f05fea5a85e001414d55b83600065e89080e796e9e030c844d5c6d7c84dee7\"" May 8 00:33:39.894974 kubelet[2075]: E0508 00:33:39.894907 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:39.895835 containerd[1441]: time="2025-05-08T00:33:39.895806735Z" level=info msg="CreateContainer within sandbox \"a9f58fb01ec0173c7177125703867f0073f554dbdd22840abef3d8dcec209cde\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:33:39.896749 containerd[1441]: time="2025-05-08T00:33:39.896712643Z" level=info msg="CreateContainer within sandbox \"71f05fea5a85e001414d55b83600065e89080e796e9e030c844d5c6d7c84dee7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:33:39.910714 containerd[1441]: time="2025-05-08T00:33:39.910657216Z" level=info msg="CreateContainer within sandbox \"676793f946009763fd5596a2e7cc26b3b528a77956938f40355a447bba279b99\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8c4c2cfc28d0339c5c5e8177331c11423726b17ba64097ce9dd3a187ea7fa065\"" May 8 00:33:39.911461 containerd[1441]: time="2025-05-08T00:33:39.911357700Z" level=info msg="StartContainer for \"8c4c2cfc28d0339c5c5e8177331c11423726b17ba64097ce9dd3a187ea7fa065\"" May 8 00:33:39.915540 containerd[1441]: time="2025-05-08T00:33:39.915498017Z" level=info msg="CreateContainer within sandbox \"a9f58fb01ec0173c7177125703867f0073f554dbdd22840abef3d8dcec209cde\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23c5e034fc9cbc893c4d5beab787aa2880e86a740ccfb2408bea7c50e7206e74\"" May 8 00:33:39.916325 containerd[1441]: time="2025-05-08T00:33:39.916288119Z" level=info msg="StartContainer for \"23c5e034fc9cbc893c4d5beab787aa2880e86a740ccfb2408bea7c50e7206e74\"" May 8 00:33:39.918989 
containerd[1441]: time="2025-05-08T00:33:39.918849113Z" level=info msg="CreateContainer within sandbox \"71f05fea5a85e001414d55b83600065e89080e796e9e030c844d5c6d7c84dee7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"117ffa64a9aba04dff5aa8a3e793c938bd168ddafe3cdd0ea9ee319a57329ea8\"" May 8 00:33:39.919564 containerd[1441]: time="2025-05-08T00:33:39.919516001Z" level=info msg="StartContainer for \"117ffa64a9aba04dff5aa8a3e793c938bd168ddafe3cdd0ea9ee319a57329ea8\"" May 8 00:33:39.926849 kubelet[2075]: W0508 00:33:39.926109 2075 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 8 00:33:39.926849 kubelet[2075]: E0508 00:33:39.926180 2075 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" May 8 00:33:39.943728 systemd[1]: Started cri-containerd-8c4c2cfc28d0339c5c5e8177331c11423726b17ba64097ce9dd3a187ea7fa065.scope - libcontainer container 8c4c2cfc28d0339c5c5e8177331c11423726b17ba64097ce9dd3a187ea7fa065. May 8 00:33:39.947830 systemd[1]: Started cri-containerd-117ffa64a9aba04dff5aa8a3e793c938bd168ddafe3cdd0ea9ee319a57329ea8.scope - libcontainer container 117ffa64a9aba04dff5aa8a3e793c938bd168ddafe3cdd0ea9ee319a57329ea8. May 8 00:33:39.949121 systemd[1]: Started cri-containerd-23c5e034fc9cbc893c4d5beab787aa2880e86a740ccfb2408bea7c50e7206e74.scope - libcontainer container 23c5e034fc9cbc893c4d5beab787aa2880e86a740ccfb2408bea7c50e7206e74. 
May 8 00:33:39.984048 containerd[1441]: time="2025-05-08T00:33:39.983774744Z" level=info msg="StartContainer for \"8c4c2cfc28d0339c5c5e8177331c11423726b17ba64097ce9dd3a187ea7fa065\" returns successfully" May 8 00:33:39.984048 containerd[1441]: time="2025-05-08T00:33:39.983781551Z" level=info msg="StartContainer for \"23c5e034fc9cbc893c4d5beab787aa2880e86a740ccfb2408bea7c50e7206e74\" returns successfully" May 8 00:33:39.997612 containerd[1441]: time="2025-05-08T00:33:39.996305615Z" level=info msg="StartContainer for \"117ffa64a9aba04dff5aa8a3e793c938bd168ddafe3cdd0ea9ee319a57329ea8\" returns successfully" May 8 00:33:40.003618 kubelet[2075]: E0508 00:33:40.000534 2075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="1.6s" May 8 00:33:40.228732 kubelet[2075]: I0508 00:33:40.228614 2075 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:33:40.693504 kubelet[2075]: E0508 00:33:40.693380 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:33:40.693804 kubelet[2075]: E0508 00:33:40.693528 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:40.696239 kubelet[2075]: E0508 00:33:40.696212 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:33:40.696383 kubelet[2075]: E0508 00:33:40.696365 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:40.699548 kubelet[2075]: E0508 00:33:40.699522 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:33:40.699706 kubelet[2075]: E0508 00:33:40.699688 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:41.703674 kubelet[2075]: E0508 00:33:41.703645 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:33:41.704031 kubelet[2075]: E0508 00:33:41.703763 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:41.704111 kubelet[2075]: E0508 00:33:41.704087 2075 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:33:41.704206 kubelet[2075]: E0508 00:33:41.704181 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:42.280358 kubelet[2075]: E0508 00:33:42.280314 2075 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:33:42.355430 
kubelet[2075]: I0508 00:33:42.355339 2075 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:33:42.396829 kubelet[2075]: I0508 00:33:42.396782 2075 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:33:42.403234 kubelet[2075]: E0508 00:33:42.403192 2075 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 8 00:33:42.403234 kubelet[2075]: I0508 00:33:42.403226 2075 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:33:42.405218 kubelet[2075]: E0508 00:33:42.405165 2075 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 8 00:33:42.405218 kubelet[2075]: I0508 00:33:42.405206 2075 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:33:42.406853 kubelet[2075]: E0508 00:33:42.406781 2075 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 8 00:33:42.586208 kubelet[2075]: I0508 00:33:42.586048 2075 apiserver.go:52] "Watching apiserver" May 8 00:33:42.599620 kubelet[2075]: I0508 00:33:42.599569 2075 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:33:43.248515 kubelet[2075]: I0508 00:33:43.248448 2075 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:33:43.254168 kubelet[2075]: E0508 00:33:43.253833 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:43.704457 kubelet[2075]: E0508 00:33:43.704345 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:43.903846 kubelet[2075]: I0508 00:33:43.903817 2075 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:33:43.908749 kubelet[2075]: E0508 00:33:43.908703 2075 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:44.443901 systemd[1]: Reloading requested from client PID 2353 ('systemctl') (unit session-5.scope)... May 8 00:33:44.444247 systemd[1]: Reloading... May 8 00:33:44.511509 zram_generator::config[2395]: No configuration found. May 8 00:33:44.596354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:33:44.663166 systemd[1]: Reloading finished in 218 ms. May 8 00:33:44.699642 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:33:44.717774 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:33:44.718095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
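The transient "no PriorityClass with name system-node-critical was found" errors above clear once the freshly started kube-apiserver finishes bootstrapping its built-in priority classes; the later "already exists" lines confirm the mirror pods do get created. For reference only, the equivalent object expressed with client-go might look like the sketch below; the kubeconfig path is assumed, and in practice the apiserver creates this class itself:

```go
package main

import (
	"context"
	"log"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000, // well-known value of the built-in node-critical class
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		// Normally fails with "already exists": the apiserver bootstraps it itself.
		log.Fatal(err)
	}
}
```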
May 8 00:33:44.718160 systemd[1]: kubelet.service: Consumed 1.520s CPU time, 124.3M memory peak, 0B memory swap peak. May 8 00:33:44.725837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:33:44.851999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:33:44.872930 (kubelet)[2434]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:33:44.929410 kubelet[2434]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:33:44.929410 kubelet[2434]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:33:44.929410 kubelet[2434]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:33:44.930189 kubelet[2434]: I0508 00:33:44.929457 2434 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:33:44.938311 kubelet[2434]: I0508 00:33:44.938229 2434 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:33:44.938311 kubelet[2434]: I0508 00:33:44.938262 2434 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:33:44.938981 kubelet[2434]: I0508 00:33:44.938762 2434 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:33:44.940427 kubelet[2434]: I0508 00:33:44.940392 2434 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:33:44.944007 kubelet[2434]: I0508 00:33:44.943956 2434 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:33:44.949996 kubelet[2434]: E0508 00:33:44.949866 2434 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:33:44.949996 kubelet[2434]: I0508 00:33:44.949910 2434 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:33:44.952248 kubelet[2434]: I0508 00:33:44.952222 2434 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:33:44.952455 kubelet[2434]: I0508 00:33:44.952426 2434 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:33:44.952635 kubelet[2434]: I0508 00:33:44.952455 2434 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:33:44.952707 kubelet[2434]: I0508 00:33:44.952647 2434 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:33:44.952707 kubelet[2434]: I0508 00:33:44.952655 2434 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:33:44.952707 kubelet[2434]: I0508 00:33:44.952695 2434 state_mem.go:36] "Initialized new in-memory state store" May 8 00:33:44.952846 kubelet[2434]: I0508 00:33:44.952835 2434 kubelet.go:446] "Attempting to sync node with API server" May 8 00:33:44.952877 kubelet[2434]: I0508 00:33:44.952851 2434 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:33:44.952877 kubelet[2434]: I0508 00:33:44.952869 2434 kubelet.go:352] "Adding apiserver pod source" May 8 00:33:44.952930 kubelet[2434]: I0508 00:33:44.952877 2434 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:33:44.954505 kubelet[2434]: I0508 00:33:44.954008 2434 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:33:44.957047 kubelet[2434]: I0508 00:33:44.956833 2434 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:33:44.957362 kubelet[2434]: I0508 00:33:44.957339 2434 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:33:44.957402 kubelet[2434]: I0508 00:33:44.957375 2434 server.go:1287] "Started kubelet" May 8 00:33:44.959625 kubelet[2434]: I0508 00:33:44.957463 2434 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:33:44.959625 kubelet[2434]: I0508 00:33:44.957716 2434 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:33:44.959625 kubelet[2434]: I0508 00:33:44.957958 2434 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:33:44.962389 kubelet[2434]: I0508 00:33:44.962358 2434 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:33:44.963075 kubelet[2434]: I0508 00:33:44.963045 2434 server.go:490] "Adding debug handlers to kubelet server" May 8 00:33:44.965911 kubelet[2434]: I0508 00:33:44.965018 2434 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:33:44.967506 kubelet[2434]: I0508 00:33:44.967457 2434 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:33:44.967590 kubelet[2434]: I0508 00:33:44.967582 2434 factory.go:221] Registration of the systemd container factory successfully May 8 00:33:44.967737 kubelet[2434]: I0508 00:33:44.967699 2434 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:33:44.968092 kubelet[2434]: E0508 00:33:44.968054 2434 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:33:44.968764 kubelet[2434]: I0508 00:33:44.968744 2434 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:33:44.969155 kubelet[2434]: I0508 00:33:44.968886 2434 reconciler.go:26] "Reconciler: start to sync state" May 8 00:33:44.975835 kubelet[2434]: E0508 00:33:44.975794 2434 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:33:44.989134 kubelet[2434]: I0508 00:33:44.986669 2434 factory.go:221] Registration of the containerd container factory successfully May 8 00:33:44.995359 kubelet[2434]: I0508 00:33:44.995322 2434 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:33:44.997558 kubelet[2434]: I0508 00:33:44.997503 2434 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:33:44.997558 kubelet[2434]: I0508 00:33:44.997538 2434 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:33:44.997558 kubelet[2434]: I0508 00:33:44.997564 2434 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:33:44.997859 kubelet[2434]: I0508 00:33:44.997573 2434 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:33:44.997859 kubelet[2434]: E0508 00:33:44.997620 2434 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:33:45.030946 kubelet[2434]: I0508 00:33:45.030912 2434 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:33:45.030946 kubelet[2434]: I0508 00:33:45.030938 2434 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:33:45.031106 kubelet[2434]: I0508 00:33:45.030966 2434 state_mem.go:36] "Initialized new in-memory state store" May 8 00:33:45.031166 kubelet[2434]: I0508 00:33:45.031137 2434 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:33:45.031197 kubelet[2434]: I0508 00:33:45.031165 2434 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:33:45.031197 kubelet[2434]: I0508 00:33:45.031186 2434 policy_none.go:49] "None policy: Start" May 8 00:33:45.031197 kubelet[2434]: I0508 00:33:45.031195 2434 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:33:45.031266 kubelet[2434]: I0508 00:33:45.031204 2434 state_mem.go:35] "Initializing new in-memory state store" May 8 00:33:45.031317 kubelet[2434]: I0508 00:33:45.031305 2434 state_mem.go:75] "Updated machine memory state" May 8 00:33:45.040805 kubelet[2434]: I0508 00:33:45.040763 2434 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:33:45.041857 kubelet[2434]: I0508 00:33:45.041378 2434 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:33:45.041857 kubelet[2434]: I0508 00:33:45.041396 2434 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:33:45.041857 kubelet[2434]: I0508 00:33:45.041653 2434 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:33:45.043285 kubelet[2434]: E0508 00:33:45.042739 2434 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 00:33:45.098689 kubelet[2434]: I0508 00:33:45.098497 2434 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:33:45.098689 kubelet[2434]: I0508 00:33:45.098551 2434 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:33:45.098689 kubelet[2434]: I0508 00:33:45.098690 2434 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:33:45.111902 kubelet[2434]: E0508 00:33:45.111867 2434 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:33:45.112105 kubelet[2434]: E0508 00:33:45.112074 2434 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:33:45.146854 kubelet[2434]: I0508 00:33:45.146827 2434 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:33:45.154070 kubelet[2434]: I0508 00:33:45.154019 2434 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 8 00:33:45.154188 kubelet[2434]: I0508 00:33:45.154110 2434 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:33:45.170538 kubelet[2434]: I0508 00:33:45.170465 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:33:45.170538 kubelet[2434]: I0508 00:33:45.170540 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:33:45.170744 kubelet[2434]: I0508 00:33:45.170561 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/121348b3bca70f36b59e2cc792a3f05d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"121348b3bca70f36b59e2cc792a3f05d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:33:45.170744 kubelet[2434]: I0508 00:33:45.170581 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:33:45.170744 kubelet[2434]: I0508 00:33:45.170623 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:33:45.170744 kubelet[2434]: I0508 00:33:45.170656 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:33:45.170744 kubelet[2434]: I0508 00:33:45.170682 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:33:45.170872 kubelet[2434]: I0508 00:33:45.170699 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/121348b3bca70f36b59e2cc792a3f05d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"121348b3bca70f36b59e2cc792a3f05d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:33:45.170872 kubelet[2434]: I0508 00:33:45.170715 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/121348b3bca70f36b59e2cc792a3f05d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"121348b3bca70f36b59e2cc792a3f05d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:33:45.412997 kubelet[2434]: E0508 00:33:45.412842 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:45.413465 kubelet[2434]: E0508 00:33:45.413242 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:45.413992 kubelet[2434]: E0508 00:33:45.413825 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:45.953878 kubelet[2434]: I0508 00:33:45.953610 2434 apiserver.go:52] "Watching apiserver" May 8 00:33:45.969773 kubelet[2434]: I0508 00:33:45.969709 2434 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:33:46.013291 kubelet[2434]: I0508 00:33:46.012592 2434 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:33:46.013291 kubelet[2434]: E0508 00:33:46.012753 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:46.013291 kubelet[2434]: I0508 00:33:46.013030 2434 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:33:46.017893 kubelet[2434]: E0508 00:33:46.017847 2434 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:33:46.019258 kubelet[2434]: E0508 00:33:46.018844 2434 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:33:46.019258 kubelet[2434]: E0508 00:33:46.018949 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:46.019258 kubelet[2434]: E0508 00:33:46.018972 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:46.034088 kubelet[2434]: I0508 00:33:46.033982 2434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.03396761 podStartE2EDuration="3.03396761s" podCreationTimestamp="2025-05-08 00:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:33:46.033899754 +0000 UTC m=+1.154653638" watchObservedRunningTime="2025-05-08 00:33:46.03396761 +0000 UTC m=+1.154721454" May 8 00:33:46.059164 kubelet[2434]: I0508 00:33:46.059105 2434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.059086324 podStartE2EDuration="3.059086324s" podCreationTimestamp="2025-05-08 00:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:33:46.045983362 +0000 UTC m=+1.166737286" watchObservedRunningTime="2025-05-08 00:33:46.059086324 +0000 UTC m=+1.179840208" May 8 00:33:46.077803 kubelet[2434]: I0508 00:33:46.077627 2434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.077610361 podStartE2EDuration="1.077610361s" podCreationTimestamp="2025-05-08 00:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:33:46.059518517 +0000 UTC m=+1.180272361" watchObservedRunningTime="2025-05-08 00:33:46.077610361 +0000 UTC m=+1.198364205" May 8 00:33:46.185699 sudo[1579]: pam_unix(sudo:session): session closed for user root May 8 00:33:46.189347 sshd[1576]: pam_unix(sshd:session): session closed for user core May 8 00:33:46.192587 systemd[1]: sshd@4-10.0.0.109:22-10.0.0.1:37738.service: Deactivated successfully. May 8 00:33:46.194351 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:33:46.194561 systemd[1]: session-5.scope: Consumed 6.888s CPU time, 155.4M memory peak, 0B memory swap peak. May 8 00:33:46.195005 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. May 8 00:33:46.195970 systemd-logind[1424]: Removed session 5. 
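The "Nameserver limits exceeded" warnings that recur throughout come from the kubelet's pod DNS configurer: it applies at most three nameservers, so extra entries in the host's /etc/resolv.conf are dropped and the applied line is trimmed to "1.1.1.1 1.0.0.1 8.8.8.8". A standalone check for that condition on the host (a sketch, not kubelet code; the three-nameserver limit is the behaviour the log messages describe):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// maxNameservers mirrors the limit the kubelet applies when building pod resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("%d nameservers configured; kubelet will only apply the first %d: %s\n",
			len(servers), maxNameservers, strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Println("nameserver count within kubelet limits:", servers)
	}
}
```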
May 8 00:33:47.014074 kubelet[2434]: E0508 00:33:47.013722 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:47.014074 kubelet[2434]: E0508 00:33:47.013852 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:47.014708 kubelet[2434]: E0508 00:33:47.014691 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:48.015304 kubelet[2434]: E0508 00:33:48.015275 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:49.557956 kubelet[2434]: I0508 00:33:49.557910 2434 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:33:49.558596 containerd[1441]: time="2025-05-08T00:33:49.558470485Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:33:49.558838 kubelet[2434]: I0508 00:33:49.558691 2434 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:33:50.548844 systemd[1]: Created slice kubepods-besteffort-pod23c460e4_439a_42a3_8bea_965bbc6f883e.slice - libcontainer container kubepods-besteffort-pod23c460e4_439a_42a3_8bea_965bbc6f883e.slice. May 8 00:33:50.561540 systemd[1]: Created slice kubepods-burstable-pod6c679ff3_fa58_4d2e_a54d_2f9a2076bade.slice - libcontainer container kubepods-burstable-pod6c679ff3_fa58_4d2e_a54d_2f9a2076bade.slice. 
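The "Updating runtime config through cri with podcidr" line shows the kubelet pushing the node's pod CIDR (192.168.0.0/24) to the runtime via the CRI UpdateRuntimeConfig call; containerd then logs that no CNI config template is specified and waits for flannel to drop the CNI config instead. Roughly, that call looks like this (containerd socket path assumed as in the earlier sketches):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Pod CIDR taken from the log line above.
	_, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```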
May 8 00:33:50.609828 kubelet[2434]: I0508 00:33:50.609782 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/6c679ff3-fa58-4d2e-a54d-2f9a2076bade-flannel-cfg\") pod \"kube-flannel-ds-lxllq\" (UID: \"6c679ff3-fa58-4d2e-a54d-2f9a2076bade\") " pod="kube-flannel/kube-flannel-ds-lxllq" May 8 00:33:50.609828 kubelet[2434]: I0508 00:33:50.609824 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqjm9\" (UniqueName: \"kubernetes.io/projected/6c679ff3-fa58-4d2e-a54d-2f9a2076bade-kube-api-access-tqjm9\") pod \"kube-flannel-ds-lxllq\" (UID: \"6c679ff3-fa58-4d2e-a54d-2f9a2076bade\") " pod="kube-flannel/kube-flannel-ds-lxllq" May 8 00:33:50.610181 kubelet[2434]: I0508 00:33:50.609850 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23c460e4-439a-42a3-8bea-965bbc6f883e-kube-proxy\") pod \"kube-proxy-lz7d2\" (UID: \"23c460e4-439a-42a3-8bea-965bbc6f883e\") " pod="kube-system/kube-proxy-lz7d2" May 8 00:33:50.610181 kubelet[2434]: I0508 00:33:50.609868 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23c460e4-439a-42a3-8bea-965bbc6f883e-lib-modules\") pod \"kube-proxy-lz7d2\" (UID: \"23c460e4-439a-42a3-8bea-965bbc6f883e\") " pod="kube-system/kube-proxy-lz7d2" May 8 00:33:50.610181 kubelet[2434]: I0508 00:33:50.609885 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/6c679ff3-fa58-4d2e-a54d-2f9a2076bade-cni-plugin\") pod \"kube-flannel-ds-lxllq\" (UID: \"6c679ff3-fa58-4d2e-a54d-2f9a2076bade\") " pod="kube-flannel/kube-flannel-ds-lxllq" May 8 00:33:50.610181 kubelet[2434]: I0508 00:33:50.609899 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/6c679ff3-fa58-4d2e-a54d-2f9a2076bade-cni\") pod \"kube-flannel-ds-lxllq\" (UID: \"6c679ff3-fa58-4d2e-a54d-2f9a2076bade\") " pod="kube-flannel/kube-flannel-ds-lxllq" May 8 00:33:50.610181 kubelet[2434]: I0508 00:33:50.609915 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c679ff3-fa58-4d2e-a54d-2f9a2076bade-xtables-lock\") pod \"kube-flannel-ds-lxllq\" (UID: \"6c679ff3-fa58-4d2e-a54d-2f9a2076bade\") " pod="kube-flannel/kube-flannel-ds-lxllq" May 8 00:33:50.610310 kubelet[2434]: I0508 00:33:50.609931 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23c460e4-439a-42a3-8bea-965bbc6f883e-xtables-lock\") pod \"kube-proxy-lz7d2\" (UID: \"23c460e4-439a-42a3-8bea-965bbc6f883e\") " pod="kube-system/kube-proxy-lz7d2" May 8 00:33:50.610310 kubelet[2434]: I0508 00:33:50.609945 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6c679ff3-fa58-4d2e-a54d-2f9a2076bade-run\") pod \"kube-flannel-ds-lxllq\" (UID: \"6c679ff3-fa58-4d2e-a54d-2f9a2076bade\") " pod="kube-flannel/kube-flannel-ds-lxllq" May 8 00:33:50.610310 kubelet[2434]: I0508 00:33:50.609961 2434 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvp65\" (UniqueName: \"kubernetes.io/projected/23c460e4-439a-42a3-8bea-965bbc6f883e-kube-api-access-kvp65\") pod \"kube-proxy-lz7d2\" (UID: \"23c460e4-439a-42a3-8bea-965bbc6f883e\") " pod="kube-system/kube-proxy-lz7d2" May 8 00:33:50.859296 kubelet[2434]: E0508 00:33:50.859155 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:50.860136 containerd[1441]: time="2025-05-08T00:33:50.860084284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lz7d2,Uid:23c460e4-439a-42a3-8bea-965bbc6f883e,Namespace:kube-system,Attempt:0,}" May 8 00:33:50.863736 kubelet[2434]: E0508 00:33:50.863649 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:50.864433 containerd[1441]: time="2025-05-08T00:33:50.864395115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-lxllq,Uid:6c679ff3-fa58-4d2e-a54d-2f9a2076bade,Namespace:kube-flannel,Attempt:0,}" May 8 00:33:50.882640 containerd[1441]: time="2025-05-08T00:33:50.882525022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:33:50.882640 containerd[1441]: time="2025-05-08T00:33:50.882583380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:33:50.882640 containerd[1441]: time="2025-05-08T00:33:50.882596308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:33:50.882966 containerd[1441]: time="2025-05-08T00:33:50.882682045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:33:50.887863 kubelet[2434]: E0508 00:33:50.887657 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:50.898826 containerd[1441]: time="2025-05-08T00:33:50.897946189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:33:50.899927 containerd[1441]: time="2025-05-08T00:33:50.899832668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:33:50.901600 containerd[1441]: time="2025-05-08T00:33:50.901346782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:33:50.901600 containerd[1441]: time="2025-05-08T00:33:50.901525059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:33:50.910678 systemd[1]: Started cri-containerd-106210c16bc24a567e04e77be2a218c0ac8294cda9e670c30ffcac23e966e133.scope - libcontainer container 106210c16bc24a567e04e77be2a218c0ac8294cda9e670c30ffcac23e966e133. 
May 8 00:33:50.916590 systemd[1]: Started cri-containerd-8b12488b532b0ab9be16c30d1ceba068d4846b98d2f3a47dfb88485e55e08a7e.scope - libcontainer container 8b12488b532b0ab9be16c30d1ceba068d4846b98d2f3a47dfb88485e55e08a7e. May 8 00:33:50.934401 containerd[1441]: time="2025-05-08T00:33:50.934323078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lz7d2,Uid:23c460e4-439a-42a3-8bea-965bbc6f883e,Namespace:kube-system,Attempt:0,} returns sandbox id \"106210c16bc24a567e04e77be2a218c0ac8294cda9e670c30ffcac23e966e133\"" May 8 00:33:50.935453 kubelet[2434]: E0508 00:33:50.935426 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:50.938260 containerd[1441]: time="2025-05-08T00:33:50.938208710Z" level=info msg="CreateContainer within sandbox \"106210c16bc24a567e04e77be2a218c0ac8294cda9e670c30ffcac23e966e133\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:33:50.949028 containerd[1441]: time="2025-05-08T00:33:50.948921546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-lxllq,Uid:6c679ff3-fa58-4d2e-a54d-2f9a2076bade,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"8b12488b532b0ab9be16c30d1ceba068d4846b98d2f3a47dfb88485e55e08a7e\"" May 8 00:33:50.949710 kubelet[2434]: E0508 00:33:50.949686 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:50.951414 containerd[1441]: time="2025-05-08T00:33:50.951350701Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 8 00:33:50.954698 containerd[1441]: time="2025-05-08T00:33:50.954658513Z" level=info msg="CreateContainer within sandbox \"106210c16bc24a567e04e77be2a218c0ac8294cda9e670c30ffcac23e966e133\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"80c99fdadeaf5258a2793b4ada0e887ba5a1311495d69563ded1c6238121e074\"" May 8 00:33:50.955677 containerd[1441]: time="2025-05-08T00:33:50.955262190Z" level=info msg="StartContainer for \"80c99fdadeaf5258a2793b4ada0e887ba5a1311495d69563ded1c6238121e074\"" May 8 00:33:50.982689 systemd[1]: Started cri-containerd-80c99fdadeaf5258a2793b4ada0e887ba5a1311495d69563ded1c6238121e074.scope - libcontainer container 80c99fdadeaf5258a2793b4ada0e887ba5a1311495d69563ded1c6238121e074. 
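The flannel CNI plugin pull above ('PullImage "docker.io/flannel/flannel-cni-plugin:v1.1.2"', completing a little later with a resolved digest) goes through the CRI image service rather than the runtime service. A sketch of the same pull issued directly, using the same assumed containerd socket:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/flannel/flannel-cni-plugin:v1.1.2"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// On success containerd returns the resolved image reference,
	// matching the later "returns image reference" log lines.
	fmt.Println("pulled:", resp.ImageRef)
}
```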
May 8 00:33:51.007824 containerd[1441]: time="2025-05-08T00:33:51.007782185Z" level=info msg="StartContainer for \"80c99fdadeaf5258a2793b4ada0e887ba5a1311495d69563ded1c6238121e074\" returns successfully" May 8 00:33:51.022016 kubelet[2434]: E0508 00:33:51.021982 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:51.022277 kubelet[2434]: E0508 00:33:51.022060 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:51.044948 kubelet[2434]: I0508 00:33:51.044720 2434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lz7d2" podStartSLOduration=1.044702285 podStartE2EDuration="1.044702285s" podCreationTimestamp="2025-05-08 00:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:33:51.035556753 +0000 UTC m=+6.156310597" watchObservedRunningTime="2025-05-08 00:33:51.044702285 +0000 UTC m=+6.165456129" May 8 00:33:52.026500 kubelet[2434]: E0508 00:33:52.025832 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:52.120820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391887402.mount: Deactivated successfully. May 8 00:33:52.146320 containerd[1441]: time="2025-05-08T00:33:52.146261666Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:52.146722 containerd[1441]: time="2025-05-08T00:33:52.146684475Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" May 8 00:33:52.147583 containerd[1441]: time="2025-05-08T00:33:52.147552908Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:52.149960 containerd[1441]: time="2025-05-08T00:33:52.149885085Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:52.150917 containerd[1441]: time="2025-05-08T00:33:52.150881193Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.199474696s" May 8 00:33:52.150917 containerd[1441]: time="2025-05-08T00:33:52.150918175Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 8 00:33:52.153496 containerd[1441]: time="2025-05-08T00:33:52.153436261Z" level=info msg="CreateContainer within sandbox \"8b12488b532b0ab9be16c30d1ceba068d4846b98d2f3a47dfb88485e55e08a7e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 8 
00:33:52.164706 containerd[1441]: time="2025-05-08T00:33:52.164658005Z" level=info msg="CreateContainer within sandbox \"8b12488b532b0ab9be16c30d1ceba068d4846b98d2f3a47dfb88485e55e08a7e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"288f73a1f9ec1514d6549bfbb3652a5c0e9d15d0407feef0222d80885b2f66db\"" May 8 00:33:52.165142 containerd[1441]: time="2025-05-08T00:33:52.165107030Z" level=info msg="StartContainer for \"288f73a1f9ec1514d6549bfbb3652a5c0e9d15d0407feef0222d80885b2f66db\"" May 8 00:33:52.201698 systemd[1]: Started cri-containerd-288f73a1f9ec1514d6549bfbb3652a5c0e9d15d0407feef0222d80885b2f66db.scope - libcontainer container 288f73a1f9ec1514d6549bfbb3652a5c0e9d15d0407feef0222d80885b2f66db. May 8 00:33:52.226109 containerd[1441]: time="2025-05-08T00:33:52.226065333Z" level=info msg="StartContainer for \"288f73a1f9ec1514d6549bfbb3652a5c0e9d15d0407feef0222d80885b2f66db\" returns successfully" May 8 00:33:52.230700 systemd[1]: cri-containerd-288f73a1f9ec1514d6549bfbb3652a5c0e9d15d0407feef0222d80885b2f66db.scope: Deactivated successfully. May 8 00:33:52.269572 containerd[1441]: time="2025-05-08T00:33:52.269509218Z" level=info msg="shim disconnected" id=288f73a1f9ec1514d6549bfbb3652a5c0e9d15d0407feef0222d80885b2f66db namespace=k8s.io May 8 00:33:52.269572 containerd[1441]: time="2025-05-08T00:33:52.269567412Z" level=warning msg="cleaning up after shim disconnected" id=288f73a1f9ec1514d6549bfbb3652a5c0e9d15d0407feef0222d80885b2f66db namespace=k8s.io May 8 00:33:52.269572 containerd[1441]: time="2025-05-08T00:33:52.269576377Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:33:52.722009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-288f73a1f9ec1514d6549bfbb3652a5c0e9d15d0407feef0222d80885b2f66db-rootfs.mount: Deactivated successfully. May 8 00:33:53.028378 kubelet[2434]: E0508 00:33:53.028258 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:53.030085 containerd[1441]: time="2025-05-08T00:33:53.029860030Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 8 00:33:54.158110 kubelet[2434]: E0508 00:33:54.158077 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:54.170738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476935576.mount: Deactivated successfully. 
May 8 00:33:54.700163 containerd[1441]: time="2025-05-08T00:33:54.700109539Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:54.700689 containerd[1441]: time="2025-05-08T00:33:54.700646905Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" May 8 00:33:54.701515 containerd[1441]: time="2025-05-08T00:33:54.701439886Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:54.704322 containerd[1441]: time="2025-05-08T00:33:54.704295846Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:33:54.706617 containerd[1441]: time="2025-05-08T00:33:54.706560330Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.676656316s" May 8 00:33:54.706617 containerd[1441]: time="2025-05-08T00:33:54.706605994Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 8 00:33:54.708649 containerd[1441]: time="2025-05-08T00:33:54.708596893Z" level=info msg="CreateContainer within sandbox \"8b12488b532b0ab9be16c30d1ceba068d4846b98d2f3a47dfb88485e55e08a7e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:33:54.718858 containerd[1441]: time="2025-05-08T00:33:54.718817369Z" level=info msg="CreateContainer within sandbox \"8b12488b532b0ab9be16c30d1ceba068d4846b98d2f3a47dfb88485e55e08a7e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"939e88abc74886693a56461ca27acee00005b31f830fd96a4497b966aae58aa3\"" May 8 00:33:54.719441 containerd[1441]: time="2025-05-08T00:33:54.719415768Z" level=info msg="StartContainer for \"939e88abc74886693a56461ca27acee00005b31f830fd96a4497b966aae58aa3\"" May 8 00:33:54.752670 systemd[1]: Started cri-containerd-939e88abc74886693a56461ca27acee00005b31f830fd96a4497b966aae58aa3.scope - libcontainer container 939e88abc74886693a56461ca27acee00005b31f830fd96a4497b966aae58aa3. May 8 00:33:54.779847 systemd[1]: cri-containerd-939e88abc74886693a56461ca27acee00005b31f830fd96a4497b966aae58aa3.scope: Deactivated successfully. 
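The install-cni-plugin and install-cni containers created above start and exit within moments, which is why systemd can report a cri-containerd scope as deactivated even before the corresponding StartContainer call returns, and why containerd then cleans up the shim. This matches the usual kube-flannel pattern of short init steps that copy a CNI binary and a CNI config onto the host and terminate. A rough sketch of that copy-and-exit step; the source and destination paths are assumptions, since the actual manifest and mounts are not visible in this log:

// installcni.go: illustrative copy-and-exit step in the style of flannel's
// install-cni init container; all paths here are assumed, not taken from the log.
package main

import (
	"fmt"
	"io"
	"os"
)

func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	if _, err := io.Copy(out, in); err != nil {
		return err
	}
	return out.Sync()
}

func main() {
	src := "/etc/kube-flannel/cni-conf.json"    // assumed path inside the image
	dst := "/etc/cni/net.d/10-flannel.conflist" // conventional host path
	if err := copyFile(src, dst); err != nil {
		fmt.Fprintln(os.Stderr, "install-cni:", err)
		os.Exit(1)
	}
	fmt.Println("wrote", dst)
	// Nothing left to do: the process exits here, the container stops, and the
	// runtime tears down its cri-containerd scope, as seen in the log above.
}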
May 8 00:33:54.790828 containerd[1441]: time="2025-05-08T00:33:54.790556007Z" level=info msg="StartContainer for \"939e88abc74886693a56461ca27acee00005b31f830fd96a4497b966aae58aa3\" returns successfully" May 8 00:33:54.870296 containerd[1441]: time="2025-05-08T00:33:54.870235388Z" level=info msg="shim disconnected" id=939e88abc74886693a56461ca27acee00005b31f830fd96a4497b966aae58aa3 namespace=k8s.io May 8 00:33:54.870296 containerd[1441]: time="2025-05-08T00:33:54.870289697Z" level=warning msg="cleaning up after shim disconnected" id=939e88abc74886693a56461ca27acee00005b31f830fd96a4497b966aae58aa3 namespace=k8s.io May 8 00:33:54.870296 containerd[1441]: time="2025-05-08T00:33:54.870300423Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:33:54.879845 kubelet[2434]: I0508 00:33:54.879615 2434 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:33:54.908614 systemd[1]: Created slice kubepods-burstable-pod7585b46e_6e72_4d9f_aa4b_f191e55d145f.slice - libcontainer container kubepods-burstable-pod7585b46e_6e72_4d9f_aa4b_f191e55d145f.slice. May 8 00:33:54.913838 systemd[1]: Created slice kubepods-burstable-pod5b89bdf2_fe5c_4944_9f7b_c3381d780288.slice - libcontainer container kubepods-burstable-pod5b89bdf2_fe5c_4944_9f7b_c3381d780288.slice. May 8 00:33:54.941810 kubelet[2434]: I0508 00:33:54.941770 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7585b46e-6e72-4d9f-aa4b-f191e55d145f-config-volume\") pod \"coredns-668d6bf9bc-g9nk4\" (UID: \"7585b46e-6e72-4d9f-aa4b-f191e55d145f\") " pod="kube-system/coredns-668d6bf9bc-g9nk4" May 8 00:33:54.941810 kubelet[2434]: I0508 00:33:54.941813 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42k5p\" (UniqueName: \"kubernetes.io/projected/7585b46e-6e72-4d9f-aa4b-f191e55d145f-kube-api-access-42k5p\") pod \"coredns-668d6bf9bc-g9nk4\" (UID: \"7585b46e-6e72-4d9f-aa4b-f191e55d145f\") " pod="kube-system/coredns-668d6bf9bc-g9nk4" May 8 00:33:54.942054 kubelet[2434]: I0508 00:33:54.941885 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b89bdf2-fe5c-4944-9f7b-c3381d780288-config-volume\") pod \"coredns-668d6bf9bc-b2rxm\" (UID: \"5b89bdf2-fe5c-4944-9f7b-c3381d780288\") " pod="kube-system/coredns-668d6bf9bc-b2rxm" May 8 00:33:54.942054 kubelet[2434]: I0508 00:33:54.941907 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pghzj\" (UniqueName: \"kubernetes.io/projected/5b89bdf2-fe5c-4944-9f7b-c3381d780288-kube-api-access-pghzj\") pod \"coredns-668d6bf9bc-b2rxm\" (UID: \"5b89bdf2-fe5c-4944-9f7b-c3381d780288\") " pod="kube-system/coredns-668d6bf9bc-b2rxm" May 8 00:33:55.034215 kubelet[2434]: E0508 00:33:55.034188 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:55.034342 kubelet[2434]: E0508 00:33:55.034201 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:55.036056 containerd[1441]: time="2025-05-08T00:33:55.036018786Z" level=info msg="CreateContainer within sandbox 
\"8b12488b532b0ab9be16c30d1ceba068d4846b98d2f3a47dfb88485e55e08a7e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 8 00:33:55.048827 containerd[1441]: time="2025-05-08T00:33:55.048784038Z" level=info msg="CreateContainer within sandbox \"8b12488b532b0ab9be16c30d1ceba068d4846b98d2f3a47dfb88485e55e08a7e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"8da1b1878896cbe8657ee0ee6ed8b705bd6a819b4553d1a1466124e12c3c0c9b\"" May 8 00:33:55.050440 containerd[1441]: time="2025-05-08T00:33:55.049887795Z" level=info msg="StartContainer for \"8da1b1878896cbe8657ee0ee6ed8b705bd6a819b4553d1a1466124e12c3c0c9b\"" May 8 00:33:55.080659 systemd[1]: Started cri-containerd-8da1b1878896cbe8657ee0ee6ed8b705bd6a819b4553d1a1466124e12c3c0c9b.scope - libcontainer container 8da1b1878896cbe8657ee0ee6ed8b705bd6a819b4553d1a1466124e12c3c0c9b. May 8 00:33:55.097719 kubelet[2434]: E0508 00:33:55.097681 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:55.100503 update_engine[1433]: I20250508 00:33:55.100440 1433 update_attempter.cc:509] Updating boot flags... May 8 00:33:55.147778 containerd[1441]: time="2025-05-08T00:33:55.146116950Z" level=info msg="StartContainer for \"8da1b1878896cbe8657ee0ee6ed8b705bd6a819b4553d1a1466124e12c3c0c9b\" returns successfully" May 8 00:33:55.157556 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2948) May 8 00:33:55.193377 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2951) May 8 00:33:55.212412 kubelet[2434]: E0508 00:33:55.212384 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:55.212945 containerd[1441]: time="2025-05-08T00:33:55.212895140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g9nk4,Uid:7585b46e-6e72-4d9f-aa4b-f191e55d145f,Namespace:kube-system,Attempt:0,}" May 8 00:33:55.216550 kubelet[2434]: E0508 00:33:55.216526 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:55.216951 containerd[1441]: time="2025-05-08T00:33:55.216921575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b2rxm,Uid:5b89bdf2-fe5c-4944-9f7b-c3381d780288,Namespace:kube-system,Attempt:0,}" May 8 00:33:55.314079 containerd[1441]: time="2025-05-08T00:33:55.313953655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b2rxm,Uid:5b89bdf2-fe5c-4944-9f7b-c3381d780288,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bacadf130aae8ad71ebc7f799e4f7accbc35e6aabd1d5c54a93f5d10930e584f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 8 00:33:55.314427 kubelet[2434]: E0508 00:33:55.314232 2434 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bacadf130aae8ad71ebc7f799e4f7accbc35e6aabd1d5c54a93f5d10930e584f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 8 00:33:55.314427 kubelet[2434]: E0508 
00:33:55.314352 2434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bacadf130aae8ad71ebc7f799e4f7accbc35e6aabd1d5c54a93f5d10930e584f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-b2rxm" May 8 00:33:55.314427 kubelet[2434]: E0508 00:33:55.314373 2434 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bacadf130aae8ad71ebc7f799e4f7accbc35e6aabd1d5c54a93f5d10930e584f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-b2rxm" May 8 00:33:55.314575 kubelet[2434]: E0508 00:33:55.314407 2434 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-b2rxm_kube-system(5b89bdf2-fe5c-4944-9f7b-c3381d780288)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-b2rxm_kube-system(5b89bdf2-fe5c-4944-9f7b-c3381d780288)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bacadf130aae8ad71ebc7f799e4f7accbc35e6aabd1d5c54a93f5d10930e584f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-b2rxm" podUID="5b89bdf2-fe5c-4944-9f7b-c3381d780288" May 8 00:33:55.314952 containerd[1441]: time="2025-05-08T00:33:55.314902215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g9nk4,Uid:7585b46e-6e72-4d9f-aa4b-f191e55d145f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96109651331098baf97c2871995cc2c8fd64d3da8ccabb25732818c105879861\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 8 00:33:55.315571 kubelet[2434]: E0508 00:33:55.315083 2434 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96109651331098baf97c2871995cc2c8fd64d3da8ccabb25732818c105879861\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 8 00:33:55.315571 kubelet[2434]: E0508 00:33:55.315147 2434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96109651331098baf97c2871995cc2c8fd64d3da8ccabb25732818c105879861\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-g9nk4" May 8 00:33:55.315571 kubelet[2434]: E0508 00:33:55.315161 2434 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96109651331098baf97c2871995cc2c8fd64d3da8ccabb25732818c105879861\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-g9nk4" May 8 00:33:55.315571 kubelet[2434]: E0508 00:33:55.315215 2434 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-g9nk4_kube-system(7585b46e-6e72-4d9f-aa4b-f191e55d145f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g9nk4_kube-system(7585b46e-6e72-4d9f-aa4b-f191e55d145f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96109651331098baf97c2871995cc2c8fd64d3da8ccabb25732818c105879861\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-g9nk4" podUID="7585b46e-6e72-4d9f-aa4b-f191e55d145f" May 8 00:33:56.036790 kubelet[2434]: E0508 00:33:56.036760 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:56.037205 kubelet[2434]: E0508 00:33:56.036917 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:56.037205 kubelet[2434]: E0508 00:33:56.037147 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:56.105501 systemd[1]: run-netns-cni\x2d37bf6c06\x2d2279\x2d562e\x2d18bd\x2d2470434e4292.mount: Deactivated successfully. May 8 00:33:56.105586 systemd[1]: run-netns-cni\x2d3496838b\x2dbb6b\x2da7ce\x2dc72b\x2d699324d1e0ef.mount: Deactivated successfully. May 8 00:33:56.105635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bacadf130aae8ad71ebc7f799e4f7accbc35e6aabd1d5c54a93f5d10930e584f-shm.mount: Deactivated successfully. May 8 00:33:56.105684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96109651331098baf97c2871995cc2c8fd64d3da8ccabb25732818c105879861-shm.mount: Deactivated successfully. 
May 8 00:33:56.246879 systemd-networkd[1386]: flannel.1: Link UP May 8 00:33:56.246885 systemd-networkd[1386]: flannel.1: Gained carrier May 8 00:33:57.040094 kubelet[2434]: E0508 00:33:57.040051 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:33:57.915632 systemd-networkd[1386]: flannel.1: Gained IPv6LL May 8 00:34:06.998904 kubelet[2434]: E0508 00:34:06.998859 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:34:06.999985 containerd[1441]: time="2025-05-08T00:34:06.999649565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b2rxm,Uid:5b89bdf2-fe5c-4944-9f7b-c3381d780288,Namespace:kube-system,Attempt:0,}" May 8 00:34:07.000995 kubelet[2434]: E0508 00:34:07.000386 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:34:07.001748 containerd[1441]: time="2025-05-08T00:34:07.000639625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g9nk4,Uid:7585b46e-6e72-4d9f-aa4b-f191e55d145f,Namespace:kube-system,Attempt:0,}" May 8 00:34:07.057877 systemd-networkd[1386]: cni0: Link UP May 8 00:34:07.063127 systemd-networkd[1386]: veth3c6c8f20: Link UP May 8 00:34:07.065069 kernel: cni0: port 1(veth3c6c8f20) entered blocking state May 8 00:34:07.065169 kernel: cni0: port 1(veth3c6c8f20) entered disabled state May 8 00:34:07.065284 kernel: veth3c6c8f20: entered allmulticast mode May 8 00:34:07.067066 kernel: veth3c6c8f20: entered promiscuous mode May 8 00:34:07.067128 kernel: cni0: port 1(veth3c6c8f20) entered blocking state May 8 00:34:07.067144 kernel: cni0: port 1(veth3c6c8f20) entered forwarding state May 8 00:34:07.068633 kernel: cni0: port 1(veth3c6c8f20) entered disabled state May 8 00:34:07.071139 systemd-networkd[1386]: veth03769f5c: Link UP May 8 00:34:07.075063 kernel: cni0: port 2(veth03769f5c) entered blocking state May 8 00:34:07.075126 kernel: cni0: port 2(veth03769f5c) entered disabled state May 8 00:34:07.075146 kernel: veth03769f5c: entered allmulticast mode May 8 00:34:07.075797 kernel: veth03769f5c: entered promiscuous mode May 8 00:34:07.076581 kernel: cni0: port 2(veth03769f5c) entered blocking state May 8 00:34:07.076620 kernel: cni0: port 2(veth03769f5c) entered forwarding state May 8 00:34:07.080419 kernel: cni0: port 2(veth03769f5c) entered disabled state May 8 00:34:07.080454 kernel: cni0: port 1(veth3c6c8f20) entered blocking state May 8 00:34:07.080494 kernel: cni0: port 1(veth3c6c8f20) entered forwarding state May 8 00:34:07.081002 systemd-networkd[1386]: veth3c6c8f20: Gained carrier May 8 00:34:07.081544 systemd-networkd[1386]: cni0: Gained carrier May 8 00:34:07.084282 kernel: cni0: port 2(veth03769f5c) entered blocking state May 8 00:34:07.084328 kernel: cni0: port 2(veth03769f5c) entered forwarding state May 8 00:34:07.084044 systemd-networkd[1386]: veth03769f5c: Gained carrier May 8 00:34:07.085451 containerd[1441]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 
0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} May 8 00:34:07.085451 containerd[1441]: delegateAdd: netconf sent to delegate plugin: May 8 00:34:07.086437 containerd[1441]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} May 8 00:34:07.086437 containerd[1441]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} May 8 00:34:07.086437 containerd[1441]: delegateAdd: netconf sent to delegate plugin: May 8 00:34:07.106763 containerd[1441]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-08T00:34:07.106525504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:34:07.106763 containerd[1441]: time="2025-05-08T00:34:07.106598165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:34:07.106763 containerd[1441]: time="2025-05-08T00:34:07.106629214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:34:07.106763 containerd[1441]: time="2025-05-08T00:34:07.106717600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:34:07.107857 containerd[1441]: time="2025-05-08T00:34:07.107794834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:34:07.107857 containerd[1441]: time="2025-05-08T00:34:07.107842647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:34:07.107946 containerd[1441]: time="2025-05-08T00:34:07.107858252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:34:07.108008 containerd[1441]: time="2025-05-08T00:34:07.107961402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:34:07.129728 systemd[1]: Started cri-containerd-663deaff91c2888c03966b5e429485abaa7023f9dcbd7c2c27259e2e86b4fac7.scope - libcontainer container 663deaff91c2888c03966b5e429485abaa7023f9dcbd7c2c27259e2e86b4fac7. May 8 00:34:07.130772 systemd[1]: Started cri-containerd-c2afc96206d42da7a149d02790e197e15ee42e9822a93212a90e9bd1dfc35c24.scope - libcontainer container c2afc96206d42da7a149d02790e197e15ee42e9822a93212a90e9bd1dfc35c24. 
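The netconf dumped above (once per sandbox) is the configuration flannel delegates to the standard bridge plugin: host-local IPAM over this node's 192.168.0.0/24 range, a route toward the wider 192.168.0.0/17 pod network, MTU 1450 to leave headroom for the VXLAN encapsulation on flannel.1, and the network name cbr0 (the bridge interface itself is the cni0 that came up above). A sketch that decodes that exact JSON, with the struct trimmed to the fields present in the log:

// delegateconf.go: decode the bridge netconf printed by flannel in the log above.
package main

import (
	"encoding/json"
	"fmt"
)

type Netconf struct {
	CNIVersion       string `json:"cniVersion"`
	Name             string `json:"name"`
	Type             string `json:"type"`
	HairpinMode      bool   `json:"hairpinMode"`
	IPMasq           bool   `json:"ipMasq"`
	IsGateway        bool   `json:"isGateway"`
	IsDefaultGateway bool   `json:"isDefaultGateway"`
	MTU              int    `json:"mtu"`
	IPAM             struct {
		Type   string                `json:"type"`
		Ranges [][]map[string]string `json:"ranges"`
		Routes []struct {
			Dst string `json:"dst"`
		} `json:"routes"`
	} `json:"ipam"`
}

func main() {
	// Verbatim from the delegateAdd lines in the log.
	raw := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

	var nc Netconf
	if err := json.Unmarshal([]byte(raw), &nc); err != nil {
		panic(err)
	}
	fmt.Printf("plugin=%s network=%s mtu=%d ipam=%s subnet=%s route=%s\n",
		nc.Type, nc.Name, nc.MTU, nc.IPAM.Type,
		nc.IPAM.Ranges[0][0]["subnet"], nc.IPAM.Routes[0].Dst)
}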
May 8 00:34:07.140525 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:34:07.142312 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:34:07.159426 containerd[1441]: time="2025-05-08T00:34:07.159376251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b2rxm,Uid:5b89bdf2-fe5c-4944-9f7b-c3381d780288,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2afc96206d42da7a149d02790e197e15ee42e9822a93212a90e9bd1dfc35c24\"" May 8 00:34:07.160190 containerd[1441]: time="2025-05-08T00:34:07.159991150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g9nk4,Uid:7585b46e-6e72-4d9f-aa4b-f191e55d145f,Namespace:kube-system,Attempt:0,} returns sandbox id \"663deaff91c2888c03966b5e429485abaa7023f9dcbd7c2c27259e2e86b4fac7\"" May 8 00:34:07.160534 kubelet[2434]: E0508 00:34:07.160510 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:34:07.161268 kubelet[2434]: E0508 00:34:07.160816 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:34:07.163951 containerd[1441]: time="2025-05-08T00:34:07.163905450Z" level=info msg="CreateContainer within sandbox \"663deaff91c2888c03966b5e429485abaa7023f9dcbd7c2c27259e2e86b4fac7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:34:07.167414 containerd[1441]: time="2025-05-08T00:34:07.167258906Z" level=info msg="CreateContainer within sandbox \"c2afc96206d42da7a149d02790e197e15ee42e9822a93212a90e9bd1dfc35c24\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:34:07.185531 containerd[1441]: time="2025-05-08T00:34:07.185467288Z" level=info msg="CreateContainer within sandbox \"c2afc96206d42da7a149d02790e197e15ee42e9822a93212a90e9bd1dfc35c24\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72eb3a8b9930f82e3cc8a70306e32a90739955a1d4e1f692ebc4ec8b49ef07f0\"" May 8 00:34:07.186517 containerd[1441]: time="2025-05-08T00:34:07.186288927Z" level=info msg="StartContainer for \"72eb3a8b9930f82e3cc8a70306e32a90739955a1d4e1f692ebc4ec8b49ef07f0\"" May 8 00:34:07.186517 containerd[1441]: time="2025-05-08T00:34:07.186454455Z" level=info msg="CreateContainer within sandbox \"663deaff91c2888c03966b5e429485abaa7023f9dcbd7c2c27259e2e86b4fac7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93ae1d4866f5ac07032a4739fc7fb987d98736d875bd778ade815b8f982782e8\"" May 8 00:34:07.192245 containerd[1441]: time="2025-05-08T00:34:07.187078677Z" level=info msg="StartContainer for \"93ae1d4866f5ac07032a4739fc7fb987d98736d875bd778ade815b8f982782e8\"" May 8 00:34:07.213631 systemd[1]: Started cri-containerd-93ae1d4866f5ac07032a4739fc7fb987d98736d875bd778ade815b8f982782e8.scope - libcontainer container 93ae1d4866f5ac07032a4739fc7fb987d98736d875bd778ade815b8f982782e8. May 8 00:34:07.217108 systemd[1]: Started cri-containerd-72eb3a8b9930f82e3cc8a70306e32a90739955a1d4e1f692ebc4ec8b49ef07f0.scope - libcontainer container 72eb3a8b9930f82e3cc8a70306e32a90739955a1d4e1f692ebc4ec8b49ef07f0. 
May 8 00:34:07.236806 containerd[1441]: time="2025-05-08T00:34:07.236708366Z" level=info msg="StartContainer for \"93ae1d4866f5ac07032a4739fc7fb987d98736d875bd778ade815b8f982782e8\" returns successfully" May 8 00:34:07.255865 containerd[1441]: time="2025-05-08T00:34:07.255769676Z" level=info msg="StartContainer for \"72eb3a8b9930f82e3cc8a70306e32a90739955a1d4e1f692ebc4ec8b49ef07f0\" returns successfully" May 8 00:34:08.094220 kubelet[2434]: E0508 00:34:08.093908 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:34:08.097418 kubelet[2434]: E0508 00:34:08.097245 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:34:08.105423 kubelet[2434]: I0508 00:34:08.105358 2434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-lxllq" podStartSLOduration=14.348652376 podStartE2EDuration="18.105052822s" podCreationTimestamp="2025-05-08 00:33:50 +0000 UTC" firstStartedPulling="2025-05-08 00:33:50.950839045 +0000 UTC m=+6.071592889" lastFinishedPulling="2025-05-08 00:33:54.707239531 +0000 UTC m=+9.827993335" observedRunningTime="2025-05-08 00:33:56.05558977 +0000 UTC m=+11.176343614" watchObservedRunningTime="2025-05-08 00:34:08.105052822 +0000 UTC m=+23.225806666" May 8 00:34:08.105914 kubelet[2434]: I0508 00:34:08.105476 2434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g9nk4" podStartSLOduration=18.105472459 podStartE2EDuration="18.105472459s" podCreationTimestamp="2025-05-08 00:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:34:08.105453294 +0000 UTC m=+23.226207138" watchObservedRunningTime="2025-05-08 00:34:08.105472459 +0000 UTC m=+23.226226303" May 8 00:34:08.132274 kubelet[2434]: I0508 00:34:08.132207 2434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-b2rxm" podStartSLOduration=18.132194333 podStartE2EDuration="18.132194333s" podCreationTimestamp="2025-05-08 00:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:34:08.117348501 +0000 UTC m=+23.238102345" watchObservedRunningTime="2025-05-08 00:34:08.132194333 +0000 UTC m=+23.252948177" May 8 00:34:08.283627 systemd-networkd[1386]: veth03769f5c: Gained IPv6LL May 8 00:34:08.539647 systemd-networkd[1386]: veth3c6c8f20: Gained IPv6LL May 8 00:34:09.051663 systemd-networkd[1386]: cni0: Gained IPv6LL May 8 00:34:09.098856 kubelet[2434]: E0508 00:34:09.098817 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:34:09.099179 kubelet[2434]: E0508 00:34:09.098953 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:34:10.100979 kubelet[2434]: E0508 00:34:10.100625 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 
00:34:10.100979 kubelet[2434]: E0508 00:34:10.100756 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:34:12.302809 systemd[1]: Started sshd@5-10.0.0.109:22-10.0.0.1:43746.service - OpenSSH per-connection server daemon (10.0.0.1:43746). May 8 00:34:12.353639 sshd[3408]: Accepted publickey for core from 10.0.0.1 port 43746 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:12.355433 sshd[3408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:12.359188 systemd-logind[1424]: New session 6 of user core. May 8 00:34:12.369698 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:34:12.497278 sshd[3408]: pam_unix(sshd:session): session closed for user core May 8 00:34:12.501398 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. May 8 00:34:12.501722 systemd[1]: sshd@5-10.0.0.109:22-10.0.0.1:43746.service: Deactivated successfully. May 8 00:34:12.503552 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:34:12.504440 systemd-logind[1424]: Removed session 6. May 8 00:34:17.509272 systemd[1]: Started sshd@6-10.0.0.109:22-10.0.0.1:32776.service - OpenSSH per-connection server daemon (10.0.0.1:32776). May 8 00:34:17.557956 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 32776 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:17.559341 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:17.564213 systemd-logind[1424]: New session 7 of user core. May 8 00:34:17.573688 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:34:17.687229 sshd[3447]: pam_unix(sshd:session): session closed for user core May 8 00:34:17.691533 systemd[1]: sshd@6-10.0.0.109:22-10.0.0.1:32776.service: Deactivated successfully. May 8 00:34:17.694003 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:34:17.695185 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. May 8 00:34:17.696904 systemd-logind[1424]: Removed session 7. May 8 00:34:22.701678 systemd[1]: Started sshd@7-10.0.0.109:22-10.0.0.1:35882.service - OpenSSH per-connection server daemon (10.0.0.1:35882). May 8 00:34:22.740080 sshd[3485]: Accepted publickey for core from 10.0.0.1 port 35882 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:22.741469 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:22.745167 systemd-logind[1424]: New session 8 of user core. May 8 00:34:22.756683 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:34:22.874500 sshd[3485]: pam_unix(sshd:session): session closed for user core May 8 00:34:22.883085 systemd[1]: sshd@7-10.0.0.109:22-10.0.0.1:35882.service: Deactivated successfully. May 8 00:34:22.884649 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:34:22.885906 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. May 8 00:34:22.893942 systemd[1]: Started sshd@8-10.0.0.109:22-10.0.0.1:35886.service - OpenSSH per-connection server daemon (10.0.0.1:35886). May 8 00:34:22.895110 systemd-logind[1424]: Removed session 8. 
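The pod startup records earlier in the log (the podStartSLOduration lines) are mutually consistent: for kube-flannel-ds-lxllq, podStartE2EDuration (18.105052822s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (14.348652376s) matches that E2E figure minus the image-pull window taken on the monotonic clock (m=+9.827993335 minus m=+6.071592889). A quick recomputation with the numbers copied from the log; the subtraction below only checks the arithmetic, it is not kubelet's code:

// latency.go: recompute the kube-flannel pod startup figures reported above.
package main

import "fmt"

func main() {
	// Monotonic offsets (the m=+... values) copied from the log entry.
	firstStartedPulling := 6.071592889
	lastFinishedPulling := 9.827993335
	podStartE2E := 18.105052822 // watchObservedRunningTime - podCreationTimestamp

	pull := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pull
	fmt.Printf("pull=%.9fs slo=%.9fs (log reports 14.348652376s)\n", pull, slo)
}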
May 8 00:34:22.926534 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 35886 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:22.927843 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:22.931786 systemd-logind[1424]: New session 9 of user core. May 8 00:34:22.937632 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:34:23.088385 sshd[3500]: pam_unix(sshd:session): session closed for user core May 8 00:34:23.102377 systemd[1]: sshd@8-10.0.0.109:22-10.0.0.1:35886.service: Deactivated successfully. May 8 00:34:23.108875 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:34:23.112588 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. May 8 00:34:23.119888 systemd[1]: Started sshd@9-10.0.0.109:22-10.0.0.1:35900.service - OpenSSH per-connection server daemon (10.0.0.1:35900). May 8 00:34:23.121200 systemd-logind[1424]: Removed session 9. May 8 00:34:23.165931 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 35900 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:23.167172 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:23.171332 systemd-logind[1424]: New session 10 of user core. May 8 00:34:23.187688 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:34:23.300094 sshd[3513]: pam_unix(sshd:session): session closed for user core May 8 00:34:23.304814 systemd[1]: sshd@9-10.0.0.109:22-10.0.0.1:35900.service: Deactivated successfully. May 8 00:34:23.306747 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:34:23.308069 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. May 8 00:34:23.308817 systemd-logind[1424]: Removed session 10. May 8 00:34:28.311534 systemd[1]: Started sshd@10-10.0.0.109:22-10.0.0.1:35916.service - OpenSSH per-connection server daemon (10.0.0.1:35916). May 8 00:34:28.349097 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 35916 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:28.350456 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:28.355293 systemd-logind[1424]: New session 11 of user core. May 8 00:34:28.364705 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:34:28.491035 sshd[3550]: pam_unix(sshd:session): session closed for user core May 8 00:34:28.501113 systemd[1]: sshd@10-10.0.0.109:22-10.0.0.1:35916.service: Deactivated successfully. May 8 00:34:28.503769 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:34:28.505398 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. May 8 00:34:28.514186 systemd[1]: Started sshd@11-10.0.0.109:22-10.0.0.1:35926.service - OpenSSH per-connection server daemon (10.0.0.1:35926). May 8 00:34:28.518929 systemd-logind[1424]: Removed session 11. May 8 00:34:28.548746 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 35926 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:28.550166 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:28.555665 systemd-logind[1424]: New session 12 of user core. May 8 00:34:28.561661 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 8 00:34:28.791013 sshd[3564]: pam_unix(sshd:session): session closed for user core May 8 00:34:28.798140 systemd[1]: sshd@11-10.0.0.109:22-10.0.0.1:35926.service: Deactivated successfully. May 8 00:34:28.800095 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:34:28.802068 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. May 8 00:34:28.810836 systemd[1]: Started sshd@12-10.0.0.109:22-10.0.0.1:35938.service - OpenSSH per-connection server daemon (10.0.0.1:35938). May 8 00:34:28.812584 systemd-logind[1424]: Removed session 12. May 8 00:34:28.850628 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 35938 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:28.851133 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:28.855580 systemd-logind[1424]: New session 13 of user core. May 8 00:34:28.865672 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:34:29.727422 sshd[3577]: pam_unix(sshd:session): session closed for user core May 8 00:34:29.735447 systemd[1]: sshd@12-10.0.0.109:22-10.0.0.1:35938.service: Deactivated successfully. May 8 00:34:29.738109 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:34:29.744228 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. May 8 00:34:29.749878 systemd[1]: Started sshd@13-10.0.0.109:22-10.0.0.1:35948.service - OpenSSH per-connection server daemon (10.0.0.1:35948). May 8 00:34:29.752700 systemd-logind[1424]: Removed session 13. May 8 00:34:29.800358 sshd[3597]: Accepted publickey for core from 10.0.0.1 port 35948 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:29.801841 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:29.805970 systemd-logind[1424]: New session 14 of user core. May 8 00:34:29.815681 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:34:30.041683 sshd[3597]: pam_unix(sshd:session): session closed for user core May 8 00:34:30.052536 systemd[1]: sshd@13-10.0.0.109:22-10.0.0.1:35948.service: Deactivated successfully. May 8 00:34:30.054635 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:34:30.059689 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. May 8 00:34:30.068840 systemd[1]: Started sshd@14-10.0.0.109:22-10.0.0.1:35960.service - OpenSSH per-connection server daemon (10.0.0.1:35960). May 8 00:34:30.070684 systemd-logind[1424]: Removed session 14. May 8 00:34:30.102321 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 35960 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:30.103819 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:30.109567 systemd-logind[1424]: New session 15 of user core. May 8 00:34:30.120684 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:34:30.231007 sshd[3609]: pam_unix(sshd:session): session closed for user core May 8 00:34:30.234603 systemd[1]: sshd@14-10.0.0.109:22-10.0.0.1:35960.service: Deactivated successfully. May 8 00:34:30.238360 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:34:30.239518 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. May 8 00:34:30.240382 systemd-logind[1424]: Removed session 15. 
May 8 00:34:35.244912 systemd[1]: Started sshd@15-10.0.0.109:22-10.0.0.1:52894.service - OpenSSH per-connection server daemon (10.0.0.1:52894). May 8 00:34:35.286604 sshd[3648]: Accepted publickey for core from 10.0.0.1 port 52894 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:35.289025 sshd[3648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:35.293268 systemd-logind[1424]: New session 16 of user core. May 8 00:34:35.299700 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:34:35.428867 sshd[3648]: pam_unix(sshd:session): session closed for user core May 8 00:34:35.434450 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. May 8 00:34:35.435041 systemd[1]: sshd@15-10.0.0.109:22-10.0.0.1:52894.service: Deactivated successfully. May 8 00:34:35.438814 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:34:35.440133 systemd-logind[1424]: Removed session 16. May 8 00:34:40.460475 systemd[1]: Started sshd@16-10.0.0.109:22-10.0.0.1:52902.service - OpenSSH per-connection server daemon (10.0.0.1:52902). May 8 00:34:40.494718 sshd[3683]: Accepted publickey for core from 10.0.0.1 port 52902 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:40.495855 sshd[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:40.506745 systemd-logind[1424]: New session 17 of user core. May 8 00:34:40.521657 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:34:40.642237 sshd[3683]: pam_unix(sshd:session): session closed for user core May 8 00:34:40.646277 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. May 8 00:34:40.646448 systemd[1]: sshd@16-10.0.0.109:22-10.0.0.1:52902.service: Deactivated successfully. May 8 00:34:40.648606 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:34:40.650725 systemd-logind[1424]: Removed session 17. May 8 00:34:45.656927 systemd[1]: Started sshd@17-10.0.0.109:22-10.0.0.1:45248.service - OpenSSH per-connection server daemon (10.0.0.1:45248). May 8 00:34:45.695318 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 45248 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:34:45.696462 sshd[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:34:45.700425 systemd-logind[1424]: New session 18 of user core. May 8 00:34:45.710627 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:34:45.826679 sshd[3721]: pam_unix(sshd:session): session closed for user core May 8 00:34:45.830857 systemd[1]: sshd@17-10.0.0.109:22-10.0.0.1:45248.service: Deactivated successfully. May 8 00:34:45.832462 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:34:45.834667 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. May 8 00:34:45.835415 systemd-logind[1424]: Removed session 18.