May 8 00:39:03.927754 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 00:39:03.927776 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 7 22:57:52 -00 2025
May 8 00:39:03.927785 kernel: KASLR enabled
May 8 00:39:03.927791 kernel: efi: EFI v2.7 by EDK II
May 8 00:39:03.927797 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 8 00:39:03.927803 kernel: random: crng init done
May 8 00:39:03.927810 kernel: ACPI: Early table checksum verification disabled
May 8 00:39:03.927816 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 8 00:39:03.927822 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:39:03.927831 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:03.927837 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:03.927843 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:03.927853 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:03.927860 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:03.927867 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:03.927877 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:03.927883 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:03.927890 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:03.927896 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 00:39:03.927902 kernel: NUMA: Failed to initialise from firmware
May 8 00:39:03.927909 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:39:03.927915 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 8 00:39:03.927921 kernel: Zone ranges:
May 8 00:39:03.927927 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:39:03.927933 kernel: DMA32 empty
May 8 00:39:03.927941 kernel: Normal empty
May 8 00:39:03.927947 kernel: Movable zone start for each node
May 8 00:39:03.927953 kernel: Early memory node ranges
May 8 00:39:03.927959 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 8 00:39:03.927966 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 8 00:39:03.927972 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 8 00:39:03.927978 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 8 00:39:03.927984 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 8 00:39:03.927990 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 8 00:39:03.927997 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 8 00:39:03.928003 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:39:03.928009 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 00:39:03.928017 kernel: psci: probing for conduit method from ACPI.
May 8 00:39:03.928023 kernel: psci: PSCIv1.1 detected in firmware.
May 8 00:39:03.928029 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:39:03.928038 kernel: psci: Trusted OS migration not required
May 8 00:39:03.928045 kernel: psci: SMC Calling Convention v1.1
May 8 00:39:03.928052 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 00:39:03.928061 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 8 00:39:03.928067 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 8 00:39:03.928074 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 00:39:03.928081 kernel: Detected PIPT I-cache on CPU0
May 8 00:39:03.928087 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:39:03.928100 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:39:03.928107 kernel: CPU features: detected: Spectre-v4
May 8 00:39:03.928113 kernel: CPU features: detected: Spectre-BHB
May 8 00:39:03.928120 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:39:03.928127 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:39:03.928135 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:39:03.928141 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:39:03.928148 kernel: alternatives: applying boot alternatives
May 8 00:39:03.928156 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:39:03.928163 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:39:03.928170 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:39:03.928176 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:39:03.928183 kernel: Fallback order for Node 0: 0
May 8 00:39:03.928190 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 00:39:03.928196 kernel: Policy zone: DMA
May 8 00:39:03.928203 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:39:03.928211 kernel: software IO TLB: area num 4.
May 8 00:39:03.928218 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 8 00:39:03.928225 kernel: Memory: 2386468K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185820K reserved, 0K cma-reserved)
May 8 00:39:03.928232 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:39:03.928238 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:39:03.928246 kernel: rcu: RCU event tracing is enabled.
May 8 00:39:03.928253 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:39:03.928260 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:39:03.928266 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:39:03.928273 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:39:03.928280 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:39:03.928286 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:39:03.928294 kernel: GICv3: 256 SPIs implemented
May 8 00:39:03.928301 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:39:03.928308 kernel: Root IRQ handler: gic_handle_irq
May 8 00:39:03.928314 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 8 00:39:03.928328 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 00:39:03.928335 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 00:39:03.928342 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:39:03.928349 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 8 00:39:03.928356 kernel: GICv3: using LPI property table @0x00000000400f0000
May 8 00:39:03.928363 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 8 00:39:03.928370 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:39:03.928392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:39:03.928399 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 00:39:03.928406 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:39:03.928414 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:39:03.928421 kernel: arm-pv: using stolen time PV
May 8 00:39:03.928428 kernel: Console: colour dummy device 80x25
May 8 00:39:03.928435 kernel: ACPI: Core revision 20230628
May 8 00:39:03.928442 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:39:03.928449 kernel: pid_max: default: 32768 minimum: 301
May 8 00:39:03.928456 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:39:03.928464 kernel: landlock: Up and running.
May 8 00:39:03.928471 kernel: SELinux: Initializing.
May 8 00:39:03.928478 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:39:03.928485 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:39:03.928492 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:39:03.928500 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:39:03.928507 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:39:03.928514 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:39:03.928521 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 00:39:03.928530 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 00:39:03.928537 kernel: Remapping and enabling EFI services.
May 8 00:39:03.928543 kernel: smp: Bringing up secondary CPUs ...
May 8 00:39:03.928550 kernel: Detected PIPT I-cache on CPU1
May 8 00:39:03.928557 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 00:39:03.928564 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 8 00:39:03.928571 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:39:03.928578 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 00:39:03.928585 kernel: Detected PIPT I-cache on CPU2
May 8 00:39:03.928592 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 00:39:03.928600 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 8 00:39:03.928607 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:39:03.928619 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 00:39:03.928627 kernel: Detected PIPT I-cache on CPU3
May 8 00:39:03.928634 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 00:39:03.928642 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 8 00:39:03.928649 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:39:03.928656 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 00:39:03.928663 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:39:03.928672 kernel: SMP: Total of 4 processors activated.
May 8 00:39:03.928679 kernel: CPU features: detected: 32-bit EL0 Support
May 8 00:39:03.928686 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 00:39:03.928694 kernel: CPU features: detected: Common not Private translations
May 8 00:39:03.928701 kernel: CPU features: detected: CRC32 instructions
May 8 00:39:03.928708 kernel: CPU features: detected: Enhanced Virtualization Traps
May 8 00:39:03.928715 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 00:39:03.928722 kernel: CPU features: detected: LSE atomic instructions
May 8 00:39:03.928731 kernel: CPU features: detected: Privileged Access Never
May 8 00:39:03.928738 kernel: CPU features: detected: RAS Extension Support
May 8 00:39:03.928745 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 00:39:03.928752 kernel: CPU: All CPU(s) started at EL1
May 8 00:39:03.928760 kernel: alternatives: applying system-wide alternatives
May 8 00:39:03.928767 kernel: devtmpfs: initialized
May 8 00:39:03.928774 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:39:03.928781 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:39:03.928788 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:39:03.928797 kernel: SMBIOS 3.0.0 present.
May 8 00:39:03.928804 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 8 00:39:03.928812 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:39:03.928819 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 00:39:03.928826 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 00:39:03.928833 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 00:39:03.928840 kernel: audit: initializing netlink subsys (disabled)
May 8 00:39:03.928850 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 8 00:39:03.928861 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:39:03.928870 kernel: cpuidle: using governor menu
May 8 00:39:03.928877 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 00:39:03.928884 kernel: ASID allocator initialised with 32768 entries
May 8 00:39:03.928892 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:39:03.928900 kernel: Serial: AMBA PL011 UART driver
May 8 00:39:03.928907 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 8 00:39:03.928914 kernel: Modules: 0 pages in range for non-PLT usage
May 8 00:39:03.928922 kernel: Modules: 509024 pages in range for PLT usage
May 8 00:39:03.928929 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:39:03.928940 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:39:03.928947 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 00:39:03.928955 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 00:39:03.928962 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:39:03.928970 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:39:03.928994 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 00:39:03.929004 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 00:39:03.929013 kernel: ACPI: Added _OSI(Module Device)
May 8 00:39:03.929023 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:39:03.929032 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:39:03.929039 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:39:03.929046 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:39:03.929054 kernel: ACPI: Interpreter enabled
May 8 00:39:03.929061 kernel: ACPI: Using GIC for interrupt routing
May 8 00:39:03.929068 kernel: ACPI: MCFG table detected, 1 entries
May 8 00:39:03.929075 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 00:39:03.929082 kernel: printk: console [ttyAMA0] enabled
May 8 00:39:03.929094 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:39:03.929234 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:39:03.929311 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 00:39:03.929397 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 00:39:03.929464 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 00:39:03.929531 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 00:39:03.929541 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 00:39:03.929548 kernel: PCI host bridge to bus 0000:00
May 8 00:39:03.929626 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 00:39:03.929688 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 00:39:03.929750 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 00:39:03.929812 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:39:03.929897 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 00:39:03.929974 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:39:03.930046 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 00:39:03.930125 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 00:39:03.930199 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:39:03.930269 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:39:03.930350 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 00:39:03.930422 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 00:39:03.930484 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 00:39:03.930549 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 00:39:03.930609 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 00:39:03.930619 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 00:39:03.930626 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 00:39:03.930634 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 00:39:03.930641 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 00:39:03.930649 kernel: iommu: Default domain type: Translated
May 8 00:39:03.930656 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:39:03.930665 kernel: efivars: Registered efivars operations
May 8 00:39:03.930672 kernel: vgaarb: loaded
May 8 00:39:03.930680 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:39:03.930687 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:39:03.930695 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:39:03.930702 kernel: pnp: PnP ACPI init
May 8 00:39:03.930783 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 00:39:03.930794 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:39:03.930802 kernel: NET: Registered PF_INET protocol family
May 8 00:39:03.930811 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:39:03.930819 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:39:03.930826 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:39:03.930834 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:39:03.930841 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:39:03.930849 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:39:03.930856 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:39:03.930864 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:39:03.930871 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:39:03.930880 kernel: PCI: CLS 0 bytes, default 64
May 8 00:39:03.930887 kernel: kvm [1]: HYP mode not available
May 8 00:39:03.930895 kernel: Initialise system trusted keyrings
May 8 00:39:03.930903 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:39:03.930910 kernel: Key type asymmetric registered
May 8 00:39:03.930917 kernel: Asymmetric key parser 'x509' registered
May 8 00:39:03.930924 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 00:39:03.930932 kernel: io scheduler mq-deadline registered
May 8 00:39:03.930939 kernel: io scheduler kyber registered
May 8 00:39:03.930948 kernel: io scheduler bfq registered
May 8 00:39:03.930955 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:39:03.930963 kernel: ACPI: button: Power Button [PWRB]
May 8 00:39:03.930970 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 00:39:03.931035 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 00:39:03.931045 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:39:03.931052 kernel: thunder_xcv, ver 1.0
May 8 00:39:03.931060 kernel: thunder_bgx, ver 1.0
May 8 00:39:03.931067 kernel: nicpf, ver 1.0
May 8 00:39:03.931076 kernel: nicvf, ver 1.0
May 8 00:39:03.931160 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 00:39:03.931227 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:39:03 UTC (1746664743)
May 8 00:39:03.931237 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 00:39:03.931244 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 00:39:03.931252 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 00:39:03.931259 kernel: watchdog: Hard watchdog permanently disabled
May 8 00:39:03.931266 kernel: NET: Registered PF_INET6 protocol family
May 8 00:39:03.931276 kernel: Segment Routing with IPv6
May 8 00:39:03.931284 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:39:03.931291 kernel: NET: Registered PF_PACKET protocol family
May 8 00:39:03.931298 kernel: Key type dns_resolver registered
May 8 00:39:03.931305 kernel: registered taskstats version 1
May 8 00:39:03.931312 kernel: Loading compiled-in X.509 certificates
May 8 00:39:03.931333 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e350a514a19a92525be490be8fe368f9972240ea'
May 8 00:39:03.931342 kernel: Key type .fscrypt registered
May 8 00:39:03.931349 kernel: Key type fscrypt-provisioning registered
May 8 00:39:03.931358 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:39:03.931365 kernel: ima: Allocated hash algorithm: sha1
May 8 00:39:03.931372 kernel: ima: No architecture policies found
May 8 00:39:03.931380 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:39:03.931387 kernel: clk: Disabling unused clocks
May 8 00:39:03.931394 kernel: Freeing unused kernel memory: 39424K
May 8 00:39:03.931401 kernel: Run /init as init process
May 8 00:39:03.931408 kernel: with arguments:
May 8 00:39:03.931415 kernel: /init
May 8 00:39:03.931424 kernel: with environment:
May 8 00:39:03.931431 kernel: HOME=/
May 8 00:39:03.931438 kernel: TERM=linux
May 8 00:39:03.931445 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:39:03.931455 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:39:03.931464 systemd[1]: Detected virtualization kvm.
May 8 00:39:03.931472 systemd[1]: Detected architecture arm64.
May 8 00:39:03.931481 systemd[1]: Running in initrd.
May 8 00:39:03.931489 systemd[1]: No hostname configured, using default hostname.
May 8 00:39:03.931496 systemd[1]: Hostname set to .
May 8 00:39:03.931504 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:39:03.931512 systemd[1]: Queued start job for default target initrd.target.
May 8 00:39:03.931520 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:39:03.931528 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:39:03.931536 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:39:03.931545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:39:03.931553 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:39:03.931561 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:39:03.931570 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:39:03.931578 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:39:03.931586 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:39:03.931594 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:39:03.931603 systemd[1]: Reached target paths.target - Path Units.
May 8 00:39:03.931611 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:39:03.931619 systemd[1]: Reached target swap.target - Swaps.
May 8 00:39:03.931626 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:39:03.931634 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:39:03.931642 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:39:03.931650 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:39:03.931658 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:39:03.931666 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:39:03.931676 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:39:03.931683 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:39:03.931691 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:39:03.931699 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:39:03.931707 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:39:03.931715 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:39:03.931723 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:39:03.931730 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:39:03.931739 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:39:03.931748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:03.931755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:39:03.931763 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:39:03.931771 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:39:03.931798 systemd-journald[239]: Collecting audit messages is disabled.
May 8 00:39:03.931820 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:39:03.931828 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:03.931836 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:39:03.931846 systemd-journald[239]: Journal started
May 8 00:39:03.931864 systemd-journald[239]: Runtime Journal (/run/log/journal/b4ac6b4ee55641e2aef5155eb1e9b2ca) is 5.9M, max 47.3M, 41.4M free.
May 8 00:39:03.918231 systemd-modules-load[240]: Inserted module 'overlay'
May 8 00:39:03.934443 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:39:03.934466 kernel: Bridge firewalling registered
May 8 00:39:03.935854 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 8 00:39:03.936357 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:39:03.939715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:39:03.953523 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:39:03.955362 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:39:03.959873 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:39:03.962063 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:39:03.969630 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:39:03.971958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:39:03.973917 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:03.975891 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:39:03.987504 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:39:03.990080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:39:03.999083 dracut-cmdline[275]: dracut-dracut-053
May 8 00:39:04.001601 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:39:04.017870 systemd-resolved[277]: Positive Trust Anchors:
May 8 00:39:04.017884 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:39:04.017920 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:39:04.025309 systemd-resolved[277]: Defaulting to hostname 'linux'.
May 8 00:39:04.030961 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:39:04.032236 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:39:04.072344 kernel: SCSI subsystem initialized
May 8 00:39:04.077331 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:39:04.085345 kernel: iscsi: registered transport (tcp)
May 8 00:39:04.098340 kernel: iscsi: registered transport (qla4xxx)
May 8 00:39:04.098358 kernel: QLogic iSCSI HBA Driver
May 8 00:39:04.141281 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:39:04.157476 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:39:04.175339 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:39:04.175382 kernel: device-mapper: uevent: version 1.0.3
May 8 00:39:04.175394 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:39:04.223367 kernel: raid6: neonx8 gen() 15798 MB/s
May 8 00:39:04.240343 kernel: raid6: neonx4 gen() 15616 MB/s
May 8 00:39:04.257350 kernel: raid6: neonx2 gen() 13289 MB/s
May 8 00:39:04.274340 kernel: raid6: neonx1 gen() 10483 MB/s
May 8 00:39:04.291340 kernel: raid6: int64x8 gen() 6953 MB/s
May 8 00:39:04.308346 kernel: raid6: int64x4 gen() 7346 MB/s
May 8 00:39:04.325340 kernel: raid6: int64x2 gen() 6131 MB/s
May 8 00:39:04.342441 kernel: raid6: int64x1 gen() 5053 MB/s
May 8 00:39:04.342459 kernel: raid6: using algorithm neonx8 gen() 15798 MB/s
May 8 00:39:04.360431 kernel: raid6: .... xor() 11934 MB/s, rmw enabled
May 8 00:39:04.360446 kernel: raid6: using neon recovery algorithm
May 8 00:39:04.365337 kernel: xor: measuring software checksum speed
May 8 00:39:04.365355 kernel: 8regs : 17916 MB/sec
May 8 00:39:04.366522 kernel: 32regs : 19589 MB/sec
May 8 00:39:04.367758 kernel: arm64_neon : 26604 MB/sec
May 8 00:39:04.367770 kernel: xor: using function: arm64_neon (26604 MB/sec)
May 8 00:39:04.418357 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:39:04.428652 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:39:04.436508 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:39:04.448773 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 8 00:39:04.451995 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:39:04.455205 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:39:04.469814 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
May 8 00:39:04.498420 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:39:04.515492 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:39:04.554217 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:39:04.564534 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:39:04.576685 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:39:04.578986 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:39:04.580566 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:39:04.582689 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:39:04.591480 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:39:04.600041 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:39:04.608341 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 8 00:39:04.621021 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:39:04.621142 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:39:04.621155 kernel: GPT:9289727 != 19775487
May 8 00:39:04.621164 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:39:04.621180 kernel: GPT:9289727 != 19775487
May 8 00:39:04.621189 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:39:04.621199 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:39:04.608646 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:39:04.608764 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:04.615849 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:39:04.618975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:39:04.619157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:04.621828 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:04.634563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:04.649354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:04.652385 kernel: BTRFS: device fsid 0be52225-f929-4b89-9354-df54a643ece0 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (506)
May 8 00:39:04.655580 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (522)
May 8 00:39:04.658656 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:39:04.666755 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:39:04.671136 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:39:04.672451 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:39:04.678808 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:39:04.691501 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:39:04.693417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:39:04.711622 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:04.715030 disk-uuid[551]: Primary Header is updated.
May 8 00:39:04.715030 disk-uuid[551]: Secondary Entries is updated.
May 8 00:39:04.715030 disk-uuid[551]: Secondary Header is updated.
May 8 00:39:04.723337 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:39:05.733340 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:39:05.733390 disk-uuid[561]: The operation has completed successfully.
May 8 00:39:05.754399 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:39:05.754489 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:39:05.783480 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:39:05.786964 sh[574]: Success
May 8 00:39:05.813362 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 00:39:05.854176 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:39:05.863226 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:39:05.864770 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:39:05.875455 kernel: BTRFS info (device dm-0): first mount of filesystem 0be52225-f929-4b89-9354-df54a643ece0
May 8 00:39:05.875491 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 8 00:39:05.875504 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:39:05.877545 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:39:05.877572 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:39:05.882653 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:39:05.883718 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:39:05.896433 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:39:05.897978 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:39:05.909983 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:39:05.910023 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:39:05.910034 kernel: BTRFS info (device vda6): using free space tree
May 8 00:39:05.913473 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:39:05.928671 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:39:05.930579 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:39:05.936496 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:39:05.946461 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:39:06.016573 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:39:06.025475 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:39:06.068195 systemd-networkd[763]: lo: Link UP
May 8 00:39:06.068207 systemd-networkd[763]: lo: Gained carrier
May 8 00:39:06.068900 systemd-networkd[763]: Enumeration completed
May 8 00:39:06.069169 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:39:06.069388 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:06.069391 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:39:06.070226 systemd-networkd[763]: eth0: Link UP
May 8 00:39:06.070229 systemd-networkd[763]: eth0: Gained carrier
May 8 00:39:06.070236 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:06.071506 systemd[1]: Reached target network.target - Network.
May 8 00:39:06.088856 ignition[675]: Ignition 2.19.0
May 8 00:39:06.088866 ignition[675]: Stage: fetch-offline
May 8 00:39:06.088901 ignition[675]: no configs at "/usr/lib/ignition/base.d"
May 8 00:39:06.088913 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:39:06.091381 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:39:06.089063 ignition[675]: parsed url from cmdline: ""
May 8 00:39:06.089066 ignition[675]: no config URL provided
May 8 00:39:06.089071 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:39:06.089077 ignition[675]: no config at "/usr/lib/ignition/user.ign"
May 8 00:39:06.089108 ignition[675]: op(1): [started] loading QEMU firmware config module
May 8 00:39:06.089112 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:39:06.102973 ignition[675]: op(1): [finished] loading QEMU firmware config module
May 8 00:39:06.123020 ignition[675]: parsing config with SHA512: fc27444d71b853725f4e6e7116d8ee918d286c3d69b26fb618bb56d8542fb8aeca9c873e01c586affa27b782a89a2304b3268f379893cad928bb447665956e5e
May 8 00:39:06.126816 unknown[675]: fetched base config from "system"
May 8 00:39:06.126826 unknown[675]: fetched user config from "qemu"
May 8 00:39:06.127212 ignition[675]: fetch-offline: fetch-offline passed
May 8 00:39:06.129063 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:39:06.127287 ignition[675]: Ignition finished successfully
May 8 00:39:06.130591 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:39:06.139489 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:39:06.150559 ignition[774]: Ignition 2.19.0
May 8 00:39:06.150568 ignition[774]: Stage: kargs
May 8 00:39:06.150730 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 8 00:39:06.150738 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:39:06.151583 ignition[774]: kargs: kargs passed
May 8 00:39:06.155365 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:39:06.151626 ignition[774]: Ignition finished successfully
May 8 00:39:06.167497 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:39:06.176752 ignition[783]: Ignition 2.19.0
May 8 00:39:06.176762 ignition[783]: Stage: disks
May 8 00:39:06.176917 ignition[783]: no configs at "/usr/lib/ignition/base.d"
May 8 00:39:06.176925 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:39:06.179141 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:39:06.177735 ignition[783]: disks: disks passed
May 8 00:39:06.181239 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:39:06.177776 ignition[783]: Ignition finished successfully
May 8 00:39:06.182987 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:39:06.184709 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:39:06.186582 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:39:06.188241 systemd[1]: Reached target basic.target - Basic System.
May 8 00:39:06.201475 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:39:06.211223 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:39:06.211270 systemd-resolved[277]: Detected conflict on linux IN A 10.0.0.129
May 8 00:39:06.211279 systemd-resolved[277]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
May 8 00:39:06.215066 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:39:06.217769 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:39:06.265341 kernel: EXT4-fs (vda9): mounted filesystem f1546e2a-34df-485a-a644-37e10cd925e0 r/w with ordered data mode. Quota mode: none.
May 8 00:39:06.265366 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:39:06.266644 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:39:06.280408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:39:06.282124 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:39:06.283655 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:39:06.288336 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
May 8 00:39:06.283694 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:39:06.283715 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:39:06.294470 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:39:06.294516 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:39:06.294541 kernel: BTRFS info (device vda6): using free space tree
May 8 00:39:06.291148 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:39:06.296995 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:39:06.309462 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:39:06.311301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:39:06.347019 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:39:06.350006 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
May 8 00:39:06.352934 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:39:06.355897 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:39:06.424497 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:39:06.435450 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:39:06.437853 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:39:06.443337 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:39:06.456846 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:39:06.459372 ignition[914]: INFO : Ignition 2.19.0
May 8 00:39:06.459372 ignition[914]: INFO : Stage: mount
May 8 00:39:06.460980 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:39:06.460980 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:39:06.460980 ignition[914]: INFO : mount: mount passed
May 8 00:39:06.460980 ignition[914]: INFO : Ignition finished successfully
May 8 00:39:06.463121 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:39:06.477471 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:39:06.874425 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:39:06.882586 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:39:06.889124 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
May 8 00:39:06.889151 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:39:06.889162 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:39:06.890756 kernel: BTRFS info (device vda6): using free space tree
May 8 00:39:06.893331 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:39:06.894187 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:39:06.909746 ignition[945]: INFO : Ignition 2.19.0
May 8 00:39:06.909746 ignition[945]: INFO : Stage: files
May 8 00:39:06.911418 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:39:06.911418 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:39:06.911418 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:39:06.914802 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:39:06.914802 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:39:06.917861 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:39:06.919237 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:39:06.919237 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:39:06.918330 unknown[945]: wrote ssh authorized keys file for user: core
May 8 00:39:06.923154 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:39:06.923154 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 8 00:39:06.971758 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:39:07.193511 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:39:07.193511 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:39:07.198285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 8 00:39:07.535554 systemd-networkd[763]: eth0: Gained IPv6LL
May 8 00:39:07.541478 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 8 00:39:08.247122 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:39:08.247122 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 8 00:39:08.250838 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:39:08.250838 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:39:08.250838 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 8 00:39:08.250838 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 8 00:39:08.250838 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:39:08.250838 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:39:08.250838 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 8 00:39:08.250838 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:39:08.270182 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:39:08.273936 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:39:08.275501 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:39:08.275501 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:39:08.275501 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:39:08.275501 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:39:08.275501 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:39:08.275501 ignition[945]: INFO : files: files passed
May 8 00:39:08.275501 ignition[945]: INFO : Ignition finished successfully
May 8 00:39:08.277364 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:39:08.287490 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:39:08.289458 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:39:08.292795 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:39:08.292878 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:39:08.298304 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 00:39:08.300732 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:39:08.300732 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:39:08.303800 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:39:08.303010 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:39:08.305448 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:39:08.316514 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:39:08.341264 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:39:08.342386 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:39:08.344719 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:39:08.346479 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:39:08.348305 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:39:08.357497 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:39:08.370365 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:39:08.372859 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:39:08.384622 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:39:08.385870 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:39:08.387901 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:39:08.389641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:39:08.389771 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:39:08.392249 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:39:08.393596 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:39:08.395366 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:39:08.397210 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:39:08.398990 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:39:08.401050 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:39:08.402900 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:39:08.404904 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:39:08.406633 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:39:08.408522 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:39:08.410029 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:39:08.410174 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:39:08.412687 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:39:08.413817 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:39:08.415692 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:39:08.416383 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:39:08.417624 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:39:08.417760 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:39:08.420373 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:39:08.420496 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:39:08.422535 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:39:08.424358 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:39:08.427356 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:39:08.429488 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:39:08.430989 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:39:08.432894 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:39:08.432989 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:39:08.435100 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:39:08.435194 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:39:08.436718 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:39:08.436841 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:39:08.438664 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:39:08.438776 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:39:08.448514 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:39:08.449408 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:39:08.449556 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:39:08.454566 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:39:08.455433 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:39:08.455575 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:39:08.458444 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:39:08.463229 ignition[1000]: INFO : Ignition 2.19.0
May 8 00:39:08.463229 ignition[1000]: INFO : Stage: umount
May 8 00:39:08.463229 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:39:08.463229 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:39:08.463229 ignition[1000]: INFO : umount: umount passed
May 8 00:39:08.463229 ignition[1000]: INFO : Ignition finished successfully
May 8 00:39:08.458553 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:39:08.468616 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:39:08.468708 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:39:08.470877 systemd[1]: Stopped target network.target - Network.
May 8 00:39:08.472481 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:39:08.472550 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:39:08.474519 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:39:08.474574 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:39:08.476249 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:39:08.476300 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:39:08.478158 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:39:08.478209 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:39:08.480373 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:39:08.482212 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:39:08.484736 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:39:08.485361 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:39:08.485461 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:39:08.489683 systemd-networkd[763]: eth0: DHCPv6 lease lost
May 8 00:39:08.491953 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:39:08.492068 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:39:08.494012 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:39:08.494118 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:39:08.497486 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:39:08.497539 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:39:08.508458 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:39:08.509362 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:39:08.509484 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:39:08.511456 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:39:08.511507 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:39:08.513380 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:39:08.513429 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:39:08.515254 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:39:08.515302 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:39:08.517746 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:39:08.530176 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:39:08.530338 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:39:08.533599 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:39:08.533710 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:39:08.536718 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:39:08.536851 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:39:08.539790 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:39:08.539863 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:39:08.541531 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:39:08.541579 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:39:08.543420 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:39:08.543474 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:39:08.546162 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:39:08.546214 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:39:08.548853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:39:08.548903 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:08.551726 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:39:08.551774 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:39:08.564486 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:39:08.565537 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:39:08.565611 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:39:08.567685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:39:08.567734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:08.573053 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:39:08.573168 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:39:08.575565 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:39:08.578147 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:39:08.588400 systemd[1]: Switching root.
May 8 00:39:08.615600 systemd-journald[239]: Journal stopped
May 8 00:39:09.350302 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
May 8 00:39:09.350381 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:39:09.350399 kernel: SELinux: policy capability open_perms=1
May 8 00:39:09.350412 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:39:09.350421 kernel: SELinux: policy capability always_check_network=0
May 8 00:39:09.350433 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:39:09.350443 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:39:09.350452 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:39:09.350462 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:39:09.350474 kernel: audit: type=1403 audit(1746664748.758:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:39:09.350485 systemd[1]: Successfully loaded SELinux policy in 37.679ms.
May 8 00:39:09.350498 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.646ms.
May 8 00:39:09.350509 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:39:09.350520 systemd[1]: Detected virtualization kvm.
May 8 00:39:09.350533 systemd[1]: Detected architecture arm64.
May 8 00:39:09.350544 systemd[1]: Detected first boot.
May 8 00:39:09.350554 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:39:09.350564 zram_generator::config[1046]: No configuration found.
May 8 00:39:09.350575 systemd[1]: Populated /etc with preset unit settings.
May 8 00:39:09.350585 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:39:09.350596 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:39:09.350606 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:39:09.350620 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:39:09.350631 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:39:09.350641 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:39:09.350652 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:39:09.350662 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:39:09.350673 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:39:09.350684 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:39:09.350693 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:39:09.350704 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:39:09.350716 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:39:09.350726 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:39:09.350736 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:39:09.350747 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:39:09.350757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:39:09.350767 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 8 00:39:09.350777 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:39:09.350788 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:39:09.350798 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:39:09.350810 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:39:09.350821 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:39:09.350831 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:39:09.350847 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:39:09.350857 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:39:09.350867 systemd[1]: Reached target swap.target - Swaps.
May 8 00:39:09.350877 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:39:09.350888 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:39:09.350899 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:39:09.350910 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:39:09.350921 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:39:09.350932 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:39:09.350942 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:39:09.350953 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:39:09.350963 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:39:09.350974 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:39:09.350989 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:39:09.351001 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:39:09.351012 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:39:09.351023 systemd[1]: Reached target machines.target - Containers.
May 8 00:39:09.351034 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:39:09.351045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:39:09.351056 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:39:09.351072 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:39:09.351084 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:39:09.351096 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:39:09.351107 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:39:09.351117 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:39:09.351127 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:39:09.351138 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:39:09.351148 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:39:09.351159 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:39:09.351169 kernel: fuse: init (API version 7.39)
May 8 00:39:09.351179 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:39:09.351190 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:39:09.351201 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:39:09.351213 kernel: loop: module loaded
May 8 00:39:09.351223 kernel: ACPI: bus type drm_connector registered
May 8 00:39:09.351232 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:39:09.351243 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:39:09.351253 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:39:09.351263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:39:09.351275 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:39:09.351287 systemd[1]: Stopped verity-setup.service.
May 8 00:39:09.351315 systemd-journald[1124]: Collecting audit messages is disabled.
May 8 00:39:09.351365 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:39:09.351378 systemd-journald[1124]: Journal started
May 8 00:39:09.351400 systemd-journald[1124]: Runtime Journal (/run/log/journal/b4ac6b4ee55641e2aef5155eb1e9b2ca) is 5.9M, max 47.3M, 41.4M free.
May 8 00:39:09.114243 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:39:09.138414 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 00:39:09.138784 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:39:09.353680 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:39:09.354369 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:39:09.355655 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:39:09.356779 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:39:09.358088 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:39:09.359340 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:39:09.361383 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:39:09.362780 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:39:09.364312 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:39:09.364477 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:39:09.365948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:39:09.366097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:39:09.367569 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:39:09.367707 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:39:09.369182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:39:09.369314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:39:09.370921 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:39:09.371103 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:39:09.372654 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:39:09.372803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:39:09.374228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:39:09.375646 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:39:09.377444 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:39:09.394439 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:39:09.401461 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:39:09.403706 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:39:09.404862 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:39:09.404909 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:39:09.407317 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 8 00:39:09.409679 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:39:09.411945 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:39:09.413124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:39:09.414679 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:39:09.416956 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:39:09.418181 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:39:09.424563 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:39:09.425892 systemd-journald[1124]: Time spent on flushing to /var/log/journal/b4ac6b4ee55641e2aef5155eb1e9b2ca is 20.965ms for 852 entries.
May 8 00:39:09.425892 systemd-journald[1124]: System Journal (/var/log/journal/b4ac6b4ee55641e2aef5155eb1e9b2ca) is 8.0M, max 195.6M, 187.6M free.
May 8 00:39:09.453110 systemd-journald[1124]: Received client request to flush runtime journal.
May 8 00:39:09.426302 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:39:09.428118 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:39:09.443564 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:39:09.449566 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:39:09.452290 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:39:09.454224 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:39:09.455591 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:39:09.457271 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:39:09.460408 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:39:09.462252 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:39:09.466524 kernel: loop0: detected capacity change from 0 to 114328
May 8 00:39:09.469762 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:39:09.478599 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 8 00:39:09.480352 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:39:09.481507 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:39:09.484771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:39:09.492821 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:39:09.497633 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:39:09.498294 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 8 00:39:09.511710 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:39:09.513493 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 8 00:39:09.518411 kernel: loop1: detected capacity change from 0 to 194096
May 8 00:39:09.540335 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
May 8 00:39:09.540352 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
May 8 00:39:09.544878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:39:09.564362 kernel: loop2: detected capacity change from 0 to 114432
May 8 00:39:09.607351 kernel: loop3: detected capacity change from 0 to 114328
May 8 00:39:09.612342 kernel: loop4: detected capacity change from 0 to 194096
May 8 00:39:09.618353 kernel: loop5: detected capacity change from 0 to 114432
May 8 00:39:09.627851 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 8 00:39:09.628418 (sd-merge)[1181]: Merged extensions into '/usr'.
May 8 00:39:09.638784 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:39:09.638983 systemd[1]: Reloading...
May 8 00:39:09.701366 zram_generator::config[1207]: No configuration found.
May 8 00:39:09.706706 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:39:09.796297 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:39:09.832896 systemd[1]: Reloading finished in 193 ms.
May 8 00:39:09.860311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:39:09.861873 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:39:09.876527 systemd[1]: Starting ensure-sysext.service...
May 8 00:39:09.878591 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:39:09.893224 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
May 8 00:39:09.893241 systemd[1]: Reloading...
May 8 00:39:09.898708 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:39:09.898967 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:39:09.899630 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:39:09.899842 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
May 8 00:39:09.899894 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
May 8 00:39:09.902257 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:39:09.902269 systemd-tmpfiles[1242]: Skipping /boot
May 8 00:39:09.909660 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:39:09.909677 systemd-tmpfiles[1242]: Skipping /boot
May 8 00:39:09.935348 zram_generator::config[1268]: No configuration found.
May 8 00:39:10.025587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:39:10.062109 systemd[1]: Reloading finished in 168 ms.
May 8 00:39:10.079831 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:39:10.091800 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:39:10.099969 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:39:10.102964 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:39:10.105491 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:39:10.108633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:39:10.115497 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:39:10.118728 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:39:10.124238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:39:10.125612 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:39:10.128823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:39:10.131168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:39:10.132459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:39:10.135072 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:39:10.138786 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:39:10.142615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:39:10.142772 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:39:10.144291 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:39:10.146013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:39:10.146220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:39:10.150869 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:39:10.153460 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:39:10.160673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:39:10.162349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:39:10.162491 systemd-udevd[1311]: Using default interface naming scheme 'v255'.
May 8 00:39:10.164273 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:39:10.164460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:39:10.166312 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:39:10.169872 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:39:10.175696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:39:10.188647 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:39:10.193474 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:39:10.194753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:39:10.194840 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:39:10.194887 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:39:10.195080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:39:10.196786 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:39:10.199419 systemd[1]: Finished ensure-sysext.service.
May 8 00:39:10.203595 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:39:10.203719 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:39:10.205168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:39:10.205285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:39:10.215620 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:39:10.216799 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:39:10.216896 augenrules[1350]: No rules
May 8 00:39:10.221332 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:39:10.222715 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:39:10.236145 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 8 00:39:10.284350 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1357)
May 8 00:39:10.298017 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:39:10.300727 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:39:10.311537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:39:10.312485 systemd-resolved[1310]: Positive Trust Anchors:
May 8 00:39:10.312507 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:39:10.312540 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:39:10.319362 systemd-resolved[1310]: Defaulting to hostname 'linux'.
May 8 00:39:10.320534 systemd-networkd[1372]: lo: Link UP
May 8 00:39:10.320545 systemd-networkd[1372]: lo: Gained carrier
May 8 00:39:10.321261 systemd-networkd[1372]: Enumeration completed
May 8 00:39:10.323769 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:39:10.325397 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:39:10.326771 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:39:10.328402 systemd[1]: Reached target network.target - Network.
May 8 00:39:10.329443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:39:10.330524 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:10.330533 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:39:10.331258 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:10.331293 systemd-networkd[1372]: eth0: Link UP
May 8 00:39:10.331296 systemd-networkd[1372]: eth0: Gained carrier
May 8 00:39:10.331305 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:10.332214 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:39:10.337370 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:39:10.345377 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:39:10.346407 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection.
May 8 00:39:10.347106 systemd-timesyncd[1373]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 8 00:39:10.347155 systemd-timesyncd[1373]: Initial clock synchronization to Thu 2025-05-08 00:39:10.199435 UTC.
May 8 00:39:10.375050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:10.384027 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:39:10.386879 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:39:10.410356 lvm[1393]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:39:10.413430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:10.442902 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:39:10.446534 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:39:10.447663 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:39:10.448804 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:39:10.450053 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:39:10.451463 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:39:10.452614 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:39:10.453848 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:39:10.455099 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:39:10.455143 systemd[1]: Reached target paths.target - Path Units.
May 8 00:39:10.456055 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:39:10.459385 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:39:10.461932 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:39:10.472449 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:39:10.474881 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:39:10.476590 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:39:10.477787 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:39:10.478764 systemd[1]: Reached target basic.target - Basic System.
May 8 00:39:10.479746 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:39:10.479779 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:39:10.480818 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:39:10.482860 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:39:10.483468 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:39:10.486489 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:39:10.491316 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:39:10.492513 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:39:10.496612 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:39:10.497693 jq[1403]: false May 8 00:39:10.499460 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:39:10.504845 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:39:10.510260 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:39:10.511557 extend-filesystems[1404]: Found loop3 May 8 00:39:10.513455 extend-filesystems[1404]: Found loop4 May 8 00:39:10.513455 extend-filesystems[1404]: Found loop5 May 8 00:39:10.513455 extend-filesystems[1404]: Found vda May 8 00:39:10.513455 extend-filesystems[1404]: Found vda1 May 8 00:39:10.513455 extend-filesystems[1404]: Found vda2 May 8 00:39:10.513455 extend-filesystems[1404]: Found vda3 May 8 00:39:10.513455 extend-filesystems[1404]: Found usr May 8 00:39:10.513455 extend-filesystems[1404]: Found vda4 May 8 00:39:10.513455 extend-filesystems[1404]: Found vda6 May 8 00:39:10.513455 extend-filesystems[1404]: Found vda7 May 8 00:39:10.513455 extend-filesystems[1404]: Found vda9 May 8 00:39:10.513455 extend-filesystems[1404]: Checking size of /dev/vda9 May 8 00:39:10.516655 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:39:10.521736 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:39:10.522234 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. May 8 00:39:10.531430 dbus-daemon[1402]: [system] SELinux support is enabled May 8 00:39:10.537018 extend-filesystems[1404]: Resized partition /dev/vda9 May 8 00:39:10.533849 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:39:10.537795 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:39:10.542343 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1347) May 8 00:39:10.542550 extend-filesystems[1424]: resize2fs 1.47.1 (20-May-2024) May 8 00:39:10.543487 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:39:10.553362 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:39:10.554905 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:39:10.558578 jq[1425]: true May 8 00:39:10.567245 update_engine[1420]: I20250508 00:39:10.567023 1420 main.cc:92] Flatcar Update Engine starting May 8 00:39:10.575270 update_engine[1420]: I20250508 00:39:10.570497 1420 update_check_scheduler.cc:74] Next update check in 5m9s May 8 00:39:10.571779 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:39:10.571942 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:39:10.572213 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:39:10.572362 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:39:10.576686 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:39:10.576874 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 8 00:39:10.585201 jq[1429]: true
May 8 00:39:10.586872 (ntainerd)[1430]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:39:10.597780 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:39:10.599595 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:39:10.599639 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:39:10.601462 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:39:10.601491 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:39:10.612536 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:39:10.613995 tar[1428]: linux-arm64/helm
May 8 00:39:10.615641 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button)
May 8 00:39:10.618541 systemd-logind[1417]: New seat seat0.
May 8 00:39:10.619560 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:39:10.631603 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 8 00:39:10.647309 extend-filesystems[1424]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 00:39:10.647309 extend-filesystems[1424]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 00:39:10.647309 extend-filesystems[1424]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 8 00:39:10.651436 extend-filesystems[1404]: Resized filesystem in /dev/vda9
May 8 00:39:10.650370 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:39:10.650554 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 00:39:10.655659 bash[1456]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:39:10.657403 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:39:10.659167 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 8 00:39:10.662270 locksmithd[1446]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:39:10.787905 containerd[1430]: time="2025-05-08T00:39:10.786687880Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 8 00:39:10.813233 containerd[1430]: time="2025-05-08T00:39:10.812837680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814381520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814414200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814429680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814577960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814594640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814649000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814660320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814812520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814827560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814841240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:10.815509 containerd[1430]: time="2025-05-08T00:39:10.814850600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:39:10.815757 containerd[1430]: time="2025-05-08T00:39:10.814924280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:10.815757 containerd[1430]: time="2025-05-08T00:39:10.815121280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:39:10.815757 containerd[1430]: time="2025-05-08T00:39:10.815213240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:39:10.815757 containerd[1430]: time="2025-05-08T00:39:10.815226600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:39:10.815757 containerd[1430]: time="2025-05-08T00:39:10.815302280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:39:10.815757 containerd[1430]: time="2025-05-08T00:39:10.815366040Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:39:10.819099 containerd[1430]: time="2025-05-08T00:39:10.819057920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:39:10.819254 containerd[1430]: time="2025-05-08T00:39:10.819235880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:39:10.819454 containerd[1430]: time="2025-05-08T00:39:10.819436960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:39:10.819521 containerd[1430]: time="2025-05-08T00:39:10.819508720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:39:10.819622 containerd[1430]: time="2025-05-08T00:39:10.819607320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:39:10.819863 containerd[1430]: time="2025-05-08T00:39:10.819842440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:39:10.820487 containerd[1430]: time="2025-05-08T00:39:10.820464760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:39:10.820740 containerd[1430]: time="2025-05-08T00:39:10.820718520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:39:10.820869 containerd[1430]: time="2025-05-08T00:39:10.820852440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:39:10.820926 containerd[1430]: time="2025-05-08T00:39:10.820914040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:39:10.821039 containerd[1430]: time="2025-05-08T00:39:10.821022560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:39:10.821113 containerd[1430]: time="2025-05-08T00:39:10.821098680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:39:10.821214 containerd[1430]: time="2025-05-08T00:39:10.821198280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821269760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821291240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821306320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821345680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821361240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821383720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821396840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821408360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821420840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821433680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821454480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821466440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821479360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821722 containerd[1430]: time="2025-05-08T00:39:10.821492040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821973 containerd[1430]: time="2025-05-08T00:39:10.821506080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821973 containerd[1430]: time="2025-05-08T00:39:10.821517440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821973 containerd[1430]: time="2025-05-08T00:39:10.821529680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821973 containerd[1430]: time="2025-05-08T00:39:10.821549320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821973 containerd[1430]: time="2025-05-08T00:39:10.821566080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:39:10.821973 containerd[1430]: time="2025-05-08T00:39:10.821587000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821973 containerd[1430]: time="2025-05-08T00:39:10.821599120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:39:10.821973 containerd[1430]: time="2025-05-08T00:39:10.821611240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:39:10.822438 containerd[1430]: time="2025-05-08T00:39:10.822249160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:39:10.822438 containerd[1430]: time="2025-05-08T00:39:10.822277960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:39:10.822536 containerd[1430]: time="2025-05-08T00:39:10.822517360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:39:10.822671 containerd[1430]: time="2025-05-08T00:39:10.822575360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:39:10.822671 containerd[1430]: time="2025-05-08T00:39:10.822590800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:39:10.822671 containerd[1430]: time="2025-05-08T00:39:10.822608800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:39:10.822897 containerd[1430]: time="2025-05-08T00:39:10.822618680Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:39:10.822897 containerd[1430]: time="2025-05-08T00:39:10.822804120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:39:10.823467 containerd[1430]: time="2025-05-08T00:39:10.823400760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.823624480Z" level=info msg="Connect containerd service"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.823664440Z" level=info msg="using legacy CRI server"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.823671840Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.823750240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.824354240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.824779240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.824783720Z" level=info msg="Start subscribing containerd event"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.824842040Z" level=info msg="Start recovering state"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.824818200Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.824903480Z" level=info msg="Start event monitor"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.824913880Z" level=info msg="Start snapshots syncer"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.824922280Z" level=info msg="Start cni network conf syncer for default"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.824930840Z" level=info msg="Start streaming server"
May 8 00:39:10.827352 containerd[1430]: time="2025-05-08T00:39:10.825049200Z" level=info msg="containerd successfully booted in 0.039330s"
May 8 00:39:10.826187 systemd[1]: Started containerd.service - containerd container runtime.
May 8 00:39:10.960826 tar[1428]: linux-arm64/LICENSE
May 8 00:39:10.960826 tar[1428]: linux-arm64/README.md
May 8 00:39:10.971383 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:39:11.083893 sshd_keygen[1421]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:39:11.102266 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:39:11.116604 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:39:11.122155 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:39:11.122381 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:39:11.124990 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:39:11.137395 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:39:11.140492 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:39:11.145608 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 8 00:39:11.146883 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:39:11.695457 systemd-networkd[1372]: eth0: Gained IPv6LL
May 8 00:39:11.699386 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:39:11.701979 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:39:11.717884 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 8 00:39:11.720404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:11.722826 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:39:11.737339 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 8 00:39:11.737548 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 8 00:39:11.739747 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:39:11.745724 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:39:12.195696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:12.197278 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:39:12.200501 systemd[1]: Startup finished in 650ms (kernel) + 5.043s (initrd) + 3.483s (userspace) = 9.177s.
May 8 00:39:12.200628 (kubelet)[1514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:39:12.648773 kubelet[1514]: E0508 00:39:12.648665 1514 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:39:12.651488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:39:12.651630 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:39:16.609081 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 00:39:16.610164 systemd[1]: Started sshd@0-10.0.0.129:22-10.0.0.1:48048.service - OpenSSH per-connection server daemon (10.0.0.1:48048).
May 8 00:39:16.669757 sshd[1528]: Accepted publickey for core from 10.0.0.1 port 48048 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:39:16.673333 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:16.688636 systemd-logind[1417]: New session 1 of user core.
May 8 00:39:16.689655 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:39:16.701610 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 00:39:16.710885 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:39:16.714148 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:39:16.722377 (systemd)[1532]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:39:16.795086 systemd[1532]: Queued start job for default target default.target.
May 8 00:39:16.806227 systemd[1532]: Created slice app.slice - User Application Slice.
May 8 00:39:16.806258 systemd[1532]: Reached target paths.target - Paths.
May 8 00:39:16.806270 systemd[1532]: Reached target timers.target - Timers.
May 8 00:39:16.807526 systemd[1532]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:39:16.831210 systemd[1532]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:39:16.831357 systemd[1532]: Reached target sockets.target - Sockets.
May 8 00:39:16.831372 systemd[1532]: Reached target basic.target - Basic System.
May 8 00:39:16.831427 systemd[1532]: Reached target default.target - Main User Target.
May 8 00:39:16.831455 systemd[1532]: Startup finished in 103ms.
May 8 00:39:16.831565 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:39:16.832936 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:39:16.893717 systemd[1]: Started sshd@1-10.0.0.129:22-10.0.0.1:48056.service - OpenSSH per-connection server daemon (10.0.0.1:48056).
May 8 00:39:16.942097 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 48056 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:39:16.943642 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:16.947925 systemd-logind[1417]: New session 2 of user core.
May 8 00:39:16.965489 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:39:17.020588 sshd[1543]: pam_unix(sshd:session): session closed for user core
May 8 00:39:17.039882 systemd[1]: sshd@1-10.0.0.129:22-10.0.0.1:48056.service: Deactivated successfully.
May 8 00:39:17.042808 systemd[1]: session-2.scope: Deactivated successfully.
May 8 00:39:17.043398 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit.
May 8 00:39:17.045188 systemd[1]: Started sshd@2-10.0.0.129:22-10.0.0.1:48062.service - OpenSSH per-connection server daemon (10.0.0.1:48062).
May 8 00:39:17.045967 systemd-logind[1417]: Removed session 2.
May 8 00:39:17.077701 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 48062 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:39:17.079098 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:17.083157 systemd-logind[1417]: New session 3 of user core.
May 8 00:39:17.090554 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:39:17.139075 sshd[1550]: pam_unix(sshd:session): session closed for user core
May 8 00:39:17.148945 systemd[1]: sshd@2-10.0.0.129:22-10.0.0.1:48062.service: Deactivated successfully.
May 8 00:39:17.151379 systemd[1]: session-3.scope: Deactivated successfully.
May 8 00:39:17.154003 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit.
May 8 00:39:17.169702 systemd[1]: Started sshd@3-10.0.0.129:22-10.0.0.1:48074.service - OpenSSH per-connection server daemon (10.0.0.1:48074).
May 8 00:39:17.170540 systemd-logind[1417]: Removed session 3.
May 8 00:39:17.198847 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 48074 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:39:17.200253 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:17.204384 systemd-logind[1417]: New session 4 of user core.
May 8 00:39:17.218478 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:39:17.269641 sshd[1557]: pam_unix(sshd:session): session closed for user core
May 8 00:39:17.282898 systemd[1]: sshd@3-10.0.0.129:22-10.0.0.1:48074.service: Deactivated successfully.
May 8 00:39:17.285516 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:39:17.286374 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit.
May 8 00:39:17.294635 systemd[1]: Started sshd@4-10.0.0.129:22-10.0.0.1:48084.service - OpenSSH per-connection server daemon (10.0.0.1:48084).
May 8 00:39:17.295702 systemd-logind[1417]: Removed session 4.
May 8 00:39:17.325052 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 48084 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:39:17.325977 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:17.330536 systemd-logind[1417]: New session 5 of user core.
May 8 00:39:17.341729 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:39:17.405901 sudo[1567]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:39:17.406188 sudo[1567]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:39:17.723595 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:39:17.723662 (dockerd)[1586]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:39:17.976422 dockerd[1586]: time="2025-05-08T00:39:17.974889515Z" level=info msg="Starting up"
May 8 00:39:18.124826 dockerd[1586]: time="2025-05-08T00:39:18.124736828Z" level=info msg="Loading containers: start."
May 8 00:39:18.264350 kernel: Initializing XFRM netlink socket
May 8 00:39:18.343097 systemd-networkd[1372]: docker0: Link UP
May 8 00:39:18.363600 dockerd[1586]: time="2025-05-08T00:39:18.363559188Z" level=info msg="Loading containers: done."
May 8 00:39:18.389379 dockerd[1586]: time="2025-05-08T00:39:18.389301650Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:39:18.389519 dockerd[1586]: time="2025-05-08T00:39:18.389416969Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 8 00:39:18.389549 dockerd[1586]: time="2025-05-08T00:39:18.389526045Z" level=info msg="Daemon has completed initialization"
May 8 00:39:18.426499 dockerd[1586]: time="2025-05-08T00:39:18.426370268Z" level=info msg="API listen on /run/docker.sock"
May 8 00:39:18.426799 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:39:19.082528 containerd[1430]: time="2025-05-08T00:39:19.082410372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 8 00:39:19.729593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2074715669.mount: Deactivated successfully.
May 8 00:39:21.147895 containerd[1430]: time="2025-05-08T00:39:21.147848568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:21.149404 containerd[1430]: time="2025-05-08T00:39:21.149367902Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 8 00:39:21.151432 containerd[1430]: time="2025-05-08T00:39:21.150841420Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:21.154595 containerd[1430]: time="2025-05-08T00:39:21.154555646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:21.155828 containerd[1430]: time="2025-05-08T00:39:21.155770978Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.073311751s"
May 8 00:39:21.155828 containerd[1430]: time="2025-05-08T00:39:21.155806924Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 8 00:39:21.175121 containerd[1430]: time="2025-05-08T00:39:21.175036652Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 8 00:39:22.824896 containerd[1430]: time="2025-05-08T00:39:22.824844891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:22.825815 containerd[1430]: time="2025-05-08T00:39:22.825550119Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 8 00:39:22.826495 containerd[1430]: time="2025-05-08T00:39:22.826434251Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:22.829848 containerd[1430]: time="2025-05-08T00:39:22.829785146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:22.831426 containerd[1430]: time="2025-05-08T00:39:22.831389995Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.656309631s"
May 8 00:39:22.831426 containerd[1430]: time="2025-05-08T00:39:22.831424716Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 8 00:39:22.849411 containerd[1430]: time="2025-05-08T00:39:22.849372394Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 8 00:39:22.901917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:39:22.911504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:23.002467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:23.006188 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:39:23.042487 kubelet[1819]: E0508 00:39:23.042424 1819 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:39:23.045511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:39:23.045660 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:39:23.881799 containerd[1430]: time="2025-05-08T00:39:23.881755031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:23.882271 containerd[1430]: time="2025-05-08T00:39:23.882236535Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 8 00:39:23.882977 containerd[1430]: time="2025-05-08T00:39:23.882946344Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:23.885741 containerd[1430]: time="2025-05-08T00:39:23.885703373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:23.886903 containerd[1430]: time="2025-05-08T00:39:23.886874174Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.037466935s"
May 8 00:39:23.886963 containerd[1430]: time="2025-05-08T00:39:23.886903767Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 8 00:39:23.906130 containerd[1430]: time="2025-05-08T00:39:23.906051931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 8 00:39:24.846808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3802829992.mount: Deactivated successfully.
May 8 00:39:25.355087 containerd[1430]: time="2025-05-08T00:39:25.354707154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:25.382684 containerd[1430]: time="2025-05-08T00:39:25.382645446Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 8 00:39:25.383591 containerd[1430]: time="2025-05-08T00:39:25.383570019Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:25.385517 containerd[1430]: time="2025-05-08T00:39:25.385458854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:25.386333 containerd[1430]: time="2025-05-08T00:39:25.386288463Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.4801959s"
May 8 00:39:25.386412 containerd[1430]: time="2025-05-08T00:39:25.386335048Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 8 00:39:25.407831 containerd[1430]: time="2025-05-08T00:39:25.407791013Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 8 00:39:26.015367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591691656.mount: Deactivated successfully.
May 8 00:39:26.600003 containerd[1430]: time="2025-05-08T00:39:26.599939818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:26.600755 containerd[1430]: time="2025-05-08T00:39:26.600706443Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 8 00:39:26.601312 containerd[1430]: time="2025-05-08T00:39:26.601262847Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:26.606273 containerd[1430]: time="2025-05-08T00:39:26.606229388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:26.607066 containerd[1430]: time="2025-05-08T00:39:26.607023596Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.199189939s"
May 8 00:39:26.607104 containerd[1430]: time="2025-05-08T00:39:26.607067641Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 8 00:39:26.625163 containerd[1430]: time="2025-05-08T00:39:26.625125440Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 8 00:39:27.107689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577040465.mount: Deactivated successfully.
May 8 00:39:27.112472 containerd[1430]: time="2025-05-08T00:39:27.111696636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:27.113503 containerd[1430]: time="2025-05-08T00:39:27.113476292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 8 00:39:27.114276 containerd[1430]: time="2025-05-08T00:39:27.114250142Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:27.116299 containerd[1430]: time="2025-05-08T00:39:27.116265701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:27.117424 containerd[1430]: time="2025-05-08T00:39:27.117378873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 492.219152ms"
May 8 00:39:27.117424 containerd[1430]: time="2025-05-08T00:39:27.117416071Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 8 00:39:27.135635 containerd[1430]: time="2025-05-08T00:39:27.135593177Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 8 00:39:27.687408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount483662319.mount: Deactivated successfully.
May 8 00:39:30.294974 containerd[1430]: time="2025-05-08T00:39:30.294928961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:30.295725 containerd[1430]: time="2025-05-08T00:39:30.295698912Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 8 00:39:30.296816 containerd[1430]: time="2025-05-08T00:39:30.296750778Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:30.301355 containerd[1430]: time="2025-05-08T00:39:30.299769546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:30.302060 containerd[1430]: time="2025-05-08T00:39:30.302029335Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.166398756s"
May 8 00:39:30.302159 containerd[1430]: time="2025-05-08T00:39:30.302141271Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 8 00:39:33.142549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:39:33.150498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:33.276061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:33.279858 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:39:33.320415 kubelet[2040]: E0508 00:39:33.320366 2040 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:39:33.323018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:39:33.323160 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:39:35.642605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:35.652644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:35.666754 systemd[1]: Reloading requested from client PID 2055 ('systemctl') (unit session-5.scope)...
May 8 00:39:35.666770 systemd[1]: Reloading...
May 8 00:39:35.730393 zram_generator::config[2091]: No configuration found.
May 8 00:39:35.837386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:39:35.890708 systemd[1]: Reloading finished in 223 ms.
May 8 00:39:35.929516 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 8 00:39:35.929584 systemd[1]: kubelet.service: Failed with result 'signal'.
May 8 00:39:35.929788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:35.931332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:39:36.020470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:39:36.023940 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:39:36.061480 kubelet[2139]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:39:36.061480 kubelet[2139]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 00:39:36.061480 kubelet[2139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:39:36.062424 kubelet[2139]: I0508 00:39:36.062380 2139 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:39:36.422853 kubelet[2139]: I0508 00:39:36.422824 2139 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 8 00:39:36.423059 kubelet[2139]: I0508 00:39:36.422989 2139 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:39:36.424348 kubelet[2139]: I0508 00:39:36.423257 2139 server.go:927] "Client rotation is on, will bootstrap in background"
May 8 00:39:36.458164 kubelet[2139]: E0508 00:39:36.458138 2139 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.129:6443: connect: connection refused
May 8 00:39:36.458631 kubelet[2139]: I0508 00:39:36.458604 2139 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:39:36.468214 kubelet[2139]: I0508 00:39:36.468176 2139 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:39:36.468606 kubelet[2139]: I0508 00:39:36.468568 2139 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:39:36.468766 kubelet[2139]: I0508 00:39:36.468598 2139 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 8 00:39:36.468838 kubelet[2139]: I0508 00:39:36.468828 2139 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:39:36.468838 kubelet[2139]: I0508 00:39:36.468837 2139 container_manager_linux.go:301] "Creating device plugin manager"
May 8 00:39:36.469092 kubelet[2139]: I0508 00:39:36.469070 2139 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:39:36.471734 kubelet[2139]: I0508 00:39:36.471706 2139 kubelet.go:400] "Attempting to sync node with API server"
May 8 00:39:36.471734 kubelet[2139]: I0508 00:39:36.471726 2139 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:39:36.472033 kubelet[2139]: I0508 00:39:36.472019 2139 kubelet.go:312] "Adding apiserver pod source"
May 8 00:39:36.472176 kubelet[2139]: I0508 00:39:36.472156 2139 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:39:36.472525 kubelet[2139]: W0508 00:39:36.472427 2139 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused
May 8 00:39:36.472525 kubelet[2139]: E0508 00:39:36.472491 2139 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused
May 8 00:39:36.472640 kubelet[2139]: W0508 00:39:36.472518 2139 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused
May 8 00:39:36.472640 kubelet[2139]: E0508 00:39:36.472558 2139 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused
May 8 00:39:36.473146 kubelet[2139]: I0508 00:39:36.473131 2139 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 8 00:39:36.473497 kubelet[2139]: I0508 00:39:36.473486 2139 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:39:36.473601 kubelet[2139]: W0508 00:39:36.473590 2139 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:39:36.474380 kubelet[2139]: I0508 00:39:36.474356 2139 server.go:1264] "Started kubelet"
May 8 00:39:36.475596 kubelet[2139]: I0508 00:39:36.475454 2139 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:39:36.475681 kubelet[2139]: I0508 00:39:36.475586 2139 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:39:36.475897 kubelet[2139]: I0508 00:39:36.475878 2139 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:39:36.477459 kubelet[2139]: E0508 00:39:36.477146 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d665c27704f04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:39:36.474316548 +0000 UTC m=+0.447281520,LastTimestamp:2025-05-08 00:39:36.474316548 +0000 UTC m=+0.447281520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 8 00:39:36.478207 kubelet[2139]: I0508 00:39:36.477706 2139 server.go:455] "Adding debug handlers to kubelet server"
May 8 00:39:36.479389 kubelet[2139]: I0508 00:39:36.479370 2139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:39:36.481522 kubelet[2139]: E0508 00:39:36.481495 2139 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:39:36.481759 kubelet[2139]: I0508 00:39:36.481749 2139 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 8 00:39:36.482817 kubelet[2139]: I0508 00:39:36.482032 2139 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:39:36.482817 kubelet[2139]: I0508 00:39:36.482310 2139 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:39:36.482817 kubelet[2139]: W0508 00:39:36.482578 2139 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused
May 8 00:39:36.482817 kubelet[2139]: E0508 00:39:36.482618 2139 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused
May 8 00:39:36.483415 kubelet[2139]: E0508 00:39:36.483382 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="200ms"
May 8 00:39:36.484059 kubelet[2139]: I0508 00:39:36.484020 2139 factory.go:221] Registration of the systemd container factory successfully
May 8 00:39:36.484121 kubelet[2139]: I0508 00:39:36.484095 2139 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:39:36.485856 kubelet[2139]: I0508 00:39:36.485835 2139 factory.go:221] Registration of the containerd container factory successfully
May 8 00:39:36.486299 kubelet[2139]: E0508 00:39:36.486274 2139 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:39:36.499142 kubelet[2139]: I0508 00:39:36.498999 2139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:39:36.500146 kubelet[2139]: I0508 00:39:36.500117 2139 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 00:39:36.500282 kubelet[2139]: I0508 00:39:36.500271 2139 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 00:39:36.500462 kubelet[2139]: I0508 00:39:36.500388 2139 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:39:36.500503 kubelet[2139]: I0508 00:39:36.500480 2139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:39:36.500654 kubelet[2139]: I0508 00:39:36.500631 2139 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 00:39:36.500654 kubelet[2139]: I0508 00:39:36.500649 2139 kubelet.go:2337] "Starting kubelet main sync loop"
May 8 00:39:36.500711 kubelet[2139]: E0508 00:39:36.500684 2139 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:39:36.501331 kubelet[2139]: W0508 00:39:36.501214 2139 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused
May 8 00:39:36.501331 kubelet[2139]: E0508 00:39:36.501269 2139 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused
May 8 00:39:36.568075 kubelet[2139]: I0508 00:39:36.567978 2139 policy_none.go:49] "None policy: Start"
May 8 00:39:36.568787 kubelet[2139]: I0508 00:39:36.568746 2139 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 00:39:36.568787 kubelet[2139]: I0508 00:39:36.568817 2139 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:39:36.574281 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 8 00:39:36.582990 kubelet[2139]: I0508 00:39:36.582959 2139 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:39:36.583948 kubelet[2139]: E0508 00:39:36.583277 2139 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost"
May 8 00:39:36.583973 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 8 00:39:36.586472 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 8 00:39:36.596148 kubelet[2139]: I0508 00:39:36.596126 2139 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:39:36.596356 kubelet[2139]: I0508 00:39:36.596310 2139 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:39:36.596435 kubelet[2139]: I0508 00:39:36.596424 2139 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:39:36.597835 kubelet[2139]: E0508 00:39:36.597795 2139 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 8 00:39:36.601685 kubelet[2139]: I0508 00:39:36.601650 2139 topology_manager.go:215] "Topology Admit Handler" podUID="c17424a692e415d20b0f82d1bf33fc22" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 8 00:39:36.602567 kubelet[2139]: I0508 00:39:36.602498 2139 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 8 00:39:36.603284 kubelet[2139]: I0508 00:39:36.603215 2139 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 8 00:39:36.608629 systemd[1]: Created slice kubepods-burstable-podc17424a692e415d20b0f82d1bf33fc22.slice - libcontainer container kubepods-burstable-podc17424a692e415d20b0f82d1bf33fc22.slice.
May 8 00:39:36.626049 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice.
May 8 00:39:36.644459 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice.
May 8 00:39:36.684140 kubelet[2139]: E0508 00:39:36.684049 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="400ms"
May 8 00:39:36.783394 kubelet[2139]: I0508 00:39:36.783284 2139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:36.783394 kubelet[2139]: I0508 00:39:36.783368 2139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 8 00:39:36.783394 kubelet[2139]: I0508 00:39:36.783389 2139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:36.783562 kubelet[2139]: I0508 00:39:36.783407 2139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:36.783562 kubelet[2139]: I0508 00:39:36.783515 2139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c17424a692e415d20b0f82d1bf33fc22-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c17424a692e415d20b0f82d1bf33fc22\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:39:36.783562 kubelet[2139]: I0508 00:39:36.783537 2139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:36.783562 kubelet[2139]: I0508 00:39:36.783552 2139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:36.783658 kubelet[2139]: I0508 00:39:36.783566 2139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName:
\"kubernetes.io/host-path/c17424a692e415d20b0f82d1bf33fc22-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c17424a692e415d20b0f82d1bf33fc22\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:36.783658 kubelet[2139]: I0508 00:39:36.783584 2139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c17424a692e415d20b0f82d1bf33fc22-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c17424a692e415d20b0f82d1bf33fc22\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:36.784573 kubelet[2139]: I0508 00:39:36.784555 2139 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:36.784885 kubelet[2139]: E0508 00:39:36.784860 2139 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" May 8 00:39:36.924401 kubelet[2139]: E0508 00:39:36.924362 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:36.925028 containerd[1430]: time="2025-05-08T00:39:36.924986230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c17424a692e415d20b0f82d1bf33fc22,Namespace:kube-system,Attempt:0,}" May 8 00:39:36.927791 kubelet[2139]: E0508 00:39:36.927766 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:36.928174 containerd[1430]: time="2025-05-08T00:39:36.928146892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 00:39:36.946506 kubelet[2139]: E0508 00:39:36.946411 2139 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:36.946856 containerd[1430]: time="2025-05-08T00:39:36.946834056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 00:39:37.085298 kubelet[2139]: E0508 00:39:37.085254 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="800ms" May 8 00:39:37.186804 kubelet[2139]: I0508 00:39:37.186759 2139 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:37.187144 kubelet[2139]: E0508 00:39:37.187096 2139 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" May 8 00:39:37.388383 kubelet[2139]: W0508 00:39:37.388224 2139 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 8 00:39:37.388383 kubelet[2139]: E0508 00:39:37.388295 2139 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 8 00:39:37.476972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2315645849.mount: Deactivated successfully. 
May 8 00:39:37.480668 containerd[1430]: time="2025-05-08T00:39:37.480363582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:37.483282 containerd[1430]: time="2025-05-08T00:39:37.483248764Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 8 00:39:37.485457 containerd[1430]: time="2025-05-08T00:39:37.485402089Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:37.486820 containerd[1430]: time="2025-05-08T00:39:37.486787219Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:37.488842 containerd[1430]: time="2025-05-08T00:39:37.488799068Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:39:37.489596 containerd[1430]: time="2025-05-08T00:39:37.489546618Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:37.491006 containerd[1430]: time="2025-05-08T00:39:37.490980983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:39:37.492014 containerd[1430]: time="2025-05-08T00:39:37.491941606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:37.492949 
containerd[1430]: time="2025-05-08T00:39:37.492921076Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.716808ms" May 8 00:39:37.494610 containerd[1430]: time="2025-05-08T00:39:37.494473717Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.405238ms" May 8 00:39:37.502812 containerd[1430]: time="2025-05-08T00:39:37.502613313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.670895ms" May 8 00:39:37.616093 containerd[1430]: time="2025-05-08T00:39:37.615968610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:37.616093 containerd[1430]: time="2025-05-08T00:39:37.616021998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:37.616093 containerd[1430]: time="2025-05-08T00:39:37.616036932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:37.616728 containerd[1430]: time="2025-05-08T00:39:37.616588301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:37.616728 containerd[1430]: time="2025-05-08T00:39:37.616649914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:37.616728 containerd[1430]: time="2025-05-08T00:39:37.616664649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:37.617609 containerd[1430]: time="2025-05-08T00:39:37.617551639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:37.617761 containerd[1430]: time="2025-05-08T00:39:37.617365360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:37.619188 containerd[1430]: time="2025-05-08T00:39:37.618120297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:37.619188 containerd[1430]: time="2025-05-08T00:39:37.618185385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:37.619188 containerd[1430]: time="2025-05-08T00:39:37.618200599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:37.619188 containerd[1430]: time="2025-05-08T00:39:37.618269480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:37.638858 systemd[1]: Started cri-containerd-3700b3db46b8206ff45d361682ff4802b667d5e8390e4531a324045a2eaf50bf.scope - libcontainer container 3700b3db46b8206ff45d361682ff4802b667d5e8390e4531a324045a2eaf50bf. 
May 8 00:39:37.640946 systemd[1]: Started cri-containerd-bc463ce40dbe0aee1ec525d250d2499cfb1279e6c7d9e8ea3eea62db2b939eeb.scope - libcontainer container bc463ce40dbe0aee1ec525d250d2499cfb1279e6c7d9e8ea3eea62db2b939eeb. May 8 00:39:37.644393 systemd[1]: Started cri-containerd-aebefdfd5c9897864a9daf67dc3bb8bf146311feef2e873d6fde4e5095a36266.scope - libcontainer container aebefdfd5c9897864a9daf67dc3bb8bf146311feef2e873d6fde4e5095a36266. May 8 00:39:37.672295 containerd[1430]: time="2025-05-08T00:39:37.672106829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"3700b3db46b8206ff45d361682ff4802b667d5e8390e4531a324045a2eaf50bf\"" May 8 00:39:37.673252 kubelet[2139]: E0508 00:39:37.673215 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:37.674490 containerd[1430]: time="2025-05-08T00:39:37.674465200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc463ce40dbe0aee1ec525d250d2499cfb1279e6c7d9e8ea3eea62db2b939eeb\"" May 8 00:39:37.675106 kubelet[2139]: E0508 00:39:37.675057 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:37.676848 containerd[1430]: time="2025-05-08T00:39:37.676782282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c17424a692e415d20b0f82d1bf33fc22,Namespace:kube-system,Attempt:0,} returns sandbox id \"aebefdfd5c9897864a9daf67dc3bb8bf146311feef2e873d6fde4e5095a36266\"" May 8 00:39:37.679369 kubelet[2139]: E0508 00:39:37.679259 2139 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:37.679727 containerd[1430]: time="2025-05-08T00:39:37.679677048Z" level=info msg="CreateContainer within sandbox \"bc463ce40dbe0aee1ec525d250d2499cfb1279e6c7d9e8ea3eea62db2b939eeb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:39:37.679853 containerd[1430]: time="2025-05-08T00:39:37.679695017Z" level=info msg="CreateContainer within sandbox \"3700b3db46b8206ff45d361682ff4802b667d5e8390e4531a324045a2eaf50bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:39:37.681835 containerd[1430]: time="2025-05-08T00:39:37.681810407Z" level=info msg="CreateContainer within sandbox \"aebefdfd5c9897864a9daf67dc3bb8bf146311feef2e873d6fde4e5095a36266\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:39:37.690600 kubelet[2139]: W0508 00:39:37.690548 2139 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 8 00:39:37.690600 kubelet[2139]: E0508 00:39:37.690606 2139 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 8 00:39:37.696835 containerd[1430]: time="2025-05-08T00:39:37.696702153Z" level=info msg="CreateContainer within sandbox \"bc463ce40dbe0aee1ec525d250d2499cfb1279e6c7d9e8ea3eea62db2b939eeb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"847f5108162634f7b1d621e32acfb06b9bbd8236984fb60b2847280e28debef8\"" May 8 00:39:37.697406 containerd[1430]: time="2025-05-08T00:39:37.697316413Z" 
level=info msg="StartContainer for \"847f5108162634f7b1d621e32acfb06b9bbd8236984fb60b2847280e28debef8\"" May 8 00:39:37.698623 containerd[1430]: time="2025-05-08T00:39:37.698526285Z" level=info msg="CreateContainer within sandbox \"3700b3db46b8206ff45d361682ff4802b667d5e8390e4531a324045a2eaf50bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b277220693f9f0d43e30f8159d88ffc8226d2e5f7f8a123255f84f799f315e8d\"" May 8 00:39:37.699025 containerd[1430]: time="2025-05-08T00:39:37.698860948Z" level=info msg="StartContainer for \"b277220693f9f0d43e30f8159d88ffc8226d2e5f7f8a123255f84f799f315e8d\"" May 8 00:39:37.699647 containerd[1430]: time="2025-05-08T00:39:37.699601829Z" level=info msg="CreateContainer within sandbox \"aebefdfd5c9897864a9daf67dc3bb8bf146311feef2e873d6fde4e5095a36266\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"637f831e2a10aea9b2907b8adc767d11d5a0331c269aedaec4b22ffb322cf4ff\"" May 8 00:39:37.699934 containerd[1430]: time="2025-05-08T00:39:37.699912254Z" level=info msg="StartContainer for \"637f831e2a10aea9b2907b8adc767d11d5a0331c269aedaec4b22ffb322cf4ff\"" May 8 00:39:37.724486 systemd[1]: Started cri-containerd-b277220693f9f0d43e30f8159d88ffc8226d2e5f7f8a123255f84f799f315e8d.scope - libcontainer container b277220693f9f0d43e30f8159d88ffc8226d2e5f7f8a123255f84f799f315e8d. May 8 00:39:37.728063 systemd[1]: Started cri-containerd-637f831e2a10aea9b2907b8adc767d11d5a0331c269aedaec4b22ffb322cf4ff.scope - libcontainer container 637f831e2a10aea9b2907b8adc767d11d5a0331c269aedaec4b22ffb322cf4ff. May 8 00:39:37.728892 systemd[1]: Started cri-containerd-847f5108162634f7b1d621e32acfb06b9bbd8236984fb60b2847280e28debef8.scope - libcontainer container 847f5108162634f7b1d621e32acfb06b9bbd8236984fb60b2847280e28debef8. 
May 8 00:39:37.755250 kubelet[2139]: W0508 00:39:37.755197 2139 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 8 00:39:37.755433 kubelet[2139]: E0508 00:39:37.755282 2139 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 8 00:39:37.777343 containerd[1430]: time="2025-05-08T00:39:37.777129504Z" level=info msg="StartContainer for \"847f5108162634f7b1d621e32acfb06b9bbd8236984fb60b2847280e28debef8\" returns successfully" May 8 00:39:37.777343 containerd[1430]: time="2025-05-08T00:39:37.777263033Z" level=info msg="StartContainer for \"637f831e2a10aea9b2907b8adc767d11d5a0331c269aedaec4b22ffb322cf4ff\" returns successfully" May 8 00:39:37.777343 containerd[1430]: time="2025-05-08T00:39:37.777291744Z" level=info msg="StartContainer for \"b277220693f9f0d43e30f8159d88ffc8226d2e5f7f8a123255f84f799f315e8d\" returns successfully" May 8 00:39:37.886053 kubelet[2139]: E0508 00:39:37.885976 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="1.6s" May 8 00:39:37.909673 kubelet[2139]: W0508 00:39:37.909602 2139 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 8 00:39:37.909673 kubelet[2139]: E0508 00:39:37.909654 2139 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to 
list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused May 8 00:39:37.992100 kubelet[2139]: I0508 00:39:37.992035 2139 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:38.512640 kubelet[2139]: E0508 00:39:38.512601 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:38.512939 kubelet[2139]: E0508 00:39:38.512718 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:38.513018 kubelet[2139]: E0508 00:39:38.512994 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:39.512919 kubelet[2139]: E0508 00:39:39.512891 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:39.744449 kubelet[2139]: E0508 00:39:39.744413 2139 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:39:39.847373 kubelet[2139]: I0508 00:39:39.847243 2139 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:39:39.857585 kubelet[2139]: E0508 00:39:39.857552 2139 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:39.958044 kubelet[2139]: E0508 00:39:39.957992 2139 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:40.058654 kubelet[2139]: E0508 00:39:40.058615 
2139 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:40.159089 kubelet[2139]: E0508 00:39:40.159050 2139 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:40.259500 kubelet[2139]: E0508 00:39:40.259471 2139 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:40.359961 kubelet[2139]: E0508 00:39:40.359924 2139 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:40.460660 kubelet[2139]: E0508 00:39:40.460568 2139 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:40.561602 kubelet[2139]: E0508 00:39:40.561563 2139 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:41.356305 kubelet[2139]: E0508 00:39:41.356274 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:41.475589 kubelet[2139]: I0508 00:39:41.475531 2139 apiserver.go:52] "Watching apiserver" May 8 00:39:41.482158 kubelet[2139]: I0508 00:39:41.482141 2139 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:39:41.515053 kubelet[2139]: E0508 00:39:41.515021 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:41.982747 systemd[1]: Reloading requested from client PID 2412 ('systemctl') (unit session-5.scope)... May 8 00:39:41.982762 systemd[1]: Reloading... May 8 00:39:42.037368 zram_generator::config[2454]: No configuration found. 
May 8 00:39:42.164248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:42.203187 kubelet[2139]: E0508 00:39:42.203129 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:42.231611 systemd[1]: Reloading finished in 248 ms. May 8 00:39:42.261769 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:42.278240 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:39:42.278506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:42.290646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:42.384658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:42.389030 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:39:42.445151 kubelet[2493]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:39:42.445151 kubelet[2493]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:39:42.445151 kubelet[2493]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:39:42.445151 kubelet[2493]: I0508 00:39:42.444227 2493 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:39:42.450064 kubelet[2493]: I0508 00:39:42.450018 2493 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:39:42.450064 kubelet[2493]: I0508 00:39:42.450046 2493 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:39:42.450233 kubelet[2493]: I0508 00:39:42.450217 2493 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:39:42.452966 kubelet[2493]: I0508 00:39:42.452808 2493 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:39:42.454023 kubelet[2493]: I0508 00:39:42.453986 2493 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:39:42.458346 kubelet[2493]: I0508 00:39:42.458309 2493 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:39:42.458551 kubelet[2493]: I0508 00:39:42.458517 2493 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:39:42.458688 kubelet[2493]: I0508 00:39:42.458543 2493 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:39:42.458761 kubelet[2493]: I0508 00:39:42.458691 2493 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:39:42.458761 
kubelet[2493]: I0508 00:39:42.458700 2493 container_manager_linux.go:301] "Creating device plugin manager"
May 8 00:39:42.458761 kubelet[2493]: I0508 00:39:42.458727 2493 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:39:42.458830 kubelet[2493]: I0508 00:39:42.458822 2493 kubelet.go:400] "Attempting to sync node with API server"
May 8 00:39:42.458852 kubelet[2493]: I0508 00:39:42.458832 2493 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:39:42.458871 kubelet[2493]: I0508 00:39:42.458857 2493 kubelet.go:312] "Adding apiserver pod source"
May 8 00:39:42.458901 kubelet[2493]: I0508 00:39:42.458872 2493 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:39:42.459652 kubelet[2493]: I0508 00:39:42.459382 2493 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 8 00:39:42.459652 kubelet[2493]: I0508 00:39:42.459532 2493 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:39:42.461696 kubelet[2493]: I0508 00:39:42.459893 2493 server.go:1264] "Started kubelet"
May 8 00:39:42.461696 kubelet[2493]: I0508 00:39:42.461209 2493 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:39:42.463364 kubelet[2493]: I0508 00:39:42.462525 2493 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:39:42.467243 kubelet[2493]: I0508 00:39:42.467203 2493 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 8 00:39:42.469374 kubelet[2493]: I0508 00:39:42.467561 2493 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:39:42.469374 kubelet[2493]: I0508 00:39:42.467705 2493 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:39:42.469374 kubelet[2493]: I0508 00:39:42.467746 2493 server.go:455] "Adding debug handlers to kubelet server"
May 8 00:39:42.469374 kubelet[2493]: I0508 00:39:42.469087 2493 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:39:42.469374 kubelet[2493]: I0508 00:39:42.469285 2493 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:39:42.479781 kubelet[2493]: I0508 00:39:42.479744 2493 factory.go:221] Registration of the systemd container factory successfully
May 8 00:39:42.479883 kubelet[2493]: I0508 00:39:42.479839 2493 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:39:42.481360 kubelet[2493]: I0508 00:39:42.481314 2493 factory.go:221] Registration of the containerd container factory successfully
May 8 00:39:42.486832 kubelet[2493]: E0508 00:39:42.486801 2493 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:39:42.490760 kubelet[2493]: I0508 00:39:42.490724 2493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:39:42.492779 kubelet[2493]: I0508 00:39:42.492740 2493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:39:42.493432 kubelet[2493]: I0508 00:39:42.493412 2493 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 00:39:42.493484 kubelet[2493]: I0508 00:39:42.493464 2493 kubelet.go:2337] "Starting kubelet main sync loop"
May 8 00:39:42.493537 kubelet[2493]: E0508 00:39:42.493513 2493 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:39:42.517875 kubelet[2493]: I0508 00:39:42.517617 2493 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 00:39:42.517875 kubelet[2493]: I0508 00:39:42.517639 2493 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 00:39:42.517875 kubelet[2493]: I0508 00:39:42.517660 2493 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:39:42.518287 kubelet[2493]: I0508 00:39:42.518143 2493 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 8 00:39:42.518287 kubelet[2493]: I0508 00:39:42.518164 2493 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 8 00:39:42.518287 kubelet[2493]: I0508 00:39:42.518183 2493 policy_none.go:49] "None policy: Start"
May 8 00:39:42.519019 kubelet[2493]: I0508 00:39:42.518996 2493 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 00:39:42.519019 kubelet[2493]: I0508 00:39:42.519023 2493 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:39:42.519246 kubelet[2493]: I0508 00:39:42.519211 2493 state_mem.go:75] "Updated machine memory state"
May 8 00:39:42.524923 kubelet[2493]: I0508 00:39:42.524795 2493 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:39:42.525024 kubelet[2493]: I0508 00:39:42.524964 2493 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:39:42.525195 kubelet[2493]: I0508 00:39:42.525065 2493 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:39:42.571125 kubelet[2493]: I0508 00:39:42.571098 2493 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:39:42.581676 kubelet[2493]: I0508 00:39:42.581494 2493 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
May 8 00:39:42.582356 kubelet[2493]: I0508 00:39:42.581885 2493 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 8 00:39:42.594022 kubelet[2493]: I0508 00:39:42.593931 2493 topology_manager.go:215] "Topology Admit Handler" podUID="c17424a692e415d20b0f82d1bf33fc22" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 8 00:39:42.594126 kubelet[2493]: I0508 00:39:42.594076 2493 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 8 00:39:42.594149 kubelet[2493]: I0508 00:39:42.594140 2493 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 8 00:39:42.602127 kubelet[2493]: E0508 00:39:42.602032 2493 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 8 00:39:42.602609 kubelet[2493]: E0508 00:39:42.602594 2493 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 8 00:39:42.668093 kubelet[2493]: I0508 00:39:42.667984 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:42.668093 kubelet[2493]: I0508 00:39:42.668029 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c17424a692e415d20b0f82d1bf33fc22-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c17424a692e415d20b0f82d1bf33fc22\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:39:42.668093 kubelet[2493]: I0508 00:39:42.668051 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c17424a692e415d20b0f82d1bf33fc22-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c17424a692e415d20b0f82d1bf33fc22\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:39:42.668093 kubelet[2493]: I0508 00:39:42.668083 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:42.668093 kubelet[2493]: I0508 00:39:42.668102 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:42.668351 kubelet[2493]: I0508 00:39:42.668118 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c17424a692e415d20b0f82d1bf33fc22-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c17424a692e415d20b0f82d1bf33fc22\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:39:42.668351 kubelet[2493]: I0508 00:39:42.668136 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:42.668351 kubelet[2493]: I0508 00:39:42.668175 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:42.668351 kubelet[2493]: I0508 00:39:42.668228 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 8 00:39:42.904392 kubelet[2493]: E0508 00:39:42.903873 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:42.904392 kubelet[2493]: E0508 00:39:42.904026 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:42.904632 kubelet[2493]: E0508 00:39:42.904597 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:43.459607 kubelet[2493]: I0508 00:39:43.459569 2493 apiserver.go:52] "Watching apiserver"
May 8 00:39:43.467824 kubelet[2493]: I0508 00:39:43.467774 2493 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 00:39:43.506908 kubelet[2493]: E0508 00:39:43.506826 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:43.515675 kubelet[2493]: E0508 00:39:43.513867 2493 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 8 00:39:43.515675 kubelet[2493]: E0508 00:39:43.513922 2493 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 8 00:39:43.515675 kubelet[2493]: E0508 00:39:43.514794 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:43.515675 kubelet[2493]: E0508 00:39:43.514922 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:43.542014 kubelet[2493]: I0508 00:39:43.541955 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.541937494 podStartE2EDuration="2.541937494s" podCreationTimestamp="2025-05-08 00:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:43.527977239 +0000 UTC m=+1.135730749" watchObservedRunningTime="2025-05-08 00:39:43.541937494 +0000 UTC m=+1.149691004"
May 8 00:39:43.551896 kubelet[2493]: I0508 00:39:43.551295 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.551280596 podStartE2EDuration="1.551280596s" podCreationTimestamp="2025-05-08 00:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:43.542451972 +0000 UTC m=+1.150205482" watchObservedRunningTime="2025-05-08 00:39:43.551280596 +0000 UTC m=+1.159034106"
May 8 00:39:43.563102 kubelet[2493]: I0508 00:39:43.563041 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.563025445 podStartE2EDuration="1.563025445s" podCreationTimestamp="2025-05-08 00:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:43.552168956 +0000 UTC m=+1.159922465" watchObservedRunningTime="2025-05-08 00:39:43.563025445 +0000 UTC m=+1.170778955"
May 8 00:39:43.751220 sudo[1567]: pam_unix(sudo:session): session closed for user root
May 8 00:39:43.753090 sshd[1564]: pam_unix(sshd:session): session closed for user core
May 8 00:39:43.755647 systemd[1]: sshd@4-10.0.0.129:22-10.0.0.1:48084.service: Deactivated successfully.
May 8 00:39:43.757253 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:39:43.758406 systemd[1]: session-5.scope: Consumed 6.735s CPU time, 190.8M memory peak, 0B memory swap peak.
May 8 00:39:43.759503 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit.
May 8 00:39:43.760582 systemd-logind[1417]: Removed session 5.
May 8 00:39:44.508139 kubelet[2493]: E0508 00:39:44.508097 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:44.508506 kubelet[2493]: E0508 00:39:44.508297 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:45.510672 kubelet[2493]: E0508 00:39:45.510542 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:45.511861 kubelet[2493]: E0508 00:39:45.511753 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:46.511982 kubelet[2493]: E0508 00:39:46.511882 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:50.050098 kubelet[2493]: E0508 00:39:50.049995 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:50.519488 kubelet[2493]: E0508 00:39:50.519455 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:55.519976 kubelet[2493]: E0508 00:39:55.519938 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:55.544060 kubelet[2493]: E0508 00:39:55.543977 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:56.030076 update_engine[1420]: I20250508 00:39:56.029996 1420 update_attempter.cc:509] Updating boot flags...
May 8 00:39:56.047351 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2573)
May 8 00:39:56.078698 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2577)
May 8 00:39:56.529786 kubelet[2493]: E0508 00:39:56.529748 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:56.902285 kubelet[2493]: I0508 00:39:56.902243 2493 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 00:39:56.902753 containerd[1430]: time="2025-05-08T00:39:56.902651588Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 8 00:39:56.903090 kubelet[2493]: I0508 00:39:56.902846 2493 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 00:39:57.566211 kubelet[2493]: I0508 00:39:57.565898 2493 topology_manager.go:215] "Topology Admit Handler" podUID="503b6c02-a929-4e25-8f70-dfb759309dbd" podNamespace="kube-system" podName="kube-proxy-5wbrx"
May 8 00:39:57.566711 kubelet[2493]: I0508 00:39:57.566303 2493 topology_manager.go:215] "Topology Admit Handler" podUID="05e1ecce-dded-4164-8f5a-aa3560b89afa" podNamespace="kube-flannel" podName="kube-flannel-ds-6cntk"
May 8 00:39:57.582556 systemd[1]: Created slice kubepods-burstable-pod05e1ecce_dded_4164_8f5a_aa3560b89afa.slice - libcontainer container kubepods-burstable-pod05e1ecce_dded_4164_8f5a_aa3560b89afa.slice.
May 8 00:39:57.590174 systemd[1]: Created slice kubepods-besteffort-pod503b6c02_a929_4e25_8f70_dfb759309dbd.slice - libcontainer container kubepods-besteffort-pod503b6c02_a929_4e25_8f70_dfb759309dbd.slice.
May 8 00:39:57.767765 kubelet[2493]: I0508 00:39:57.767713 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z589l\" (UniqueName: \"kubernetes.io/projected/503b6c02-a929-4e25-8f70-dfb759309dbd-kube-api-access-z589l\") pod \"kube-proxy-5wbrx\" (UID: \"503b6c02-a929-4e25-8f70-dfb759309dbd\") " pod="kube-system/kube-proxy-5wbrx"
May 8 00:39:57.767765 kubelet[2493]: I0508 00:39:57.767766 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/05e1ecce-dded-4164-8f5a-aa3560b89afa-cni\") pod \"kube-flannel-ds-6cntk\" (UID: \"05e1ecce-dded-4164-8f5a-aa3560b89afa\") " pod="kube-flannel/kube-flannel-ds-6cntk"
May 8 00:39:57.767957 kubelet[2493]: I0508 00:39:57.767789 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/05e1ecce-dded-4164-8f5a-aa3560b89afa-cni-plugin\") pod \"kube-flannel-ds-6cntk\" (UID: \"05e1ecce-dded-4164-8f5a-aa3560b89afa\") " pod="kube-flannel/kube-flannel-ds-6cntk"
May 8 00:39:57.767957 kubelet[2493]: I0508 00:39:57.767816 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk5hq\" (UniqueName: \"kubernetes.io/projected/05e1ecce-dded-4164-8f5a-aa3560b89afa-kube-api-access-mk5hq\") pod \"kube-flannel-ds-6cntk\" (UID: \"05e1ecce-dded-4164-8f5a-aa3560b89afa\") " pod="kube-flannel/kube-flannel-ds-6cntk"
May 8 00:39:57.767957 kubelet[2493]: I0508 00:39:57.767849 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/503b6c02-a929-4e25-8f70-dfb759309dbd-kube-proxy\") pod \"kube-proxy-5wbrx\" (UID: \"503b6c02-a929-4e25-8f70-dfb759309dbd\") " pod="kube-system/kube-proxy-5wbrx"
May 8 00:39:57.767957 kubelet[2493]: I0508 00:39:57.767870 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/503b6c02-a929-4e25-8f70-dfb759309dbd-xtables-lock\") pod \"kube-proxy-5wbrx\" (UID: \"503b6c02-a929-4e25-8f70-dfb759309dbd\") " pod="kube-system/kube-proxy-5wbrx"
May 8 00:39:57.767957 kubelet[2493]: I0508 00:39:57.767902 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/05e1ecce-dded-4164-8f5a-aa3560b89afa-run\") pod \"kube-flannel-ds-6cntk\" (UID: \"05e1ecce-dded-4164-8f5a-aa3560b89afa\") " pod="kube-flannel/kube-flannel-ds-6cntk"
May 8 00:39:57.768105 kubelet[2493]: I0508 00:39:57.767921 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/05e1ecce-dded-4164-8f5a-aa3560b89afa-flannel-cfg\") pod \"kube-flannel-ds-6cntk\" (UID: \"05e1ecce-dded-4164-8f5a-aa3560b89afa\") " pod="kube-flannel/kube-flannel-ds-6cntk"
May 8 00:39:57.768105 kubelet[2493]: I0508 00:39:57.767936 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05e1ecce-dded-4164-8f5a-aa3560b89afa-xtables-lock\") pod \"kube-flannel-ds-6cntk\" (UID: \"05e1ecce-dded-4164-8f5a-aa3560b89afa\") " pod="kube-flannel/kube-flannel-ds-6cntk"
May 8 00:39:57.768105 kubelet[2493]: I0508 00:39:57.767959 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/503b6c02-a929-4e25-8f70-dfb759309dbd-lib-modules\") pod \"kube-proxy-5wbrx\" (UID: \"503b6c02-a929-4e25-8f70-dfb759309dbd\") " pod="kube-system/kube-proxy-5wbrx"
May 8 00:39:57.887026 kubelet[2493]: E0508 00:39:57.886921 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:57.887541 containerd[1430]: time="2025-05-08T00:39:57.887489255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6cntk,Uid:05e1ecce-dded-4164-8f5a-aa3560b89afa,Namespace:kube-flannel,Attempt:0,}"
May 8 00:39:57.904949 kubelet[2493]: E0508 00:39:57.904210 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:57.905908 containerd[1430]: time="2025-05-08T00:39:57.905314966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5wbrx,Uid:503b6c02-a929-4e25-8f70-dfb759309dbd,Namespace:kube-system,Attempt:0,}"
May 8 00:39:57.924052 containerd[1430]: time="2025-05-08T00:39:57.923634282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:39:57.924052 containerd[1430]: time="2025-05-08T00:39:57.923709567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:39:57.924052 containerd[1430]: time="2025-05-08T00:39:57.923721841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:39:57.924052 containerd[1430]: time="2025-05-08T00:39:57.923820594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:39:57.945508 systemd[1]: Started cri-containerd-42e6c41311a43382e8fa835aeb86cc2f3060bab8815d5f5667d3957ef0d4a869.scope - libcontainer container 42e6c41311a43382e8fa835aeb86cc2f3060bab8815d5f5667d3957ef0d4a869.
May 8 00:39:57.950042 containerd[1430]: time="2025-05-08T00:39:57.949786886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:39:57.950042 containerd[1430]: time="2025-05-08T00:39:57.949865009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:39:57.950042 containerd[1430]: time="2025-05-08T00:39:57.949876844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:39:57.950042 containerd[1430]: time="2025-05-08T00:39:57.949965442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:39:57.970657 systemd[1]: Started cri-containerd-87d4b56044c82e8c84a07447984780c48d16998f86a6cbb08a3adf7f3f106b99.scope - libcontainer container 87d4b56044c82e8c84a07447984780c48d16998f86a6cbb08a3adf7f3f106b99.
May 8 00:39:57.979949 containerd[1430]: time="2025-05-08T00:39:57.979904611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6cntk,Uid:05e1ecce-dded-4164-8f5a-aa3560b89afa,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"42e6c41311a43382e8fa835aeb86cc2f3060bab8815d5f5667d3957ef0d4a869\""
May 8 00:39:57.981186 kubelet[2493]: E0508 00:39:57.980857 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:57.989298 containerd[1430]: time="2025-05-08T00:39:57.989268132Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
May 8 00:39:57.998607 containerd[1430]: time="2025-05-08T00:39:57.998575561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5wbrx,Uid:503b6c02-a929-4e25-8f70-dfb759309dbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"87d4b56044c82e8c84a07447984780c48d16998f86a6cbb08a3adf7f3f106b99\""
May 8 00:39:57.999224 kubelet[2493]: E0508 00:39:57.999201 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:58.000973 containerd[1430]: time="2025-05-08T00:39:58.000934403Z" level=info msg="CreateContainer within sandbox \"87d4b56044c82e8c84a07447984780c48d16998f86a6cbb08a3adf7f3f106b99\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:39:58.022399 containerd[1430]: time="2025-05-08T00:39:58.022345957Z" level=info msg="CreateContainer within sandbox \"87d4b56044c82e8c84a07447984780c48d16998f86a6cbb08a3adf7f3f106b99\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07dbdd7d95281cbfd79a2e828bade9d6b9d9dd30a10a72f7fa35c66973dc80eb\""
May 8 00:39:58.023048 containerd[1430]: time="2025-05-08T00:39:58.023001865Z" level=info msg="StartContainer for \"07dbdd7d95281cbfd79a2e828bade9d6b9d9dd30a10a72f7fa35c66973dc80eb\""
May 8 00:39:58.053500 systemd[1]: Started cri-containerd-07dbdd7d95281cbfd79a2e828bade9d6b9d9dd30a10a72f7fa35c66973dc80eb.scope - libcontainer container 07dbdd7d95281cbfd79a2e828bade9d6b9d9dd30a10a72f7fa35c66973dc80eb.
May 8 00:39:58.076952 containerd[1430]: time="2025-05-08T00:39:58.076905673Z" level=info msg="StartContainer for \"07dbdd7d95281cbfd79a2e828bade9d6b9d9dd30a10a72f7fa35c66973dc80eb\" returns successfully"
May 8 00:39:58.534376 kubelet[2493]: E0508 00:39:58.533573 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:59.101579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1718215647.mount: Deactivated successfully.
May 8 00:39:59.129543 containerd[1430]: time="2025-05-08T00:39:59.128871145Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:59.129855 containerd[1430]: time="2025-05-08T00:39:59.129633107Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531"
May 8 00:39:59.130259 containerd[1430]: time="2025-05-08T00:39:59.130229579Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:59.133007 containerd[1430]: time="2025-05-08T00:39:59.132963840Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:39:59.133812 containerd[1430]: time="2025-05-08T00:39:59.133770864Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.144460991s"
May 8 00:39:59.133867 containerd[1430]: time="2025-05-08T00:39:59.133811247Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
May 8 00:39:59.135922 containerd[1430]: time="2025-05-08T00:39:59.135781386Z" level=info msg="CreateContainer within sandbox \"42e6c41311a43382e8fa835aeb86cc2f3060bab8815d5f5667d3957ef0d4a869\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
May 8 00:39:59.144836 containerd[1430]: time="2025-05-08T00:39:59.144801269Z" level=info msg="CreateContainer within sandbox \"42e6c41311a43382e8fa835aeb86cc2f3060bab8815d5f5667d3957ef0d4a869\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"e2fe7865756d4685961c201690d35d6af3311d48e9d5f0a66cb0ba5254f78caf\""
May 8 00:39:59.145394 containerd[1430]: time="2025-05-08T00:39:59.145367673Z" level=info msg="StartContainer for \"e2fe7865756d4685961c201690d35d6af3311d48e9d5f0a66cb0ba5254f78caf\""
May 8 00:39:59.167521 systemd[1]: Started cri-containerd-e2fe7865756d4685961c201690d35d6af3311d48e9d5f0a66cb0ba5254f78caf.scope - libcontainer container e2fe7865756d4685961c201690d35d6af3311d48e9d5f0a66cb0ba5254f78caf.
May 8 00:39:59.196677 containerd[1430]: time="2025-05-08T00:39:59.196621762Z" level=info msg="StartContainer for \"e2fe7865756d4685961c201690d35d6af3311d48e9d5f0a66cb0ba5254f78caf\" returns successfully"
May 8 00:39:59.197803 systemd[1]: cri-containerd-e2fe7865756d4685961c201690d35d6af3311d48e9d5f0a66cb0ba5254f78caf.scope: Deactivated successfully.
May 8 00:39:59.245188 containerd[1430]: time="2025-05-08T00:39:59.245066861Z" level=info msg="shim disconnected" id=e2fe7865756d4685961c201690d35d6af3311d48e9d5f0a66cb0ba5254f78caf namespace=k8s.io
May 8 00:39:59.245188 containerd[1430]: time="2025-05-08T00:39:59.245127196Z" level=warning msg="cleaning up after shim disconnected" id=e2fe7865756d4685961c201690d35d6af3311d48e9d5f0a66cb0ba5254f78caf namespace=k8s.io
May 8 00:39:59.245188 containerd[1430]: time="2025-05-08T00:39:59.245135712Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:39:59.539150 kubelet[2493]: E0508 00:39:59.538638 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:59.540360 containerd[1430]: time="2025-05-08T00:39:59.540136024Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
May 8 00:39:59.548634 kubelet[2493]: I0508 00:39:59.548567 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5wbrx" podStartSLOduration=2.548551238 podStartE2EDuration="2.548551238s" podCreationTimestamp="2025-05-08 00:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:58.544494058 +0000 UTC m=+16.152247568" watchObservedRunningTime="2025-05-08 00:39:59.548551238 +0000 UTC m=+17.156304748"
May 8 00:40:00.749388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932227614.mount: Deactivated successfully.
May 8 00:40:01.227631 containerd[1430]: time="2025-05-08T00:40:01.227567032Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:01.228618 containerd[1430]: time="2025-05-08T00:40:01.228590777Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
May 8 00:40:01.230003 containerd[1430]: time="2025-05-08T00:40:01.229965953Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:01.234349 containerd[1430]: time="2025-05-08T00:40:01.234266139Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:01.235529 containerd[1430]: time="2025-05-08T00:40:01.235500527Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.695324719s"
May 8 00:40:01.235606 containerd[1430]: time="2025-05-08T00:40:01.235532275Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
May 8 00:40:01.252357 containerd[1430]: time="2025-05-08T00:40:01.252295818Z" level=info msg="CreateContainer within sandbox \"42e6c41311a43382e8fa835aeb86cc2f3060bab8815d5f5667d3957ef0d4a869\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 8 00:40:01.263608 containerd[1430]: time="2025-05-08T00:40:01.263490999Z" level=info msg="CreateContainer within sandbox \"42e6c41311a43382e8fa835aeb86cc2f3060bab8815d5f5667d3957ef0d4a869\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e2db47b39ed02a75326b2efb9197ad05f12bf1e6dcece869130be0932d8ab2ed\""
May 8 00:40:01.265957 containerd[1430]: time="2025-05-08T00:40:01.265893600Z" level=info msg="StartContainer for \"e2db47b39ed02a75326b2efb9197ad05f12bf1e6dcece869130be0932d8ab2ed\""
May 8 00:40:01.298527 systemd[1]: Started cri-containerd-e2db47b39ed02a75326b2efb9197ad05f12bf1e6dcece869130be0932d8ab2ed.scope - libcontainer container e2db47b39ed02a75326b2efb9197ad05f12bf1e6dcece869130be0932d8ab2ed.
May 8 00:40:01.337259 systemd[1]: cri-containerd-e2db47b39ed02a75326b2efb9197ad05f12bf1e6dcece869130be0932d8ab2ed.scope: Deactivated successfully.
May 8 00:40:01.398450 containerd[1430]: time="2025-05-08T00:40:01.398378056Z" level=info msg="StartContainer for \"e2db47b39ed02a75326b2efb9197ad05f12bf1e6dcece869130be0932d8ab2ed\" returns successfully"
May 8 00:40:01.414623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2db47b39ed02a75326b2efb9197ad05f12bf1e6dcece869130be0932d8ab2ed-rootfs.mount: Deactivated successfully.
May 8 00:40:01.420073 kubelet[2493]: I0508 00:40:01.420032 2493 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:40:01.422315 containerd[1430]: time="2025-05-08T00:40:01.422114366Z" level=info msg="shim disconnected" id=e2db47b39ed02a75326b2efb9197ad05f12bf1e6dcece869130be0932d8ab2ed namespace=k8s.io May 8 00:40:01.422315 containerd[1430]: time="2025-05-08T00:40:01.422165907Z" level=warning msg="cleaning up after shim disconnected" id=e2db47b39ed02a75326b2efb9197ad05f12bf1e6dcece869130be0932d8ab2ed namespace=k8s.io May 8 00:40:01.422315 containerd[1430]: time="2025-05-08T00:40:01.422175224Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:01.444629 kubelet[2493]: I0508 00:40:01.444586 2493 topology_manager.go:215] "Topology Admit Handler" podUID="ca332cdd-caf1-40d8-b908-d8dba110b3bd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wjw6q" May 8 00:40:01.444751 kubelet[2493]: I0508 00:40:01.444730 2493 topology_manager.go:215] "Topology Admit Handler" podUID="9078ed78-e20c-4901-a92a-b0964ddf9115" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fgdpt" May 8 00:40:01.451480 systemd[1]: Created slice kubepods-burstable-pod9078ed78_e20c_4901_a92a_b0964ddf9115.slice - libcontainer container kubepods-burstable-pod9078ed78_e20c_4901_a92a_b0964ddf9115.slice. May 8 00:40:01.456489 systemd[1]: Created slice kubepods-burstable-podca332cdd_caf1_40d8_b908_d8dba110b3bd.slice - libcontainer container kubepods-burstable-podca332cdd_caf1_40d8_b908_d8dba110b3bd.slice. 
May 8 00:40:01.546734 kubelet[2493]: E0508 00:40:01.546621 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.548708 containerd[1430]: time="2025-05-08T00:40:01.548586264Z" level=info msg="CreateContainer within sandbox \"42e6c41311a43382e8fa835aeb86cc2f3060bab8815d5f5667d3957ef0d4a869\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 8 00:40:01.559915 containerd[1430]: time="2025-05-08T00:40:01.559873971Z" level=info msg="CreateContainer within sandbox \"42e6c41311a43382e8fa835aeb86cc2f3060bab8815d5f5667d3957ef0d4a869\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"fbe0f9de6fea7ebbad5e59b2e7b2cdc55f70c2e176f8d96abc1521563d17a2f1\"" May 8 00:40:01.561795 containerd[1430]: time="2025-05-08T00:40:01.561713418Z" level=info msg="StartContainer for \"fbe0f9de6fea7ebbad5e59b2e7b2cdc55f70c2e176f8d96abc1521563d17a2f1\"" May 8 00:40:01.589546 systemd[1]: Started cri-containerd-fbe0f9de6fea7ebbad5e59b2e7b2cdc55f70c2e176f8d96abc1521563d17a2f1.scope - libcontainer container fbe0f9de6fea7ebbad5e59b2e7b2cdc55f70c2e176f8d96abc1521563d17a2f1. 
May 8 00:40:01.595338 kubelet[2493]: I0508 00:40:01.595290 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wx9j\" (UniqueName: \"kubernetes.io/projected/9078ed78-e20c-4901-a92a-b0964ddf9115-kube-api-access-2wx9j\") pod \"coredns-7db6d8ff4d-fgdpt\" (UID: \"9078ed78-e20c-4901-a92a-b0964ddf9115\") " pod="kube-system/coredns-7db6d8ff4d-fgdpt" May 8 00:40:01.595447 kubelet[2493]: I0508 00:40:01.595359 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca332cdd-caf1-40d8-b908-d8dba110b3bd-config-volume\") pod \"coredns-7db6d8ff4d-wjw6q\" (UID: \"ca332cdd-caf1-40d8-b908-d8dba110b3bd\") " pod="kube-system/coredns-7db6d8ff4d-wjw6q" May 8 00:40:01.595447 kubelet[2493]: I0508 00:40:01.595381 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkfsg\" (UniqueName: \"kubernetes.io/projected/ca332cdd-caf1-40d8-b908-d8dba110b3bd-kube-api-access-rkfsg\") pod \"coredns-7db6d8ff4d-wjw6q\" (UID: \"ca332cdd-caf1-40d8-b908-d8dba110b3bd\") " pod="kube-system/coredns-7db6d8ff4d-wjw6q" May 8 00:40:01.595447 kubelet[2493]: I0508 00:40:01.595398 2493 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9078ed78-e20c-4901-a92a-b0964ddf9115-config-volume\") pod \"coredns-7db6d8ff4d-fgdpt\" (UID: \"9078ed78-e20c-4901-a92a-b0964ddf9115\") " pod="kube-system/coredns-7db6d8ff4d-fgdpt" May 8 00:40:01.609674 containerd[1430]: time="2025-05-08T00:40:01.609565259Z" level=info msg="StartContainer for \"fbe0f9de6fea7ebbad5e59b2e7b2cdc55f70c2e176f8d96abc1521563d17a2f1\" returns successfully" May 8 00:40:01.756108 kubelet[2493]: E0508 00:40:01.755987 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.756798 containerd[1430]: time="2025-05-08T00:40:01.756745015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fgdpt,Uid:9078ed78-e20c-4901-a92a-b0964ddf9115,Namespace:kube-system,Attempt:0,}" May 8 00:40:01.759879 kubelet[2493]: E0508 00:40:01.759852 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.760245 containerd[1430]: time="2025-05-08T00:40:01.760197911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wjw6q,Uid:ca332cdd-caf1-40d8-b908-d8dba110b3bd,Namespace:kube-system,Attempt:0,}" May 8 00:40:01.825062 containerd[1430]: time="2025-05-08T00:40:01.824904341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fgdpt,Uid:9078ed78-e20c-4901-a92a-b0964ddf9115,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f59918b7401daca228b7139db63bbb7839cc142314235097a7ed22adfc4baff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 8 00:40:01.825262 kubelet[2493]: E0508 00:40:01.825128 2493 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f59918b7401daca228b7139db63bbb7839cc142314235097a7ed22adfc4baff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 8 00:40:01.825262 kubelet[2493]: E0508 00:40:01.825198 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f59918b7401daca228b7139db63bbb7839cc142314235097a7ed22adfc4baff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-fgdpt" May 8 00:40:01.825262 kubelet[2493]: E0508 00:40:01.825222 2493 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f59918b7401daca228b7139db63bbb7839cc142314235097a7ed22adfc4baff\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-fgdpt" May 8 00:40:01.825417 kubelet[2493]: E0508 00:40:01.825267 2493 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fgdpt_kube-system(9078ed78-e20c-4901-a92a-b0964ddf9115)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fgdpt_kube-system(9078ed78-e20c-4901-a92a-b0964ddf9115)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f59918b7401daca228b7139db63bbb7839cc142314235097a7ed22adfc4baff\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-fgdpt" podUID="9078ed78-e20c-4901-a92a-b0964ddf9115" May 8 00:40:01.837213 containerd[1430]: time="2025-05-08T00:40:01.837160854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wjw6q,Uid:ca332cdd-caf1-40d8-b908-d8dba110b3bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fd176935f2ec485b9034be722896ba1af75962cce552ee6d61c3dfb219cb564\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 8 00:40:01.837683 kubelet[2493]: E0508 00:40:01.837377 2493 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8fd176935f2ec485b9034be722896ba1af75962cce552ee6d61c3dfb219cb564\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 8 00:40:01.837683 kubelet[2493]: E0508 00:40:01.837418 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd176935f2ec485b9034be722896ba1af75962cce552ee6d61c3dfb219cb564\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wjw6q" May 8 00:40:01.837683 kubelet[2493]: E0508 00:40:01.837435 2493 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd176935f2ec485b9034be722896ba1af75962cce552ee6d61c3dfb219cb564\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wjw6q" May 8 00:40:01.837683 kubelet[2493]: E0508 00:40:01.837473 2493 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wjw6q_kube-system(ca332cdd-caf1-40d8-b908-d8dba110b3bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wjw6q_kube-system(ca332cdd-caf1-40d8-b908-d8dba110b3bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fd176935f2ec485b9034be722896ba1af75962cce552ee6d61c3dfb219cb564\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-wjw6q" podUID="ca332cdd-caf1-40d8-b908-d8dba110b3bd" May 8 00:40:02.550981 kubelet[2493]: E0508 00:40:02.550884 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:02.562595 kubelet[2493]: I0508 00:40:02.562336 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-6cntk" podStartSLOduration=2.304409604 podStartE2EDuration="5.562303913s" podCreationTimestamp="2025-05-08 00:39:57 +0000 UTC" firstStartedPulling="2025-05-08 00:39:57.982074742 +0000 UTC m=+15.589828212" lastFinishedPulling="2025-05-08 00:40:01.239969011 +0000 UTC m=+18.847722521" observedRunningTime="2025-05-08 00:40:02.561766977 +0000 UTC m=+20.169520487" watchObservedRunningTime="2025-05-08 00:40:02.562303913 +0000 UTC m=+20.170057383" May 8 00:40:02.706637 systemd-networkd[1372]: flannel.1: Link UP May 8 00:40:02.706642 systemd-networkd[1372]: flannel.1: Gained carrier May 8 00:40:03.552079 kubelet[2493]: E0508 00:40:03.552049 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:03.791460 systemd-networkd[1372]: flannel.1: Gained IPv6LL May 8 00:40:08.853391 systemd[1]: Started sshd@5-10.0.0.129:22-10.0.0.1:39884.service - OpenSSH per-connection server daemon (10.0.0.1:39884). May 8 00:40:08.891852 sshd[3175]: Accepted publickey for core from 10.0.0.1 port 39884 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:40:08.893465 sshd[3175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:08.898882 systemd-logind[1417]: New session 6 of user core. May 8 00:40:08.907541 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:40:09.033021 sshd[3175]: pam_unix(sshd:session): session closed for user core May 8 00:40:09.036495 systemd[1]: sshd@5-10.0.0.129:22-10.0.0.1:39884.service: Deactivated successfully. May 8 00:40:09.038076 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:40:09.038846 systemd-logind[1417]: Session 6 logged out. 
Waiting for processes to exit. May 8 00:40:09.039809 systemd-logind[1417]: Removed session 6. May 8 00:40:13.494116 kubelet[2493]: E0508 00:40:13.494081 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:13.494542 kubelet[2493]: E0508 00:40:13.494466 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:13.494579 containerd[1430]: time="2025-05-08T00:40:13.494462979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fgdpt,Uid:9078ed78-e20c-4901-a92a-b0964ddf9115,Namespace:kube-system,Attempt:0,}" May 8 00:40:13.494816 containerd[1430]: time="2025-05-08T00:40:13.494763768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wjw6q,Uid:ca332cdd-caf1-40d8-b908-d8dba110b3bd,Namespace:kube-system,Attempt:0,}" May 8 00:40:13.559545 systemd-networkd[1372]: cni0: Link UP May 8 00:40:13.559549 systemd-networkd[1372]: cni0: Gained carrier May 8 00:40:13.562843 systemd-networkd[1372]: cni0: Lost carrier May 8 00:40:13.574253 systemd-networkd[1372]: veth7a7214f4: Link UP May 8 00:40:13.576201 kernel: cni0: port 1(veth7a7214f4) entered blocking state May 8 00:40:13.576257 kernel: cni0: port 1(veth7a7214f4) entered disabled state May 8 00:40:13.577894 kernel: veth7a7214f4: entered allmulticast mode May 8 00:40:13.577974 kernel: veth7a7214f4: entered promiscuous mode May 8 00:40:13.580222 kernel: cni0: port 1(veth7a7214f4) entered blocking state May 8 00:40:13.580275 kernel: cni0: port 1(veth7a7214f4) entered forwarding state May 8 00:40:13.584147 kernel: cni0: port 1(veth7a7214f4) entered disabled state May 8 00:40:13.588001 kernel: cni0: port 2(veth4021a679) entered blocking state May 8 00:40:13.588078 kernel: cni0: port 2(veth4021a679) entered disabled 
state May 8 00:40:13.588103 kernel: veth4021a679: entered allmulticast mode May 8 00:40:13.588117 kernel: veth4021a679: entered promiscuous mode May 8 00:40:13.588361 kernel: cni0: port 2(veth4021a679) entered blocking state May 8 00:40:13.586451 systemd-networkd[1372]: veth4021a679: Link UP May 8 00:40:13.591212 kernel: cni0: port 2(veth4021a679) entered forwarding state May 8 00:40:13.591612 kernel: cni0: port 2(veth4021a679) entered disabled state May 8 00:40:13.592391 systemd-networkd[1372]: cni0: Gained carrier May 8 00:40:13.597350 kernel: cni0: port 2(veth4021a679) entered blocking state May 8 00:40:13.597429 kernel: cni0: port 2(veth4021a679) entered forwarding state May 8 00:40:13.597457 systemd-networkd[1372]: veth4021a679: Gained carrier May 8 00:40:13.601017 kernel: cni0: port 1(veth7a7214f4) entered blocking state May 8 00:40:13.601418 kernel: cni0: port 1(veth7a7214f4) entered forwarding state May 8 00:40:13.600924 systemd-networkd[1372]: veth7a7214f4: Gained carrier May 8 00:40:13.601488 containerd[1430]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"} May 8 00:40:13.601488 containerd[1430]: delegateAdd: netconf sent to delegate plugin: May 8 00:40:13.604420 containerd[1430]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} May 8 00:40:13.604420 containerd[1430]: map[string]interface {}{"cniVersion":"0.3.1", 
"hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400009e8e8), "name":"cbr0", "type":"bridge"} May 8 00:40:13.604420 containerd[1430]: delegateAdd: netconf sent to delegate plugin: May 8 00:40:13.619580 containerd[1430]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-08T00:40:13.619501362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:13.619580 containerd[1430]: time="2025-05-08T00:40:13.619558912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:13.619580 containerd[1430]: time="2025-05-08T00:40:13.619574710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:13.619736 containerd[1430]: time="2025-05-08T00:40:13.619649657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:13.620736 containerd[1430]: time="2025-05-08T00:40:13.620674124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:13.621262 containerd[1430]: time="2025-05-08T00:40:13.620719117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:13.621262 containerd[1430]: time="2025-05-08T00:40:13.621136446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:13.621262 containerd[1430]: time="2025-05-08T00:40:13.621227871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:13.639494 systemd[1]: Started cri-containerd-7be8a8f6ac3bd7d17ff2d91825e26062164046f5854c465eff8a7f7a07c9cda3.scope - libcontainer container 7be8a8f6ac3bd7d17ff2d91825e26062164046f5854c465eff8a7f7a07c9cda3. May 8 00:40:13.644547 systemd[1]: Started cri-containerd-d43663dfa9e6cdce8da1dee68c70c561c4d194330df4dd15b0b9bb4b1c45701b.scope - libcontainer container d43663dfa9e6cdce8da1dee68c70c561c4d194330df4dd15b0b9bb4b1c45701b. May 8 00:40:13.658563 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:13.662313 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:13.679771 containerd[1430]: time="2025-05-08T00:40:13.679732320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wjw6q,Uid:ca332cdd-caf1-40d8-b908-d8dba110b3bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7be8a8f6ac3bd7d17ff2d91825e26062164046f5854c465eff8a7f7a07c9cda3\"" May 8 00:40:13.680606 containerd[1430]: time="2025-05-08T00:40:13.680578857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fgdpt,Uid:9078ed78-e20c-4901-a92a-b0964ddf9115,Namespace:kube-system,Attempt:0,} returns sandbox id \"d43663dfa9e6cdce8da1dee68c70c561c4d194330df4dd15b0b9bb4b1c45701b\"" May 8 00:40:13.680874 kubelet[2493]: E0508 00:40:13.680851 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:13.681369 kubelet[2493]: E0508 00:40:13.681348 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:13.684486 containerd[1430]: time="2025-05-08T00:40:13.684453363Z" level=info msg="CreateContainer within sandbox \"7be8a8f6ac3bd7d17ff2d91825e26062164046f5854c465eff8a7f7a07c9cda3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:13.684798 containerd[1430]: time="2025-05-08T00:40:13.684609857Z" level=info msg="CreateContainer within sandbox \"d43663dfa9e6cdce8da1dee68c70c561c4d194330df4dd15b0b9bb4b1c45701b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:13.706768 containerd[1430]: time="2025-05-08T00:40:13.706730484Z" level=info msg="CreateContainer within sandbox \"d43663dfa9e6cdce8da1dee68c70c561c4d194330df4dd15b0b9bb4b1c45701b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7857ce3b1c246442fef9ae68f4fa5f745430d76c79c58ad3f106e3dc2c567a6\"" May 8 00:40:13.708045 containerd[1430]: time="2025-05-08T00:40:13.707947039Z" level=info msg="StartContainer for \"c7857ce3b1c246442fef9ae68f4fa5f745430d76c79c58ad3f106e3dc2c567a6\"" May 8 00:40:13.722715 containerd[1430]: time="2025-05-08T00:40:13.722679033Z" level=info msg="CreateContainer within sandbox \"7be8a8f6ac3bd7d17ff2d91825e26062164046f5854c465eff8a7f7a07c9cda3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed9a4a9927d4e6ea9f7f63228b4fc3e4488a3ff98d8b95d67865a627e615df3d\"" May 8 00:40:13.724495 containerd[1430]: time="2025-05-08T00:40:13.723248417Z" level=info msg="StartContainer for \"ed9a4a9927d4e6ea9f7f63228b4fc3e4488a3ff98d8b95d67865a627e615df3d\"" May 8 00:40:13.731636 systemd[1]: Started cri-containerd-c7857ce3b1c246442fef9ae68f4fa5f745430d76c79c58ad3f106e3dc2c567a6.scope - 
libcontainer container c7857ce3b1c246442fef9ae68f4fa5f745430d76c79c58ad3f106e3dc2c567a6. May 8 00:40:13.749479 systemd[1]: Started cri-containerd-ed9a4a9927d4e6ea9f7f63228b4fc3e4488a3ff98d8b95d67865a627e615df3d.scope - libcontainer container ed9a4a9927d4e6ea9f7f63228b4fc3e4488a3ff98d8b95d67865a627e615df3d. May 8 00:40:13.759511 containerd[1430]: time="2025-05-08T00:40:13.759309013Z" level=info msg="StartContainer for \"c7857ce3b1c246442fef9ae68f4fa5f745430d76c79c58ad3f106e3dc2c567a6\" returns successfully" May 8 00:40:13.777547 containerd[1430]: time="2025-05-08T00:40:13.777511702Z" level=info msg="StartContainer for \"ed9a4a9927d4e6ea9f7f63228b4fc3e4488a3ff98d8b95d67865a627e615df3d\" returns successfully" May 8 00:40:14.043025 systemd[1]: Started sshd@6-10.0.0.129:22-10.0.0.1:59444.service - OpenSSH per-connection server daemon (10.0.0.1:59444). May 8 00:40:14.082947 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 59444 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:40:14.084256 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:14.087793 systemd-logind[1417]: New session 7 of user core. May 8 00:40:14.098484 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:40:14.208285 sshd[3446]: pam_unix(sshd:session): session closed for user core May 8 00:40:14.211349 systemd[1]: sshd@6-10.0.0.129:22-10.0.0.1:59444.service: Deactivated successfully. May 8 00:40:14.212996 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:40:14.213621 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. May 8 00:40:14.214456 systemd-logind[1417]: Removed session 7. 
May 8 00:40:14.575029 kubelet[2493]: E0508 00:40:14.574943 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:14.577678 kubelet[2493]: E0508 00:40:14.577648 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:14.601354 kubelet[2493]: I0508 00:40:14.600715 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fgdpt" podStartSLOduration=17.600699335 podStartE2EDuration="17.600699335s" podCreationTimestamp="2025-05-08 00:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:14.587080565 +0000 UTC m=+32.194834075" watchObservedRunningTime="2025-05-08 00:40:14.600699335 +0000 UTC m=+32.208452845" May 8 00:40:14.612985 kubelet[2493]: I0508 00:40:14.612897 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wjw6q" podStartSLOduration=17.61276072 podStartE2EDuration="17.61276072s" podCreationTimestamp="2025-05-08 00:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:14.612642019 +0000 UTC m=+32.220395529" watchObservedRunningTime="2025-05-08 00:40:14.61276072 +0000 UTC m=+32.220514230" May 8 00:40:14.863499 systemd-networkd[1372]: veth4021a679: Gained IPv6LL May 8 00:40:15.439532 systemd-networkd[1372]: cni0: Gained IPv6LL May 8 00:40:15.567478 systemd-networkd[1372]: veth7a7214f4: Gained IPv6LL May 8 00:40:15.581976 kubelet[2493]: E0508 00:40:15.581800 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:15.581976 kubelet[2493]: E0508 00:40:15.581898 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:16.582815 kubelet[2493]: E0508 00:40:16.582784 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:16.585673 kubelet[2493]: E0508 00:40:16.585643 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:19.219778 systemd[1]: Started sshd@7-10.0.0.129:22-10.0.0.1:59450.service - OpenSSH per-connection server daemon (10.0.0.1:59450). May 8 00:40:19.252124 sshd[3491]: Accepted publickey for core from 10.0.0.1 port 59450 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:40:19.253421 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:19.257043 systemd-logind[1417]: New session 8 of user core. May 8 00:40:19.268476 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:40:19.375364 sshd[3491]: pam_unix(sshd:session): session closed for user core May 8 00:40:19.392988 systemd[1]: sshd@7-10.0.0.129:22-10.0.0.1:59450.service: Deactivated successfully. May 8 00:40:19.395253 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:40:19.396832 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit. May 8 00:40:19.405817 systemd[1]: Started sshd@8-10.0.0.129:22-10.0.0.1:59460.service - OpenSSH per-connection server daemon (10.0.0.1:59460). May 8 00:40:19.406679 systemd-logind[1417]: Removed session 8. 
May 8 00:40:19.434266 sshd[3506]: Accepted publickey for core from 10.0.0.1 port 59460 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:40:19.435598 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:19.439693 systemd-logind[1417]: New session 9 of user core.
May 8 00:40:19.443458 systemd[1]: Started session-9.scope - Session 9 of User core.
May 8 00:40:19.585982 sshd[3506]: pam_unix(sshd:session): session closed for user core
May 8 00:40:19.596045 systemd[1]: sshd@8-10.0.0.129:22-10.0.0.1:59460.service: Deactivated successfully.
May 8 00:40:19.599939 systemd[1]: session-9.scope: Deactivated successfully.
May 8 00:40:19.602057 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit.
May 8 00:40:19.610681 systemd[1]: Started sshd@9-10.0.0.129:22-10.0.0.1:59474.service - OpenSSH per-connection server daemon (10.0.0.1:59474).
May 8 00:40:19.611896 systemd-logind[1417]: Removed session 9.
May 8 00:40:19.639604 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 59474 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:40:19.640907 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:19.645142 systemd-logind[1417]: New session 10 of user core.
May 8 00:40:19.656473 systemd[1]: Started session-10.scope - Session 10 of User core.
May 8 00:40:19.764852 sshd[3518]: pam_unix(sshd:session): session closed for user core
May 8 00:40:19.770096 systemd[1]: sshd@9-10.0.0.129:22-10.0.0.1:59474.service: Deactivated successfully.
May 8 00:40:19.772509 systemd[1]: session-10.scope: Deactivated successfully.
May 8 00:40:19.773206 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit.
May 8 00:40:19.774065 systemd-logind[1417]: Removed session 10.
May 8 00:40:24.775123 systemd[1]: Started sshd@10-10.0.0.129:22-10.0.0.1:56940.service - OpenSSH per-connection server daemon (10.0.0.1:56940).
May 8 00:40:24.809883 sshd[3554]: Accepted publickey for core from 10.0.0.1 port 56940 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:40:24.811239 sshd[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:24.814827 systemd-logind[1417]: New session 11 of user core.
May 8 00:40:24.825550 systemd[1]: Started session-11.scope - Session 11 of User core.
May 8 00:40:24.949909 sshd[3554]: pam_unix(sshd:session): session closed for user core
May 8 00:40:24.961093 systemd[1]: sshd@10-10.0.0.129:22-10.0.0.1:56940.service: Deactivated successfully.
May 8 00:40:24.963400 systemd[1]: session-11.scope: Deactivated successfully.
May 8 00:40:24.964835 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit.
May 8 00:40:24.979824 systemd[1]: Started sshd@11-10.0.0.129:22-10.0.0.1:56944.service - OpenSSH per-connection server daemon (10.0.0.1:56944).
May 8 00:40:24.981663 systemd-logind[1417]: Removed session 11.
May 8 00:40:25.018881 sshd[3569]: Accepted publickey for core from 10.0.0.1 port 56944 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:40:25.021090 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:25.027167 systemd-logind[1417]: New session 12 of user core.
May 8 00:40:25.041544 systemd[1]: Started session-12.scope - Session 12 of User core.
May 8 00:40:25.233805 sshd[3569]: pam_unix(sshd:session): session closed for user core
May 8 00:40:25.242990 systemd[1]: sshd@11-10.0.0.129:22-10.0.0.1:56944.service: Deactivated successfully.
May 8 00:40:25.244769 systemd[1]: session-12.scope: Deactivated successfully.
May 8 00:40:25.247346 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit.
May 8 00:40:25.256707 systemd[1]: Started sshd@12-10.0.0.129:22-10.0.0.1:56952.service - OpenSSH per-connection server daemon (10.0.0.1:56952).
May 8 00:40:25.257692 systemd-logind[1417]: Removed session 12.
May 8 00:40:25.286945 sshd[3581]: Accepted publickey for core from 10.0.0.1 port 56952 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:40:25.287955 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:25.292032 systemd-logind[1417]: New session 13 of user core.
May 8 00:40:25.299572 systemd[1]: Started session-13.scope - Session 13 of User core.
May 8 00:40:26.532914 sshd[3581]: pam_unix(sshd:session): session closed for user core
May 8 00:40:26.540210 systemd[1]: sshd@12-10.0.0.129:22-10.0.0.1:56952.service: Deactivated successfully.
May 8 00:40:26.542164 systemd[1]: session-13.scope: Deactivated successfully.
May 8 00:40:26.544290 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit.
May 8 00:40:26.555071 systemd[1]: Started sshd@13-10.0.0.129:22-10.0.0.1:56964.service - OpenSSH per-connection server daemon (10.0.0.1:56964).
May 8 00:40:26.556963 systemd-logind[1417]: Removed session 13.
May 8 00:40:26.591338 sshd[3604]: Accepted publickey for core from 10.0.0.1 port 56964 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:40:26.592777 sshd[3604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:26.596872 systemd-logind[1417]: New session 14 of user core.
May 8 00:40:26.606605 systemd[1]: Started session-14.scope - Session 14 of User core.
May 8 00:40:26.824404 sshd[3604]: pam_unix(sshd:session): session closed for user core
May 8 00:40:26.837048 systemd[1]: sshd@13-10.0.0.129:22-10.0.0.1:56964.service: Deactivated successfully.
May 8 00:40:26.839524 systemd[1]: session-14.scope: Deactivated successfully.
May 8 00:40:26.842287 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit.
May 8 00:40:26.847681 systemd[1]: Started sshd@14-10.0.0.129:22-10.0.0.1:56976.service - OpenSSH per-connection server daemon (10.0.0.1:56976).
May 8 00:40:26.849155 systemd-logind[1417]: Removed session 14.
May 8 00:40:26.877893 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 56976 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:40:26.879436 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:26.884120 systemd-logind[1417]: New session 15 of user core.
May 8 00:40:26.890527 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 00:40:26.998257 sshd[3617]: pam_unix(sshd:session): session closed for user core
May 8 00:40:27.001570 systemd[1]: sshd@14-10.0.0.129:22-10.0.0.1:56976.service: Deactivated successfully.
May 8 00:40:27.003244 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:40:27.004791 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit.
May 8 00:40:27.005869 systemd-logind[1417]: Removed session 15.
May 8 00:40:32.009047 systemd[1]: Started sshd@15-10.0.0.129:22-10.0.0.1:56980.service - OpenSSH per-connection server daemon (10.0.0.1:56980).
May 8 00:40:32.043212 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 56980 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:40:32.044653 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:32.050063 systemd-logind[1417]: New session 16 of user core.
May 8 00:40:32.061525 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 00:40:32.163926 sshd[3658]: pam_unix(sshd:session): session closed for user core
May 8 00:40:32.167266 systemd[1]: sshd@15-10.0.0.129:22-10.0.0.1:56980.service: Deactivated successfully.
May 8 00:40:32.169082 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:40:32.169699 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit.
May 8 00:40:32.170670 systemd-logind[1417]: Removed session 16.
May 8 00:40:37.177729 systemd[1]: Started sshd@16-10.0.0.129:22-10.0.0.1:48862.service - OpenSSH per-connection server daemon (10.0.0.1:48862).
May 8 00:40:37.210134 sshd[3693]: Accepted publickey for core from 10.0.0.1 port 48862 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:40:37.211347 sshd[3693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:37.214738 systemd-logind[1417]: New session 17 of user core.
May 8 00:40:37.227451 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 00:40:37.330055 sshd[3693]: pam_unix(sshd:session): session closed for user core
May 8 00:40:37.333254 systemd[1]: sshd@16-10.0.0.129:22-10.0.0.1:48862.service: Deactivated successfully.
May 8 00:40:37.334836 systemd[1]: session-17.scope: Deactivated successfully.
May 8 00:40:37.336243 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit.
May 8 00:40:37.337010 systemd-logind[1417]: Removed session 17.
May 8 00:40:42.340800 systemd[1]: Started sshd@17-10.0.0.129:22-10.0.0.1:48878.service - OpenSSH per-connection server daemon (10.0.0.1:48878).
May 8 00:40:42.372489 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 48878 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:40:42.373581 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:40:42.377005 systemd-logind[1417]: New session 18 of user core.
May 8 00:40:42.384451 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 00:40:42.489017 sshd[3728]: pam_unix(sshd:session): session closed for user core
May 8 00:40:42.492531 systemd[1]: sshd@17-10.0.0.129:22-10.0.0.1:48878.service: Deactivated successfully.
May 8 00:40:42.494157 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:40:42.495853 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit.
May 8 00:40:42.497385 systemd-logind[1417]: Removed session 18.