Jul 15 23:11:10.796854 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 15 23:11:10.796875 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025
Jul 15 23:11:10.796885 kernel: KASLR enabled
Jul 15 23:11:10.796891 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:11:10.796896 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
Jul 15 23:11:10.796901 kernel: random: crng init done
Jul 15 23:11:10.796908 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 15 23:11:10.796914 kernel: secureboot: Secure boot enabled
Jul 15 23:11:10.796920 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:11:10.796927 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Jul 15 23:11:10.796933 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 15 23:11:10.796939 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:11:10.796945 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:11:10.796951 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:11:10.796958 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:11:10.796965 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:11:10.796971 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:11:10.796978 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:11:10.796984 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:11:10.796990 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:11:10.796995 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 15 23:11:10.797001 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 23:11:10.797007 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:11:10.797013 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Jul 15 23:11:10.797019 kernel: Zone ranges:
Jul 15 23:11:10.797041 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:11:10.797047 kernel: DMA32 empty
Jul 15 23:11:10.797053 kernel: Normal empty
Jul 15 23:11:10.797059 kernel: Device empty
Jul 15 23:11:10.797065 kernel: Movable zone start for each node
Jul 15 23:11:10.797070 kernel: Early memory node ranges
Jul 15 23:11:10.797076 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Jul 15 23:11:10.797082 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Jul 15 23:11:10.797088 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Jul 15 23:11:10.797094 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Jul 15 23:11:10.797100 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Jul 15 23:11:10.797106 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Jul 15 23:11:10.797114 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Jul 15 23:11:10.797120 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Jul 15 23:11:10.797126 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 15 23:11:10.797135 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:11:10.797142 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 15 23:11:10.797148 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Jul 15 23:11:10.797155 kernel: psci: probing for conduit method from ACPI.
Jul 15 23:11:10.797162 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 23:11:10.797169 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 23:11:10.797175 kernel: psci: Trusted OS migration not required
Jul 15 23:11:10.797182 kernel: psci: SMC Calling Convention v1.1
Jul 15 23:11:10.797188 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 15 23:11:10.797194 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 23:11:10.797201 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 23:11:10.797207 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 15 23:11:10.797214 kernel: Detected PIPT I-cache on CPU0
Jul 15 23:11:10.797222 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 23:11:10.797228 kernel: CPU features: detected: Spectre-v4
Jul 15 23:11:10.797234 kernel: CPU features: detected: Spectre-BHB
Jul 15 23:11:10.797241 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 23:11:10.797247 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 23:11:10.797253 kernel: CPU features: detected: ARM erratum 1418040
Jul 15 23:11:10.797260 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 23:11:10.797266 kernel: alternatives: applying boot alternatives
Jul 15 23:11:10.797273 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:11:10.797280 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:11:10.797286 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:11:10.797294 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:11:10.797301 kernel: Fallback order for Node 0: 0
Jul 15 23:11:10.797307 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 15 23:11:10.797313 kernel: Policy zone: DMA
Jul 15 23:11:10.797320 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:11:10.797326 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 15 23:11:10.797332 kernel: software IO TLB: area num 4.
Jul 15 23:11:10.797338 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 15 23:11:10.797345 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Jul 15 23:11:10.797352 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 23:11:10.797358 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:11:10.797366 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:11:10.797374 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 23:11:10.797381 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:11:10.797387 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:11:10.797394 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:11:10.797400 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 15 23:11:10.797407 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 23:11:10.797414 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 23:11:10.797420 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 15 23:11:10.797427 kernel: GICv3: 256 SPIs implemented Jul 15 23:11:10.797434 kernel: GICv3: 0 Extended SPIs implemented Jul 15 23:11:10.797440 kernel: Root IRQ handler: gic_handle_irq Jul 15 23:11:10.797449 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 15 23:11:10.797456 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 15 23:11:10.797462 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 15 23:11:10.797469 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 15 23:11:10.797475 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Jul 15 23:11:10.797482 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Jul 15 23:11:10.797488 kernel: GICv3: using LPI property table @0x0000000040130000 Jul 15 23:11:10.797495 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Jul 15 23:11:10.797501 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 23:11:10.797508 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 23:11:10.797514 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 15 23:11:10.797521 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 15 23:11:10.797529 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 15 23:11:10.797536 kernel: arm-pv: using stolen time PV Jul 15 23:11:10.797544 kernel: Console: colour dummy device 80x25 Jul 15 23:11:10.797551 kernel: ACPI: Core revision 20240827 Jul 15 23:11:10.797558 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 15 23:11:10.797565 kernel: pid_max: default: 32768 minimum: 301 Jul 15 23:11:10.797571 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 23:11:10.797578 kernel: landlock: Up and running. Jul 15 23:11:10.797585 kernel: SELinux: Initializing. Jul 15 23:11:10.797593 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 23:11:10.797600 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 23:11:10.797606 kernel: rcu: Hierarchical SRCU implementation. Jul 15 23:11:10.797613 kernel: rcu: Max phase no-delay instances is 400. Jul 15 23:11:10.797620 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 23:11:10.797626 kernel: Remapping and enabling EFI services. Jul 15 23:11:10.797633 kernel: smp: Bringing up secondary CPUs ... 
Jul 15 23:11:10.797640 kernel: Detected PIPT I-cache on CPU1 Jul 15 23:11:10.797647 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 15 23:11:10.797655 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Jul 15 23:11:10.797666 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 23:11:10.797673 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 15 23:11:10.797681 kernel: Detected PIPT I-cache on CPU2 Jul 15 23:11:10.797688 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 15 23:11:10.797695 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Jul 15 23:11:10.797702 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 23:11:10.797709 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 15 23:11:10.797716 kernel: Detected PIPT I-cache on CPU3 Jul 15 23:11:10.797724 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 15 23:11:10.797731 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Jul 15 23:11:10.797738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 23:11:10.797745 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 15 23:11:10.797752 kernel: smp: Brought up 1 node, 4 CPUs Jul 15 23:11:10.797765 kernel: SMP: Total of 4 processors activated. Jul 15 23:11:10.797773 kernel: CPU: All CPU(s) started at EL1 Jul 15 23:11:10.797780 kernel: CPU features: detected: 32-bit EL0 Support Jul 15 23:11:10.797787 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 15 23:11:10.797796 kernel: CPU features: detected: Common not Private translations Jul 15 23:11:10.797803 kernel: CPU features: detected: CRC32 instructions Jul 15 23:11:10.797810 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 15 23:11:10.797817 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 15 23:11:10.797824 kernel: CPU features: detected: LSE atomic instructions Jul 15 23:11:10.797831 kernel: CPU features: detected: Privileged Access Never Jul 15 23:11:10.797839 kernel: CPU features: detected: RAS Extension Support Jul 15 23:11:10.797846 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 15 23:11:10.797853 kernel: alternatives: applying system-wide alternatives Jul 15 23:11:10.797862 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jul 15 23:11:10.797870 kernel: Memory: 2421860K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 128092K reserved, 16384K cma-reserved) Jul 15 23:11:10.797877 kernel: devtmpfs: initialized Jul 15 23:11:10.797884 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 23:11:10.797892 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 15 23:11:10.797899 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 15 23:11:10.797906 kernel: 0 pages in range for non-PLT usage Jul 15 23:11:10.797914 kernel: 508432 pages in range for PLT usage Jul 15 23:11:10.797921 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 23:11:10.797929 kernel: SMBIOS 3.0.0 present. 
Jul 15 23:11:10.797936 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 15 23:11:10.797943 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:11:10.797951 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:11:10.797958 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 23:11:10.797965 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 23:11:10.797972 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 23:11:10.797979 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:11:10.797986 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
Jul 15 23:11:10.797994 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:11:10.798002 kernel: cpuidle: using governor menu
Jul 15 23:11:10.798009 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 23:11:10.798016 kernel: ASID allocator initialised with 32768 entries
Jul 15 23:11:10.798041 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:11:10.798049 kernel: Serial: AMBA PL011 UART driver
Jul 15 23:11:10.798056 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:11:10.798063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:11:10.798070 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 23:11:10.798079 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 23:11:10.798086 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:11:10.798093 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:11:10.798100 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 23:11:10.798107 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 23:11:10.798113 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:11:10.798120 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:11:10.798127 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:11:10.798134 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:11:10.798142 kernel: ACPI: Interpreter enabled
Jul 15 23:11:10.798149 kernel: ACPI: Using GIC for interrupt routing
Jul 15 23:11:10.798156 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 23:11:10.798163 kernel: ACPI: CPU0 has been hot-added
Jul 15 23:11:10.798169 kernel: ACPI: CPU1 has been hot-added
Jul 15 23:11:10.798177 kernel: ACPI: CPU2 has been hot-added
Jul 15 23:11:10.798183 kernel: ACPI: CPU3 has been hot-added
Jul 15 23:11:10.798190 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 23:11:10.798197 kernel: printk: legacy console [ttyAMA0] enabled
Jul 15 23:11:10.798205 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:11:10.798344 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:11:10.798409 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 23:11:10.798466 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 23:11:10.798521 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 15 23:11:10.798577 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 15 23:11:10.798586 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 15 23:11:10.798596 kernel: PCI host bridge to bus 0000:00
Jul 15 23:11:10.798659 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 15 23:11:10.798712 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 23:11:10.798775 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 15 23:11:10.798831 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:11:10.798908 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:11:10.798976 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 23:11:10.799128 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 15 23:11:10.799197 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 15 23:11:10.799257 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 23:11:10.799317 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 15 23:11:10.799377 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 15 23:11:10.799436 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 15 23:11:10.799496 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 15 23:11:10.799549 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 23:11:10.799605 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 15 23:11:10.799615 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 23:11:10.799622 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 23:11:10.799644 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 23:11:10.799651 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 23:11:10.799659 kernel: iommu: Default domain type: Translated
Jul 15 23:11:10.799668 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 23:11:10.799675 kernel: efivars: Registered efivars operations
Jul 15 23:11:10.799683 kernel: vgaarb: loaded
Jul 15 23:11:10.799691 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 23:11:10.799698 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:11:10.799706 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:11:10.799713 kernel: pnp: PnP ACPI init
Jul 15 23:11:10.799791 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 15 23:11:10.799802 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 23:11:10.799812 kernel: NET: Registered PF_INET protocol family
Jul 15 23:11:10.799819 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:11:10.799826 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:11:10.799833 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:11:10.799840 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:11:10.799847 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:11:10.799854 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:11:10.799861 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:11:10.799868 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:11:10.799877 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:11:10.799883 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:11:10.799890 kernel: kvm [1]: HYP mode not available
Jul 15 23:11:10.799897 kernel: Initialise system trusted keyrings
Jul 15 23:11:10.799904 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:11:10.799911 kernel: Key type asymmetric registered
Jul 15 23:11:10.799918 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:11:10.799925 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 23:11:10.799932 kernel: io scheduler mq-deadline registered
Jul 15 23:11:10.799941 kernel: io scheduler kyber registered
Jul 15 23:11:10.799948 kernel: io scheduler bfq registered
Jul 15 23:11:10.799955 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 23:11:10.799962 kernel: ACPI: button: Power Button [PWRB]
Jul 15 23:11:10.799969 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 23:11:10.800044 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 15 23:11:10.800056 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:11:10.800064 kernel: thunder_xcv, ver 1.0
Jul 15 23:11:10.800071 kernel: thunder_bgx, ver 1.0
Jul 15 23:11:10.800080 kernel: nicpf, ver 1.0
Jul 15 23:11:10.800087 kernel: nicvf, ver 1.0
Jul 15 23:11:10.800164 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 23:11:10.800221 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T23:11:10 UTC (1752621070)
Jul 15 23:11:10.800230 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 23:11:10.800238 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 15 23:11:10.800245 kernel: watchdog: NMI not fully supported
Jul 15 23:11:10.800252 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 23:11:10.800261 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:11:10.800268 kernel: Segment Routing with IPv6
Jul 15 23:11:10.800275 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:11:10.800282 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:11:10.800288 kernel: Key type dns_resolver registered
Jul 15 23:11:10.800295 kernel: registered taskstats version 1
Jul 15 23:11:10.800302 kernel: Loading compiled-in X.509 certificates
Jul 15 23:11:10.800309 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd'
Jul 15 23:11:10.800316 kernel: Demotion targets for Node 0: null
Jul 15 23:11:10.800324 kernel: Key type .fscrypt registered
Jul 15 23:11:10.800332 kernel: Key type fscrypt-provisioning registered
Jul 15 23:11:10.800339 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:11:10.800346 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:11:10.800352 kernel: ima: No architecture policies found
Jul 15 23:11:10.800359 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 23:11:10.800366 kernel: clk: Disabling unused clocks
Jul 15 23:11:10.800373 kernel: PM: genpd: Disabling unused power domains
Jul 15 23:11:10.800380 kernel: Warning: unable to open an initial console.
Jul 15 23:11:10.800389 kernel: Freeing unused kernel memory: 39488K
Jul 15 23:11:10.800396 kernel: Run /init as init process
Jul 15 23:11:10.800402 kernel: with arguments:
Jul 15 23:11:10.800409 kernel: /init
Jul 15 23:11:10.800416 kernel: with environment:
Jul 15 23:11:10.800423 kernel: HOME=/
Jul 15 23:11:10.800430 kernel: TERM=linux
Jul 15 23:11:10.800437 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:11:10.800444 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:11:10.800456 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:11:10.800464 systemd[1]: Detected virtualization kvm. Jul 15 23:11:10.800471 systemd[1]: Detected architecture arm64. Jul 15 23:11:10.800478 systemd[1]: Running in initrd. Jul 15 23:11:10.800485 systemd[1]: No hostname configured, using default hostname. Jul 15 23:11:10.800493 systemd[1]: Hostname set to . Jul 15 23:11:10.800500 systemd[1]: Initializing machine ID from VM UUID. Jul 15 23:11:10.800509 systemd[1]: Queued start job for default target initrd.target. Jul 15 23:11:10.800516 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:11:10.800524 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:11:10.800532 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 23:11:10.800539 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:11:10.800546 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 23:11:10.800555 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 23:11:10.800565 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 23:11:10.800572 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 23:11:10.800580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:11:10.800587 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:11:10.800595 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:11:10.800602 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:11:10.800610 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:11:10.800617 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:11:10.800626 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:11:10.800633 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:11:10.800641 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 23:11:10.800649 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 23:11:10.800656 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:11:10.800667 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:11:10.800675 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:11:10.800683 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:11:10.800690 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 23:11:10.800699 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:11:10.800707 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jul 15 23:11:10.800715 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 23:11:10.800722 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 23:11:10.800729 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:11:10.800737 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:11:10.800745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:11:10.800752 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 23:11:10.800770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:11:10.800779 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 23:11:10.800787 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 23:11:10.800814 systemd-journald[243]: Collecting audit messages is disabled. Jul 15 23:11:10.800835 systemd-journald[243]: Journal started Jul 15 23:11:10.800854 systemd-journald[243]: Runtime Journal (/run/log/journal/d953fbbb1aa24a2082b82008d7ff2a81) is 6M, max 48.5M, 42.4M free. Jul 15 23:11:10.791297 systemd-modules-load[247]: Inserted module 'overlay' Jul 15 23:11:10.805694 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:11:10.807419 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:11:10.809047 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 23:11:10.812058 kernel: Bridge firewalling registered Jul 15 23:11:10.810266 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 23:11:10.811631 systemd-modules-load[247]: Inserted module 'br_netfilter' Jul 15 23:11:10.811894 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:11:10.812988 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:11:10.815004 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:11:10.819399 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:11:10.822174 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:11:10.822665 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 23:11:10.827372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:11:10.832107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:11:10.835964 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 23:11:10.837187 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:11:10.846157 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:11:10.848099 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 15 23:11:10.865036 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578 Jul 15 23:11:10.880474 systemd-resolved[284]: Positive Trust Anchors: Jul 15 23:11:10.880490 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:11:10.880522 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:11:10.885300 systemd-resolved[284]: Defaulting to hostname 'linux'. Jul 15 23:11:10.886464 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:11:10.889057 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:11:10.962481 kernel: SCSI subsystem initialized Jul 15 23:11:10.968040 kernel: Loading iSCSI transport class v2.0-870. Jul 15 23:11:10.976066 kernel: iscsi: registered transport (tcp) Jul 15 23:11:10.989044 kernel: iscsi: registered transport (qla4xxx) Jul 15 23:11:10.989073 kernel: QLogic iSCSI HBA Driver Jul 15 23:11:11.006775 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:11:11.026818 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:11:11.029513 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:11:11.081013 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 23:11:11.083349 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 23:11:11.148049 kernel: raid6: neonx8 gen() 15723 MB/s Jul 15 23:11:11.165035 kernel: raid6: neonx4 gen() 15735 MB/s Jul 15 23:11:11.182033 kernel: raid6: neonx2 gen() 13195 MB/s Jul 15 23:11:11.199035 kernel: raid6: neonx1 gen() 10319 MB/s Jul 15 23:11:11.216037 kernel: raid6: int64x8 gen() 6902 MB/s Jul 15 23:11:11.233034 kernel: raid6: int64x4 gen() 7344 MB/s Jul 15 23:11:11.250038 kernel: raid6: int64x2 gen() 5958 MB/s Jul 15 23:11:11.267039 kernel: raid6: int64x1 gen() 5030 MB/s Jul 15 23:11:11.267055 kernel: raid6: using algorithm neonx4 gen() 15735 MB/s Jul 15 23:11:11.284082 kernel: raid6: .... xor() 12247 MB/s, rmw enabled Jul 15 23:11:11.284098 kernel: raid6: using neon recovery algorithm Jul 15 23:11:11.289353 kernel: xor: measuring software checksum speed Jul 15 23:11:11.289375 kernel: 8regs : 21590 MB/sec Jul 15 23:11:11.290486 kernel: 32regs : 21693 MB/sec Jul 15 23:11:11.290502 kernel: arm64_neon : 28196 MB/sec Jul 15 23:11:11.290510 kernel: xor: using function: arm64_neon (28196 MB/sec) Jul 15 23:11:11.349053 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 23:11:11.355095 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 15 23:11:11.357195 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:11:11.386194 systemd-udevd[496]: Using default interface naming scheme 'v255'. Jul 15 23:11:11.390259 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:11:11.391926 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 23:11:11.416302 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation Jul 15 23:11:11.441000 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:11:11.445146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:11:11.502724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:11:11.505612 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 23:11:11.549234 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 15 23:11:11.549383 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 23:11:11.555144 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:11:11.558061 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 23:11:11.558091 kernel: GPT:9289727 != 19775487 Jul 15 23:11:11.558102 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 23:11:11.558111 kernel: GPT:9289727 != 19775487 Jul 15 23:11:11.558119 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 23:11:11.558127 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 23:11:11.555256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:11:11.560149 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:11:11.562317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:11:11.577552 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 15 23:11:11.588414 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 23:11:11.590158 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:11:11.603801 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 15 23:11:11.611180 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 23:11:11.618307 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 15 23:11:11.620785 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 15 23:11:11.621794 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:11:11.623518 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:11:11.625126 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:11:11.627405 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 23:11:11.628949 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 23:11:11.649662 disk-uuid[588]: Primary Header is updated. Jul 15 23:11:11.649662 disk-uuid[588]: Secondary Entries is updated. Jul 15 23:11:11.649662 disk-uuid[588]: Secondary Header is updated. 
Jul 15 23:11:11.654867 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:11:11.657631 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 23:11:12.663069 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 23:11:12.663129 disk-uuid[593]: The operation has completed successfully. Jul 15 23:11:12.693190 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 23:11:12.693294 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 23:11:12.717764 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 23:11:12.748096 sh[609]: Success Jul 15 23:11:12.767104 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 23:11:12.767149 kernel: device-mapper: uevent: version 1.0.3 Jul 15 23:11:12.768044 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 23:11:12.787098 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 15 23:11:12.812464 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 23:11:12.815204 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 23:11:12.827150 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 23:11:12.834499 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 23:11:12.834528 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (622) Jul 15 23:11:12.835802 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b Jul 15 23:11:12.835834 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:11:12.837075 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 23:11:12.841551 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 23:11:12.842712 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:11:12.843821 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 23:11:12.844722 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 23:11:12.847110 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 23:11:12.873205 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (651) Jul 15 23:11:12.875247 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:11:12.875286 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:11:12.875303 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 23:11:12.882047 kernel: BTRFS info (device vda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:11:12.883559 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 23:11:12.885661 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 23:11:12.983913 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:11:12.987017 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 15 23:11:13.046068 systemd-networkd[798]: lo: Link UP Jul 15 23:11:13.046079 systemd-networkd[798]: lo: Gained carrier Jul 15 23:11:13.046938 systemd-networkd[798]: Enumeration completed Jul 15 23:11:13.047080 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:11:13.047523 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:11:13.047527 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:11:13.048372 systemd[1]: Reached target network.target - Network. Jul 15 23:11:13.050196 systemd-networkd[798]: eth0: Link UP Jul 15 23:11:13.050199 systemd-networkd[798]: eth0: Gained carrier Jul 15 23:11:13.050208 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:11:13.079118 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 23:11:13.155827 ignition[696]: Ignition 2.21.0 Jul 15 23:11:13.155841 ignition[696]: Stage: fetch-offline Jul 15 23:11:13.155882 ignition[696]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:13.155890 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:11:13.156111 ignition[696]: parsed url from cmdline: "" Jul 15 23:11:13.156115 ignition[696]: no config URL provided Jul 15 23:11:13.156120 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:11:13.156127 ignition[696]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:11:13.156160 ignition[696]: op(1): [started] loading QEMU firmware config module Jul 15 23:11:13.156164 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 23:11:13.164874 ignition[696]: op(1): [finished] loading QEMU firmware config module Jul 15 23:11:13.185881 ignition[696]: parsing config with SHA512: f3bf6004d7a651b6c83a454ade54ec048fa27f5b9cacb8b887a8bb47414fa07a99ba3a36d04dba5d8d2521157decbd912e58d3613c507eb66d2e2f6b31fda3bd Jul 15 23:11:13.191080 unknown[696]: fetched base config from "system" Jul 15 23:11:13.191090 unknown[696]: fetched user config from "qemu" Jul 15 23:11:13.191534 ignition[696]: fetch-offline: fetch-offline passed Jul 15 23:11:13.191592 ignition[696]: Ignition finished successfully Jul 15 23:11:13.194433 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:11:13.195536 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 23:11:13.196376 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 23:11:13.237237 ignition[812]: Ignition 2.21.0 Jul 15 23:11:13.237256 ignition[812]: Stage: kargs Jul 15 23:11:13.237391 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:13.237402 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:11:13.238102 ignition[812]: kargs: kargs passed Jul 15 23:11:13.240418 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 23:11:13.238147 ignition[812]: Ignition finished successfully Jul 15 23:11:13.243136 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 15 23:11:13.268516 ignition[820]: Ignition 2.21.0 Jul 15 23:11:13.268533 ignition[820]: Stage: disks Jul 15 23:11:13.268696 ignition[820]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:13.268705 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:11:13.271977 ignition[820]: disks: disks passed Jul 15 23:11:13.272053 ignition[820]: Ignition finished successfully Jul 15 23:11:13.274054 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 23:11:13.274991 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 23:11:13.276165 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 23:11:13.277680 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:11:13.279127 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:11:13.280521 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:11:13.282244 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 23:11:13.322319 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 23:11:13.330300 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 23:11:13.333354 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 23:11:13.430066 kernel: EXT4-fs (vda9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none. Jul 15 23:11:13.430433 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 23:11:13.431551 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 23:11:13.436499 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:11:13.450304 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 23:11:13.451338 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 23:11:13.451406 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 23:11:13.451436 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:11:13.465312 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (839) Jul 15 23:11:13.465345 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:11:13.465380 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:11:13.465393 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 23:11:13.458916 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 23:11:13.465253 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 23:11:13.468063 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:11:13.541139 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 23:11:13.544472 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Jul 15 23:11:13.548009 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 23:11:13.552821 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 23:11:13.637055 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 23:11:13.638795 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 15 23:11:13.640201 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 23:11:13.663064 kernel: BTRFS info (device vda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:11:13.685401 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 23:11:13.690049 ignition[953]: INFO : Ignition 2.21.0
Jul 15 23:11:13.690049 ignition[953]: INFO : Stage: mount
Jul 15 23:11:13.691330 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:11:13.691330 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:11:13.692813 ignition[953]: INFO : mount: mount passed
Jul 15 23:11:13.692813 ignition[953]: INFO : Ignition finished successfully
Jul 15 23:11:13.693842 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 23:11:13.695781 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 23:11:13.833785 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 23:11:13.835278 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:11:13.868066 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (966)
Jul 15 23:11:13.870469 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:11:13.870497 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:11:13.870509 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:11:13.873044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:11:13.902389 ignition[983]: INFO : Ignition 2.21.0
Jul 15 23:11:13.902389 ignition[983]: INFO : Stage: files
Jul 15 23:11:13.904524 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:11:13.904524 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:11:13.906077 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 23:11:13.906979 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 23:11:13.906979 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 23:11:13.909191 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 23:11:13.910346 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 23:11:13.910346 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 23:11:13.909766 unknown[983]: wrote ssh authorized keys file for user: core
Jul 15 23:11:13.913162 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 15 23:11:13.913162 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 15 23:11:13.965934 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 23:11:14.092630 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 15 23:11:14.092630 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 23:11:14.092630 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 23:11:14.092630 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:11:14.098578 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 15 23:11:14.336224 systemd-networkd[798]: eth0: Gained IPv6LL
Jul 15 23:11:14.615406 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 23:11:15.099768 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:11:15.099768 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 23:11:15.103663 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:11:15.103663 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:11:15.103663 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 23:11:15.103663 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 15 23:11:15.103663 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 23:11:15.103663 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 23:11:15.103663 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 15 23:11:15.103663 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 15 23:11:15.120212 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 23:11:15.123545 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 23:11:15.126270 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 15 23:11:15.126270 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 23:11:15.126270 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 23:11:15.126270 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:11:15.126270 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:11:15.126270 ignition[983]: INFO : files: files passed
Jul 15 23:11:15.126270 ignition[983]: INFO : Ignition finished successfully
Jul 15 23:11:15.126676 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 23:11:15.129012 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 23:11:15.131533 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 23:11:15.143500 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 15 23:11:15.143999 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 23:11:15.145505 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 23:11:15.147940 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:11:15.147940 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:11:15.150479 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:11:15.150830 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:11:15.152788 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 23:11:15.154728 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 23:11:15.200047 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 23:11:15.200179 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 23:11:15.201797 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 23:11:15.203195 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 23:11:15.204491 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 23:11:15.205284 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 23:11:15.228968 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:11:15.231210 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 23:11:15.251188 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:11:15.252855 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:11:15.253805 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 23:11:15.255086 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 23:11:15.255215 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:11:15.257057 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 23:11:15.258514 systemd[1]: Stopped target basic.target - Basic System. Jul 15 23:11:15.259693 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 23:11:15.260917 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:11:15.262342 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 23:11:15.263785 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:11:15.265157 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 23:11:15.266490 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:11:15.267907 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 23:11:15.269320 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 23:11:15.270575 systemd[1]: Stopped target swap.target - Swaps. Jul 15 23:11:15.271819 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 23:11:15.271951 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:11:15.273602 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:11:15.274927 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:11:15.276291 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 23:11:15.277637 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:11:15.278606 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 23:11:15.278722 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 23:11:15.280729 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 23:11:15.280856 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:11:15.282303 systemd[1]: Stopped target paths.target - Path Units. Jul 15 23:11:15.283416 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 23:11:15.284067 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:11:15.285009 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 23:11:15.286101 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 23:11:15.287427 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 23:11:15.287510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:11:15.288949 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 23:11:15.289032 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:11:15.290151 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 23:11:15.290262 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:11:15.291553 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 23:11:15.291656 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jul 15 23:11:15.293576 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 23:11:15.294349 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 23:11:15.294469 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:11:15.296603 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 23:11:15.297934 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 23:11:15.298053 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:11:15.299369 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 23:11:15.299471 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:11:15.304161 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 23:11:15.308172 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 23:11:15.316405 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 23:11:15.322429 ignition[1040]: INFO : Ignition 2.21.0 Jul 15 23:11:15.322429 ignition[1040]: INFO : Stage: umount Jul 15 23:11:15.324925 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:11:15.324925 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:11:15.324925 ignition[1040]: INFO : umount: umount passed Jul 15 23:11:15.324925 ignition[1040]: INFO : Ignition finished successfully Jul 15 23:11:15.326554 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 23:11:15.326649 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 23:11:15.327957 systemd[1]: Stopped target network.target - Network. Jul 15 23:11:15.329206 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 23:11:15.329266 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 23:11:15.330465 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 23:11:15.330502 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 23:11:15.332680 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 23:11:15.332731 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 23:11:15.333541 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 23:11:15.333577 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 23:11:15.334454 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 23:11:15.342136 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 23:11:15.346222 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 23:11:15.348088 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 23:11:15.350321 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 23:11:15.350520 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 23:11:15.350599 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 23:11:15.354045 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 23:11:15.354795 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 23:11:15.356327 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 23:11:15.356363 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jul 15 23:11:15.358619 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 23:11:15.359760 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 23:11:15.359818 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:11:15.361433 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:11:15.361487 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:11:15.364395 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 23:11:15.364439 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 23:11:15.365738 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 23:11:15.365785 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:11:15.367859 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:11:15.372588 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 23:11:15.372660 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:11:15.386287 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 23:11:15.387080 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 23:11:15.388150 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 23:11:15.388278 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:11:15.389982 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 23:11:15.390067 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 23:11:15.393112 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 23:11:15.393246 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 23:11:15.395605 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 23:11:15.395647 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:11:15.397012 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 23:11:15.397068 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:11:15.399079 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 23:11:15.399123 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 23:11:15.400966 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 23:11:15.401010 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:11:15.403121 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 23:11:15.403168 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 23:11:15.405491 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 23:11:15.406784 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 23:11:15.406832 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:11:15.409145 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 23:11:15.409185 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:11:15.411753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 15 23:11:15.411792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:11:15.415242 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 23:11:15.415298 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 23:11:15.415330 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:11:15.422995 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 23:11:15.423113 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 23:11:15.424713 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 23:11:15.426568 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 23:11:15.455088 systemd[1]: Switching root. Jul 15 23:11:15.493899 systemd-journald[243]: Journal stopped Jul 15 23:11:16.363248 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Jul 15 23:11:16.363302 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 23:11:16.363314 kernel: SELinux: policy capability open_perms=1 Jul 15 23:11:16.363327 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 23:11:16.363339 kernel: SELinux: policy capability always_check_network=0 Jul 15 23:11:16.363349 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 23:11:16.363362 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 23:11:16.363371 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 23:11:16.363382 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 23:11:16.363392 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 23:11:16.363402 kernel: audit: type=1403 audit(1752621075.794:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 23:11:16.363415 systemd[1]: Successfully loaded SELinux policy in 47.697ms. Jul 15 23:11:16.363434 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.495ms. Jul 15 23:11:16.363445 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:11:16.363456 systemd[1]: Detected virtualization kvm. Jul 15 23:11:16.363467 systemd[1]: Detected architecture arm64. Jul 15 23:11:16.363477 systemd[1]: Detected first boot. Jul 15 23:11:16.363487 systemd[1]: Initializing machine ID from VM UUID. Jul 15 23:11:16.363497 zram_generator::config[1085]: No configuration found. Jul 15 23:11:16.363510 kernel: NET: Registered PF_VSOCK protocol family Jul 15 23:11:16.363520 systemd[1]: Populated /etc with preset unit settings. Jul 15 23:11:16.363530 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 23:11:16.363540 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 23:11:16.363552 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 23:11:16.363562 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 23:11:16.363571 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 23:11:16.363581 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Jul 15 23:11:16.363591 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 23:11:16.363601 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 23:11:16.363611 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 23:11:16.363623 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 23:11:16.363633 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 23:11:16.363645 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 23:11:16.363655 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:11:16.363665 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:11:16.363675 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 23:11:16.363685 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 23:11:16.363695 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 23:11:16.363705 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:11:16.363715 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 15 23:11:16.363727 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:11:16.363737 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:11:16.363756 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 23:11:16.363768 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 23:11:16.363779 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 23:11:16.363788 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 23:11:16.363798 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:11:16.363809 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:11:16.363821 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:11:16.363831 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:11:16.363841 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 23:11:16.363851 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 23:11:16.363862 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 23:11:16.363871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:11:16.363881 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:11:16.363891 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:11:16.363901 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 23:11:16.363911 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 23:11:16.363922 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 23:11:16.363931 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 23:11:16.363941 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jul 15 23:11:16.363952 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 23:11:16.363962 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 23:11:16.363972 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 23:11:16.363982 systemd[1]: Reached target machines.target - Containers. Jul 15 23:11:16.363992 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 23:11:16.364003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:11:16.364014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:11:16.364035 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 23:11:16.364048 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:11:16.364058 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:11:16.364068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:11:16.364078 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 23:11:16.364088 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:11:16.364101 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 23:11:16.364112 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 23:11:16.364122 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 23:11:16.364131 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 23:11:16.364142 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 23:11:16.364152 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:11:16.364163 kernel: fuse: init (API version 7.41) Jul 15 23:11:16.364172 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:11:16.364182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:11:16.364193 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:11:16.364204 kernel: ACPI: bus type drm_connector registered Jul 15 23:11:16.364213 kernel: loop: module loaded Jul 15 23:11:16.364222 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 23:11:16.364232 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 23:11:16.364242 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:11:16.364254 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 23:11:16.364265 systemd[1]: Stopped verity-setup.service. Jul 15 23:11:16.364275 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 23:11:16.364285 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 23:11:16.364294 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 23:11:16.364304 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jul 15 23:11:16.364314 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 23:11:16.364323 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 23:11:16.364360 systemd-journald[1161]: Collecting audit messages is disabled. Jul 15 23:11:16.364380 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 23:11:16.364390 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:11:16.364402 systemd-journald[1161]: Journal started Jul 15 23:11:16.364425 systemd-journald[1161]: Runtime Journal (/run/log/journal/d953fbbb1aa24a2082b82008d7ff2a81) is 6M, max 48.5M, 42.4M free. Jul 15 23:11:16.169483 systemd[1]: Queued start job for default target multi-user.target. Jul 15 23:11:16.187217 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 15 23:11:16.187606 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 23:11:16.367438 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:11:16.368819 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 23:11:16.369175 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 23:11:16.370685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:11:16.370860 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:11:16.371988 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:11:16.373201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:11:16.374519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:11:16.374699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:11:16.375885 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 23:11:16.376075 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 23:11:16.377420 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:11:16.377576 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:11:16.379087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:11:16.380211 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:11:16.381460 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 23:11:16.382871 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 23:11:16.394401 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:11:16.396661 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 23:11:16.398476 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 23:11:16.399306 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 23:11:16.399333 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:11:16.400994 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 23:11:16.415874 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 23:11:16.417190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
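systemd-journald starts here with a 6M runtime journal under /run/log/journal and, a few entries below, that journal is flushed to persistent storage under /var/log/journal. To confirm after boot what survived the initrd-to-rootfs hand-off, standard journalctl queries (not part of this log) are enough, for example:

    journalctl --disk-usage                      # total space used by the persistent journal
    journalctl -b -u ignition-files.service      # initrd-stage Ignition messages from this same boot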
Jul 15 23:11:16.421489 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 23:11:16.423273 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 23:11:16.424161 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:11:16.427213 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 23:11:16.428160 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:11:16.429226 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:11:16.432060 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 23:11:16.433563 systemd-journald[1161]: Time spent on flushing to /var/log/journal/d953fbbb1aa24a2082b82008d7ff2a81 is 14.570ms for 881 entries. Jul 15 23:11:16.433563 systemd-journald[1161]: System Journal (/var/log/journal/d953fbbb1aa24a2082b82008d7ff2a81) is 8M, max 195.6M, 187.6M free. Jul 15 23:11:16.457363 systemd-journald[1161]: Received client request to flush runtime journal. Jul 15 23:11:16.457398 kernel: loop0: detected capacity change from 0 to 207008 Jul 15 23:11:16.434191 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 23:11:16.439544 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:11:16.440850 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 23:11:16.441917 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 23:11:16.449765 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 23:11:16.450815 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 23:11:16.453280 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 23:11:16.459338 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 23:11:16.461015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:11:16.476148 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 23:11:16.487545 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 23:11:16.490678 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 23:11:16.493701 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:11:16.498047 kernel: loop1: detected capacity change from 0 to 138376 Jul 15 23:11:16.522897 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Jul 15 23:11:16.522914 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Jul 15 23:11:16.527665 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:11:16.531128 kernel: loop2: detected capacity change from 0 to 107312 Jul 15 23:11:16.562072 kernel: loop3: detected capacity change from 0 to 207008 Jul 15 23:11:16.569068 kernel: loop4: detected capacity change from 0 to 138376 Jul 15 23:11:16.575042 kernel: loop5: detected capacity change from 0 to 107312 Jul 15 23:11:16.579279 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 15 23:11:16.579653 (sd-merge)[1226]: Merged extensions into '/usr'. 
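sd-merge reports the three system extensions it found, containerd-flatcar, docker-flatcar and kubernetes (the last being the image Ignition linked into /etc/extensions earlier), and overlays them onto /usr. On the running system the merge can be inspected and redone with the stock systemd-sysext verbs; these commands are not in the log, just the usual way to examine it:

    systemd-sysext status      # lists the hierarchies and which extensions are merged into each
    systemd-sysext refresh     # unmerge and re-merge after images under /etc/extensions change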
Jul 15 23:11:16.583123 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 23:11:16.583234 systemd[1]: Reloading... Jul 15 23:11:16.648059 zram_generator::config[1251]: No configuration found. Jul 15 23:11:16.696541 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 23:11:16.728531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:11:16.792246 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 23:11:16.792497 systemd[1]: Reloading finished in 208 ms. Jul 15 23:11:16.819739 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 23:11:16.821101 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 23:11:16.836518 systemd[1]: Starting ensure-sysext.service... Jul 15 23:11:16.838424 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:11:16.857676 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Jul 15 23:11:16.857690 systemd[1]: Reloading... Jul 15 23:11:16.865869 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 23:11:16.865903 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 23:11:16.866182 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 23:11:16.866396 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 23:11:16.867120 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 23:11:16.867371 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jul 15 23:11:16.867417 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jul 15 23:11:16.894310 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:11:16.894323 systemd-tmpfiles[1287]: Skipping /boot Jul 15 23:11:16.903929 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:11:16.903943 systemd-tmpfiles[1287]: Skipping /boot Jul 15 23:11:16.911052 zram_generator::config[1317]: No configuration found. Jul 15 23:11:16.978531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:11:17.041057 systemd[1]: Reloading finished in 183 ms. Jul 15 23:11:17.062420 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 23:11:17.076567 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:11:17.083750 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:11:17.086198 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 23:11:17.111450 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 23:11:17.116283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 15 23:11:17.118676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:11:17.120664 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 23:11:17.125550 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:11:17.128212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:11:17.136957 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:11:17.142102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:11:17.143035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:11:17.143151 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:11:17.144283 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 23:11:17.145840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:11:17.145995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:11:17.147393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:11:17.147580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:11:17.154440 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:11:17.158254 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 23:11:17.161004 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 23:11:17.163416 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 23:11:17.165439 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:11:17.165790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:11:17.168800 systemd-udevd[1358]: Using default interface naming scheme 'v255'. Jul 15 23:11:17.170649 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 23:11:17.176787 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 23:11:17.180154 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:11:17.181698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:11:17.183767 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:11:17.191636 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:11:17.192438 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:11:17.192553 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 15 23:11:17.192658 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 23:11:17.194896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:11:17.195073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:11:17.198268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:11:17.198414 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:11:17.200253 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:11:17.201918 augenrules[1391]: No rules Jul 15 23:11:17.202185 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:11:17.202342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:11:17.203584 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:11:17.203773 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:11:17.210875 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 23:11:17.216088 systemd[1]: Finished ensure-sysext.service. Jul 15 23:11:17.233464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:11:17.237198 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:11:17.240611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:11:17.242658 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:11:17.242704 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:11:17.245595 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:11:17.247111 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:11:17.255509 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 23:11:17.256357 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 23:11:17.256762 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:11:17.257073 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:11:17.260480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:11:17.260681 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:11:17.264008 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 15 23:11:17.264320 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:11:17.333583 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 23:11:17.337186 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
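The OEM partition is addressed purely by its filesystem label, which systemd escapes into the dev-disk-by\x2dlabel-OEM.device unit before fscking and mounting it. Resolving that label back to a block device by hand uses ordinary util-linux tooling (not shown in the log):

    blkid -L OEM               # print the device node carrying the filesystem label OEM
    ls -l /dev/disk/by-label/  # the udev-maintained symlinks these device units are generated from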
Jul 15 23:11:17.338833 systemd-resolved[1353]: Positive Trust Anchors: Jul 15 23:11:17.338850 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:11:17.338881 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:11:17.347290 systemd-resolved[1353]: Defaulting to hostname 'linux'. Jul 15 23:11:17.348733 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:11:17.350250 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:11:17.367637 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 23:11:17.394072 systemd-networkd[1435]: lo: Link UP Jul 15 23:11:17.394085 systemd-networkd[1435]: lo: Gained carrier Jul 15 23:11:17.394908 systemd-networkd[1435]: Enumeration completed Jul 15 23:11:17.394999 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:11:17.395926 systemd[1]: Reached target network.target - Network. Jul 15 23:11:17.398258 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 23:11:17.399299 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:11:17.399309 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:11:17.400211 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 23:11:17.401603 systemd-networkd[1435]: eth0: Link UP Jul 15 23:11:17.401717 systemd-networkd[1435]: eth0: Gained carrier Jul 15 23:11:17.401731 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:11:17.405269 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 23:11:17.406291 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:11:17.407129 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 23:11:17.408075 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 23:11:17.408936 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 23:11:17.411066 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 23:11:17.411091 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:11:17.411786 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 23:11:17.412661 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 23:11:17.414208 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jul 15 23:11:17.415161 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:11:17.416696 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 23:11:17.418878 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 23:11:17.422309 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 23:11:17.423066 systemd-networkd[1435]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 23:11:17.423396 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 23:11:17.424345 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 23:11:17.428731 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 23:11:17.429110 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Jul 15 23:11:17.430194 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 23:11:17.431843 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 23:11:17.432701 systemd-timesyncd[1436]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 23:11:17.432766 systemd-timesyncd[1436]: Initial clock synchronization to Tue 2025-07-15 23:11:17.394065 UTC. Jul 15 23:11:17.432931 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:11:17.433655 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:11:17.434359 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:11:17.434389 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:11:17.445288 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 23:11:17.447534 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 23:11:17.449361 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 23:11:17.457261 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 23:11:17.460200 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 23:11:17.460942 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 23:11:17.462063 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 23:11:17.463900 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 23:11:17.465636 jq[1473]: false Jul 15 23:11:17.466324 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 23:11:17.468338 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 23:11:17.472147 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 23:11:17.474182 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 23:11:17.474556 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 23:11:17.475100 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 23:11:17.478237 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
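eth0 is configured by the stock /usr/lib/systemd/network/zz-default.network shipped with the OS and obtains 10.0.0.54/16 with a 10.0.0.1 gateway over DHCP; systemd-timesyncd then syncs against the same 10.0.0.1. The contents of zz-default.network are not reproduced in the log; a minimal .network unit with the same observable effect (illustrative only, not necessarily what Flatcar ships) is:

    [Match]
    Name=eth*

    [Network]
    DHCP=yes

Running networkctl status eth0 would show both the lease and which .network file won the match.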
Jul 15 23:11:17.481826 extend-filesystems[1474]: Found /dev/vda6 Jul 15 23:11:17.485125 extend-filesystems[1474]: Found /dev/vda9 Jul 15 23:11:17.486489 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 23:11:17.488770 jq[1490]: true Jul 15 23:11:17.489073 extend-filesystems[1474]: Checking size of /dev/vda9 Jul 15 23:11:17.491092 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 23:11:17.493398 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 23:11:17.493569 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 23:11:17.493827 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 23:11:17.493978 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 23:11:17.496520 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 23:11:17.498084 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 23:11:17.527250 extend-filesystems[1474]: Resized partition /dev/vda9 Jul 15 23:11:17.527616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:11:17.528313 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 23:11:17.531423 update_engine[1485]: I20250715 23:11:17.530104 1485 main.cc:92] Flatcar Update Engine starting Jul 15 23:11:17.531630 jq[1499]: true Jul 15 23:11:17.542017 extend-filesystems[1515]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 23:11:17.544177 tar[1497]: linux-arm64/LICENSE Jul 15 23:11:17.544177 tar[1497]: linux-arm64/helm Jul 15 23:11:17.553577 dbus-daemon[1471]: [system] SELinux support is enabled Jul 15 23:11:17.554402 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 23:11:17.560035 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 23:11:17.560488 update_engine[1485]: I20250715 23:11:17.560333 1485 update_check_scheduler.cc:74] Next update check in 3m39s Jul 15 23:11:17.561364 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 23:11:17.561400 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 23:11:17.562475 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 23:11:17.562497 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 23:11:17.565462 systemd[1]: Started update-engine.service - Update Engine. Jul 15 23:11:17.570729 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 23:11:17.579662 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 23:11:17.582653 systemd-logind[1483]: New seat seat0. Jul 15 23:11:17.586116 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 23:11:17.588767 systemd[1]: Started systemd-logind.service - User Login Management. 
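extend-filesystems walks the partitions it finds (/dev/vda6, /dev/vda9) and the kernel confirms the ext4 root on /dev/vda9 being grown online from 553472 to 1864699 4k blocks, the usual first-boot expansion of the root filesystem to fill the disk. A rough manual equivalent, only a sketch of what the service automates (growpart is from cloud-utils and is not necessarily what extend-filesystems itself calls), would be:

    growpart /dev/vda 9     # extend partition 9 to the end of the disk, if it does not already reach it
    resize2fs /dev/vda9     # grow the mounted ext4 filesystem into the enlarged partition, online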
Jul 15 23:11:17.598821 extend-filesystems[1515]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 23:11:17.598821 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 23:11:17.598821 extend-filesystems[1515]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 23:11:17.606013 extend-filesystems[1474]: Resized filesystem in /dev/vda9 Jul 15 23:11:17.600412 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 23:11:17.600896 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 23:11:17.635360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:11:17.646639 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:11:17.650878 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 23:11:17.652919 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 23:11:17.672522 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 23:11:17.764077 containerd[1512]: time="2025-07-15T23:11:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 23:11:17.764891 containerd[1512]: time="2025-07-15T23:11:17.764855960Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 15 23:11:17.775939 containerd[1512]: time="2025-07-15T23:11:17.775894360Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="49.48µs" Jul 15 23:11:17.775939 containerd[1512]: time="2025-07-15T23:11:17.775935360Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 23:11:17.776028 containerd[1512]: time="2025-07-15T23:11:17.775954120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 23:11:17.776217 containerd[1512]: time="2025-07-15T23:11:17.776192480Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 23:11:17.776241 containerd[1512]: time="2025-07-15T23:11:17.776219480Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 23:11:17.776259 containerd[1512]: time="2025-07-15T23:11:17.776247720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:11:17.776321 containerd[1512]: time="2025-07-15T23:11:17.776302360Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:11:17.776321 containerd[1512]: time="2025-07-15T23:11:17.776317560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:11:17.776694 containerd[1512]: time="2025-07-15T23:11:17.776661200Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:11:17.776722 containerd[1512]: time="2025-07-15T23:11:17.776693120Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:11:17.776722 containerd[1512]: time="2025-07-15T23:11:17.776706720Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:11:17.776722 containerd[1512]: time="2025-07-15T23:11:17.776715000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 23:11:17.776821 containerd[1512]: time="2025-07-15T23:11:17.776802000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 23:11:17.777161 containerd[1512]: time="2025-07-15T23:11:17.777080200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:11:17.777199 containerd[1512]: time="2025-07-15T23:11:17.777181280Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:11:17.777229 containerd[1512]: time="2025-07-15T23:11:17.777198080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 23:11:17.778035 containerd[1512]: time="2025-07-15T23:11:17.777990400Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 23:11:17.778390 containerd[1512]: time="2025-07-15T23:11:17.778367280Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 23:11:17.778469 containerd[1512]: time="2025-07-15T23:11:17.778453080Z" level=info msg="metadata content store policy set" policy=shared Jul 15 23:11:17.781800 containerd[1512]: time="2025-07-15T23:11:17.781763360Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 23:11:17.781843 containerd[1512]: time="2025-07-15T23:11:17.781817720Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 23:11:17.781843 containerd[1512]: time="2025-07-15T23:11:17.781838760Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 23:11:17.781900 containerd[1512]: time="2025-07-15T23:11:17.781855640Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 23:11:17.781900 containerd[1512]: time="2025-07-15T23:11:17.781874280Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 23:11:17.781900 containerd[1512]: time="2025-07-15T23:11:17.781891520Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 23:11:17.781949 containerd[1512]: time="2025-07-15T23:11:17.781904600Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 23:11:17.781949 containerd[1512]: time="2025-07-15T23:11:17.781916400Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 23:11:17.781949 containerd[1512]: time="2025-07-15T23:11:17.781927720Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 23:11:17.781949 containerd[1512]: time="2025-07-15T23:11:17.781938000Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 23:11:17.781949 containerd[1512]: time="2025-07-15T23:11:17.781947760Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 23:11:17.782034 containerd[1512]: time="2025-07-15T23:11:17.781960760Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 23:11:17.782225 containerd[1512]: time="2025-07-15T23:11:17.782198680Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 23:11:17.782256 containerd[1512]: time="2025-07-15T23:11:17.782234000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 23:11:17.782274 containerd[1512]: time="2025-07-15T23:11:17.782255200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 23:11:17.782274 containerd[1512]: time="2025-07-15T23:11:17.782266440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 23:11:17.782305 containerd[1512]: time="2025-07-15T23:11:17.782276520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 23:11:17.782305 containerd[1512]: time="2025-07-15T23:11:17.782288520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 23:11:17.782305 containerd[1512]: time="2025-07-15T23:11:17.782300040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 23:11:17.782355 containerd[1512]: time="2025-07-15T23:11:17.782309880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 23:11:17.782355 containerd[1512]: time="2025-07-15T23:11:17.782321840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 23:11:17.782355 containerd[1512]: time="2025-07-15T23:11:17.782332760Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 23:11:17.782355 containerd[1512]: time="2025-07-15T23:11:17.782343080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 23:11:17.783870 containerd[1512]: time="2025-07-15T23:11:17.783842760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 23:11:17.783897 containerd[1512]: time="2025-07-15T23:11:17.783887600Z" level=info msg="Start snapshots syncer" Jul 15 23:11:17.783932 containerd[1512]: time="2025-07-15T23:11:17.783917480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 23:11:17.784463 containerd[1512]: time="2025-07-15T23:11:17.784416120Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 23:11:17.784546 containerd[1512]: time="2025-07-15T23:11:17.784481560Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 23:11:17.784576 containerd[1512]: time="2025-07-15T23:11:17.784564680Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 23:11:17.784709 containerd[1512]: time="2025-07-15T23:11:17.784687080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 23:11:17.784733 containerd[1512]: time="2025-07-15T23:11:17.784715640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 23:11:17.784733 containerd[1512]: time="2025-07-15T23:11:17.784728360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 23:11:17.784774 containerd[1512]: time="2025-07-15T23:11:17.784753040Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 23:11:17.784774 containerd[1512]: time="2025-07-15T23:11:17.784769720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 23:11:17.784807 containerd[1512]: time="2025-07-15T23:11:17.784781440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 23:11:17.784807 containerd[1512]: time="2025-07-15T23:11:17.784793240Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 23:11:17.784848 containerd[1512]: time="2025-07-15T23:11:17.784822200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 23:11:17.784848 containerd[1512]: 
time="2025-07-15T23:11:17.784840840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 23:11:17.784880 containerd[1512]: time="2025-07-15T23:11:17.784852920Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 23:11:17.784897 containerd[1512]: time="2025-07-15T23:11:17.784888240Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:11:17.784914 containerd[1512]: time="2025-07-15T23:11:17.784904400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:11:17.784934 containerd[1512]: time="2025-07-15T23:11:17.784915160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:11:17.784934 containerd[1512]: time="2025-07-15T23:11:17.784924360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:11:17.784967 containerd[1512]: time="2025-07-15T23:11:17.784932760Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 23:11:17.785059 containerd[1512]: time="2025-07-15T23:11:17.785041360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 23:11:17.785084 containerd[1512]: time="2025-07-15T23:11:17.785067800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 23:11:17.785159 containerd[1512]: time="2025-07-15T23:11:17.785147080Z" level=info msg="runtime interface created" Jul 15 23:11:17.785159 containerd[1512]: time="2025-07-15T23:11:17.785155000Z" level=info msg="created NRI interface" Jul 15 23:11:17.785204 containerd[1512]: time="2025-07-15T23:11:17.785163280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 23:11:17.785204 containerd[1512]: time="2025-07-15T23:11:17.785174240Z" level=info msg="Connect containerd service" Jul 15 23:11:17.785204 containerd[1512]: time="2025-07-15T23:11:17.785200920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 23:11:17.785891 containerd[1512]: time="2025-07-15T23:11:17.785863080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:11:17.901421 containerd[1512]: time="2025-07-15T23:11:17.901352960Z" level=info msg="Start subscribing containerd event" Jul 15 23:11:17.901507 containerd[1512]: time="2025-07-15T23:11:17.901431640Z" level=info msg="Start recovering state" Jul 15 23:11:17.901527 containerd[1512]: time="2025-07-15T23:11:17.901512240Z" level=info msg="Start event monitor" Jul 15 23:11:17.901565 containerd[1512]: time="2025-07-15T23:11:17.901526640Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:11:17.901565 containerd[1512]: time="2025-07-15T23:11:17.901533760Z" level=info msg="Start streaming server" Jul 15 23:11:17.901565 containerd[1512]: time="2025-07-15T23:11:17.901544080Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:11:17.901565 containerd[1512]: 
time="2025-07-15T23:11:17.901550400Z" level=info msg="runtime interface starting up..." Jul 15 23:11:17.901565 containerd[1512]: time="2025-07-15T23:11:17.901555880Z" level=info msg="starting plugins..." Jul 15 23:11:17.901642 containerd[1512]: time="2025-07-15T23:11:17.901568360Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:11:17.902109 containerd[1512]: time="2025-07-15T23:11:17.901810080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:11:17.902196 containerd[1512]: time="2025-07-15T23:11:17.902178880Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 23:11:17.902347 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 23:11:17.903254 containerd[1512]: time="2025-07-15T23:11:17.903230320Z" level=info msg="containerd successfully booted in 0.139575s" Jul 15 23:11:17.975533 tar[1497]: linux-arm64/README.md Jul 15 23:11:17.998068 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 23:11:18.560243 systemd-networkd[1435]: eth0: Gained IPv6LL Jul 15 23:11:18.565572 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 23:11:18.567315 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 23:11:18.570116 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 23:11:18.572189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:11:18.582233 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 23:11:18.595745 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 15 23:11:18.595935 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 23:11:18.598663 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 23:11:18.610074 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 23:11:18.740196 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 23:11:18.758179 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 23:11:18.761762 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 23:11:18.779829 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 23:11:18.780034 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 23:11:18.783551 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 23:11:18.804068 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 23:11:18.806662 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 23:11:18.808694 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 15 23:11:18.809745 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 23:11:19.120004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:11:19.121240 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 23:11:19.123122 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:11:19.124400 systemd[1]: Startup finished in 2.085s (kernel) + 5.172s (initrd) + 3.385s (userspace) = 10.643s. 
Jul 15 23:11:19.521266 kubelet[1611]: E0715 23:11:19.521145 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:11:19.523472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:11:19.523610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:11:19.523957 systemd[1]: kubelet.service: Consumed 808ms CPU time, 256.8M memory peak. Jul 15 23:11:23.832424 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 23:11:23.833958 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:51742.service - OpenSSH per-connection server daemon (10.0.0.1:51742). Jul 15 23:11:23.916559 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 51742 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:11:23.918354 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:11:23.926603 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 23:11:23.927596 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 23:11:23.933233 systemd-logind[1483]: New session 1 of user core. Jul 15 23:11:23.956087 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 23:11:23.958613 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 23:11:23.982221 (systemd)[1628]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 23:11:23.984575 systemd-logind[1483]: New session c1 of user core. Jul 15 23:11:24.095771 systemd[1628]: Queued start job for default target default.target. Jul 15 23:11:24.118991 systemd[1628]: Created slice app.slice - User Application Slice. Jul 15 23:11:24.119046 systemd[1628]: Reached target paths.target - Paths. Jul 15 23:11:24.119090 systemd[1628]: Reached target timers.target - Timers. Jul 15 23:11:24.120364 systemd[1628]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 23:11:24.129257 systemd[1628]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 23:11:24.129320 systemd[1628]: Reached target sockets.target - Sockets. Jul 15 23:11:24.129360 systemd[1628]: Reached target basic.target - Basic System. Jul 15 23:11:24.129388 systemd[1628]: Reached target default.target - Main User Target. Jul 15 23:11:24.129414 systemd[1628]: Startup finished in 138ms. Jul 15 23:11:24.129519 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 23:11:24.130826 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 23:11:24.192257 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:51752.service - OpenSSH per-connection server daemon (10.0.0.1:51752). Jul 15 23:11:24.246734 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 51752 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:11:24.247972 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:11:24.252514 systemd-logind[1483]: New session 2 of user core. Jul 15 23:11:24.263175 systemd[1]: Started session-2.scope - Session 2 of User core. 
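The kubelet exit above is the usual first-boot state on a kubeadm-style node: kubelet.service is enabled, but /var/lib/kubelet/config.yaml has not been written yet, so every start attempt fails and systemd keeps scheduling restarts (the same error recurs on the later restart attempts below). The file that eventually satisfies it is a KubeletConfiguration document; a minimal sketch consistent with what this log shows elsewhere (systemd cgroup driver, containerd socket, /etc/kubernetes/manifests as the static pod path, /etc/kubernetes/pki/ca.crt as the client CA) would be something like the following, where the cluster DNS address is an illustrative kubeadm default rather than a value visible in this log:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # matches SystemdCgroup=true in the containerd runc options above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests  # where the control-plane static pods further below come from
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                            # assumption: default kubeadm cluster DNS address
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt

On this machine the real file appears to be written later by kubeadm (the unset KUBELET_KUBEADM_ARGS variable referenced by kubelet.service points the same way), after which the restart at 23:11:42 finally keeps the service running.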
Jul 15 23:11:24.313463 sshd[1641]: Connection closed by 10.0.0.1 port 51752 Jul 15 23:11:24.313929 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Jul 15 23:11:24.324993 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:51752.service: Deactivated successfully. Jul 15 23:11:24.328285 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 23:11:24.328903 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Jul 15 23:11:24.331266 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:51756.service - OpenSSH per-connection server daemon (10.0.0.1:51756). Jul 15 23:11:24.332248 systemd-logind[1483]: Removed session 2. Jul 15 23:11:24.390616 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 51756 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:11:24.391916 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:11:24.396365 systemd-logind[1483]: New session 3 of user core. Jul 15 23:11:24.411181 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:11:24.458849 sshd[1650]: Connection closed by 10.0.0.1 port 51756 Jul 15 23:11:24.459296 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Jul 15 23:11:24.469963 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:51756.service: Deactivated successfully. Jul 15 23:11:24.472095 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 23:11:24.473208 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Jul 15 23:11:24.475374 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:51762.service - OpenSSH per-connection server daemon (10.0.0.1:51762). Jul 15 23:11:24.475915 systemd-logind[1483]: Removed session 3. Jul 15 23:11:24.530543 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 51762 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:11:24.531771 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:11:24.536440 systemd-logind[1483]: New session 4 of user core. Jul 15 23:11:24.549201 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 23:11:24.600334 sshd[1658]: Connection closed by 10.0.0.1 port 51762 Jul 15 23:11:24.600680 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Jul 15 23:11:24.611012 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:51762.service: Deactivated successfully. Jul 15 23:11:24.613369 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 23:11:24.613974 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Jul 15 23:11:24.616254 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:51768.service - OpenSSH per-connection server daemon (10.0.0.1:51768). Jul 15 23:11:24.616697 systemd-logind[1483]: Removed session 4. Jul 15 23:11:24.666881 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 51768 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:11:24.668473 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:11:24.672491 systemd-logind[1483]: New session 5 of user core. Jul 15 23:11:24.693203 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 15 23:11:24.752380 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 23:11:24.754482 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:11:25.146437 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 23:11:25.158311 (dockerd)[1687]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 23:11:25.423601 dockerd[1687]: time="2025-07-15T23:11:25.423479371Z" level=info msg="Starting up" Jul 15 23:11:25.424972 dockerd[1687]: time="2025-07-15T23:11:25.424938529Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 23:11:25.536102 dockerd[1687]: time="2025-07-15T23:11:25.536063409Z" level=info msg="Loading containers: start." Jul 15 23:11:25.544041 kernel: Initializing XFRM netlink socket Jul 15 23:11:25.733608 systemd-networkd[1435]: docker0: Link UP Jul 15 23:11:25.737007 dockerd[1687]: time="2025-07-15T23:11:25.736923958Z" level=info msg="Loading containers: done." Jul 15 23:11:25.749590 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3815594759-merged.mount: Deactivated successfully. Jul 15 23:11:25.754821 dockerd[1687]: time="2025-07-15T23:11:25.754779681Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 23:11:25.754890 dockerd[1687]: time="2025-07-15T23:11:25.754863911Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 15 23:11:25.754982 dockerd[1687]: time="2025-07-15T23:11:25.754963558Z" level=info msg="Initializing buildkit" Jul 15 23:11:25.775542 dockerd[1687]: time="2025-07-15T23:11:25.775506631Z" level=info msg="Completed buildkit initialization" Jul 15 23:11:25.781574 dockerd[1687]: time="2025-07-15T23:11:25.781526221Z" level=info msg="Daemon has completed initialization" Jul 15 23:11:25.781686 dockerd[1687]: time="2025-07-15T23:11:25.781626507Z" level=info msg="API listen on /run/docker.sock" Jul 15 23:11:25.781728 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 23:11:26.384681 containerd[1512]: time="2025-07-15T23:11:26.384618217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Jul 15 23:11:27.005757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933030644.mount: Deactivated successfully. 
Jul 15 23:11:28.133247 containerd[1512]: time="2025-07-15T23:11:28.133051629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:28.133570 containerd[1512]: time="2025-07-15T23:11:28.133475493Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327783" Jul 15 23:11:28.134586 containerd[1512]: time="2025-07-15T23:11:28.134506548Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:28.136976 containerd[1512]: time="2025-07-15T23:11:28.136941346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:28.138217 containerd[1512]: time="2025-07-15T23:11:28.138092608Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 1.753431013s" Jul 15 23:11:28.138217 containerd[1512]: time="2025-07-15T23:11:28.138131639Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\"" Jul 15 23:11:28.138990 containerd[1512]: time="2025-07-15T23:11:28.138765277Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Jul 15 23:11:29.456060 containerd[1512]: time="2025-07-15T23:11:29.456002296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:29.457547 containerd[1512]: time="2025-07-15T23:11:29.457511745Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529698" Jul 15 23:11:29.458203 containerd[1512]: time="2025-07-15T23:11:29.458167288Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:29.461596 containerd[1512]: time="2025-07-15T23:11:29.461555627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:29.462905 containerd[1512]: time="2025-07-15T23:11:29.462868669Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.324077384s" Jul 15 23:11:29.462946 containerd[1512]: time="2025-07-15T23:11:29.462904107Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\"" Jul 15 23:11:29.463527 
containerd[1512]: time="2025-07-15T23:11:29.463463643Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Jul 15 23:11:29.773981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 23:11:29.775603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:11:29.926685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:11:29.930445 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:11:29.968833 kubelet[1961]: E0715 23:11:29.968781 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:11:29.972135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:11:29.972272 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:11:29.973161 systemd[1]: kubelet.service: Consumed 143ms CPU time, 108M memory peak. Jul 15 23:11:31.158618 containerd[1512]: time="2025-07-15T23:11:31.158502591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:31.159362 containerd[1512]: time="2025-07-15T23:11:31.159106681Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484140" Jul 15 23:11:31.160628 containerd[1512]: time="2025-07-15T23:11:31.160595489Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:31.162576 containerd[1512]: time="2025-07-15T23:11:31.162541140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:31.164426 containerd[1512]: time="2025-07-15T23:11:31.164388853Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.700892887s" Jul 15 23:11:31.164479 containerd[1512]: time="2025-07-15T23:11:31.164424776Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\"" Jul 15 23:11:31.164936 containerd[1512]: time="2025-07-15T23:11:31.164913226Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Jul 15 23:11:32.082159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977819996.mount: Deactivated successfully. 
Jul 15 23:11:32.306093 containerd[1512]: time="2025-07-15T23:11:32.306047629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:32.306628 containerd[1512]: time="2025-07-15T23:11:32.306603566Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378407" Jul 15 23:11:32.307336 containerd[1512]: time="2025-07-15T23:11:32.307311194Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:32.309085 containerd[1512]: time="2025-07-15T23:11:32.309042622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:32.309825 containerd[1512]: time="2025-07-15T23:11:32.309785855Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.144843739s" Jul 15 23:11:32.309825 containerd[1512]: time="2025-07-15T23:11:32.309821061Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\"" Jul 15 23:11:32.310410 containerd[1512]: time="2025-07-15T23:11:32.310376518Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 23:11:33.133370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441837966.mount: Deactivated successfully. 
Jul 15 23:11:33.945757 containerd[1512]: time="2025-07-15T23:11:33.945690987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:33.946237 containerd[1512]: time="2025-07-15T23:11:33.946159198Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 15 23:11:33.947272 containerd[1512]: time="2025-07-15T23:11:33.947232335Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:33.950374 containerd[1512]: time="2025-07-15T23:11:33.950334452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:33.952177 containerd[1512]: time="2025-07-15T23:11:33.952138679Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.641719562s" Jul 15 23:11:33.952223 containerd[1512]: time="2025-07-15T23:11:33.952188313Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 15 23:11:33.952715 containerd[1512]: time="2025-07-15T23:11:33.952681621Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 23:11:34.359514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915290544.mount: Deactivated successfully. 
Jul 15 23:11:34.364544 containerd[1512]: time="2025-07-15T23:11:34.364491729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:11:34.364972 containerd[1512]: time="2025-07-15T23:11:34.364932310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 15 23:11:34.365852 containerd[1512]: time="2025-07-15T23:11:34.365810356Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:11:34.367590 containerd[1512]: time="2025-07-15T23:11:34.367555897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:11:34.368447 containerd[1512]: time="2025-07-15T23:11:34.368412001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 415.685261ms" Jul 15 23:11:34.368447 containerd[1512]: time="2025-07-15T23:11:34.368445093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 23:11:34.368978 containerd[1512]: time="2025-07-15T23:11:34.368950379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 15 23:11:34.912094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076508378.mount: Deactivated successfully. 
Jul 15 23:11:36.802788 containerd[1512]: time="2025-07-15T23:11:36.802734657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:36.803644 containerd[1512]: time="2025-07-15T23:11:36.803608358Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 15 23:11:36.804735 containerd[1512]: time="2025-07-15T23:11:36.804671355Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:36.807944 containerd[1512]: time="2025-07-15T23:11:36.807908511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:36.809174 containerd[1512]: time="2025-07-15T23:11:36.809133347Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.440145839s" Jul 15 23:11:36.809237 containerd[1512]: time="2025-07-15T23:11:36.809172957Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 15 23:11:40.222532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 23:11:40.224153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:11:40.371628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:11:40.375340 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:11:40.410888 kubelet[2123]: E0715 23:11:40.410818 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:11:40.413412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:11:40.413550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:11:40.415114 systemd[1]: kubelet.service: Consumed 135ms CPU time, 106.5M memory peak. Jul 15 23:11:41.792410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:11:41.792579 systemd[1]: kubelet.service: Consumed 135ms CPU time, 106.5M memory peak. Jul 15 23:11:41.794670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:11:41.822318 systemd[1]: Reload requested from client PID 2140 ('systemctl') (unit session-5.scope)... Jul 15 23:11:41.822341 systemd[1]: Reloading... Jul 15 23:11:41.894126 zram_generator::config[2186]: No configuration found. Jul 15 23:11:42.043539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:11:42.129546 systemd[1]: Reloading finished in 306 ms. 
Jul 15 23:11:42.175968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:11:42.177697 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:11:42.180552 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:11:42.180809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:11:42.180850 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95.2M memory peak. Jul 15 23:11:42.182448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:11:42.319176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:11:42.323415 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:11:42.358593 kubelet[2230]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:11:42.358593 kubelet[2230]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:11:42.358593 kubelet[2230]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:11:42.358944 kubelet[2230]: I0715 23:11:42.358696 2230 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:11:42.866645 kubelet[2230]: I0715 23:11:42.866592 2230 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 23:11:42.866645 kubelet[2230]: I0715 23:11:42.866628 2230 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:11:42.866940 kubelet[2230]: I0715 23:11:42.866911 2230 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 23:11:42.905698 kubelet[2230]: E0715 23:11:42.905633 2230 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:11:42.909975 kubelet[2230]: I0715 23:11:42.909818 2230 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:11:42.917480 kubelet[2230]: I0715 23:11:42.917459 2230 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:11:42.920627 kubelet[2230]: I0715 23:11:42.920603 2230 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:11:42.921298 kubelet[2230]: I0715 23:11:42.921256 2230 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:11:42.922128 kubelet[2230]: I0715 23:11:42.921302 2230 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:11:42.922128 kubelet[2230]: I0715 23:11:42.921809 2230 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:11:42.922128 kubelet[2230]: I0715 23:11:42.921824 2230 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 23:11:42.922128 kubelet[2230]: I0715 23:11:42.922065 2230 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:11:42.926303 kubelet[2230]: I0715 23:11:42.926277 2230 kubelet.go:446] "Attempting to sync node with API server" Jul 15 23:11:42.926365 kubelet[2230]: I0715 23:11:42.926311 2230 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:11:42.926365 kubelet[2230]: I0715 23:11:42.926335 2230 kubelet.go:352] "Adding apiserver pod source" Jul 15 23:11:42.926365 kubelet[2230]: I0715 23:11:42.926345 2230 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:11:42.927735 kubelet[2230]: W0715 23:11:42.927614 2230 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 15 23:11:42.927735 kubelet[2230]: E0715 23:11:42.927673 2230 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:11:42.928253 kubelet[2230]: W0715 23:11:42.928214 2230 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 15 23:11:42.928365 kubelet[2230]: E0715 23:11:42.928345 2230 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:11:42.931726 kubelet[2230]: I0715 23:11:42.931706 2230 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:11:42.932632 kubelet[2230]: I0715 23:11:42.932602 2230 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:11:42.932920 kubelet[2230]: W0715 23:11:42.932903 2230 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 23:11:42.936146 kubelet[2230]: I0715 23:11:42.936123 2230 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:11:42.936272 kubelet[2230]: I0715 23:11:42.936260 2230 server.go:1287] "Started kubelet" Jul 15 23:11:42.936674 kubelet[2230]: I0715 23:11:42.936567 2230 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:11:42.936903 kubelet[2230]: I0715 23:11:42.936783 2230 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:11:42.937601 kubelet[2230]: I0715 23:11:42.937300 2230 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:11:42.937677 kubelet[2230]: I0715 23:11:42.937655 2230 server.go:479] "Adding debug handlers to kubelet server" Jul 15 23:11:42.939219 kubelet[2230]: E0715 23:11:42.938772 2230 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18528f9b7a5df2bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:11:42.936224447 +0000 UTC m=+0.609690333,LastTimestamp:2025-07-15 23:11:42.936224447 +0000 UTC m=+0.609690333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 23:11:42.939504 kubelet[2230]: I0715 23:11:42.939489 2230 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:11:42.941045 kubelet[2230]: I0715 23:11:42.940562 2230 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:11:42.941109 kubelet[2230]: I0715 23:11:42.941097 2230 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:11:42.941236 kubelet[2230]: I0715 23:11:42.941205 2230 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:11:42.941274 kubelet[2230]: I0715 23:11:42.941265 2230 
reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:11:42.943392 kubelet[2230]: E0715 23:11:42.943366 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" Jul 15 23:11:42.943692 kubelet[2230]: I0715 23:11:42.943661 2230 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:11:42.943899 kubelet[2230]: I0715 23:11:42.943880 2230 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:11:42.945718 kubelet[2230]: W0715 23:11:42.945152 2230 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 15 23:11:42.945718 kubelet[2230]: E0715 23:11:42.945199 2230 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:11:42.945718 kubelet[2230]: E0715 23:11:42.945708 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:42.946013 kubelet[2230]: I0715 23:11:42.945991 2230 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:11:42.946213 kubelet[2230]: E0715 23:11:42.946192 2230 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:11:42.955126 kubelet[2230]: I0715 23:11:42.955072 2230 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:11:42.956145 kubelet[2230]: I0715 23:11:42.956102 2230 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 23:11:42.956145 kubelet[2230]: I0715 23:11:42.956130 2230 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 23:11:42.956145 kubelet[2230]: I0715 23:11:42.956148 2230 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
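The long NodeConfig dump a few lines above spells out the default hard eviction thresholds the container manager was created with (memory.available 100Mi, nodefs.available 10%, nodefs.inodesFree 5%, imagefs.available 15%, imagefs.inodesFree 5%). In KubeletConfiguration terms those same thresholds are the evictionHard map, so stating (or overriding) them in /var/lib/kubelet/config.yaml would look like this; the values shown are simply the defaults from the dump, not site-specific tuning:

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"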
Jul 15 23:11:42.956252 kubelet[2230]: I0715 23:11:42.956157 2230 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 23:11:42.956252 kubelet[2230]: E0715 23:11:42.956202 2230 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:11:42.957781 kubelet[2230]: W0715 23:11:42.957705 2230 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 15 23:11:42.957906 kubelet[2230]: E0715 23:11:42.957780 2230 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:11:42.962255 kubelet[2230]: I0715 23:11:42.962203 2230 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:11:42.962255 kubelet[2230]: I0715 23:11:42.962223 2230 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:11:42.962647 kubelet[2230]: I0715 23:11:42.962430 2230 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:11:43.046773 kubelet[2230]: E0715 23:11:43.046732 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:43.056909 kubelet[2230]: E0715 23:11:43.056876 2230 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:11:43.144702 kubelet[2230]: E0715 23:11:43.144582 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" Jul 15 23:11:43.147717 kubelet[2230]: E0715 23:11:43.147680 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:43.248160 kubelet[2230]: E0715 23:11:43.248123 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:43.257316 kubelet[2230]: E0715 23:11:43.257287 2230 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:11:43.309458 kubelet[2230]: I0715 23:11:43.309146 2230 policy_none.go:49] "None policy: Start" Jul 15 23:11:43.309458 kubelet[2230]: I0715 23:11:43.309177 2230 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:11:43.309458 kubelet[2230]: I0715 23:11:43.309191 2230 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:11:43.349253 kubelet[2230]: E0715 23:11:43.349218 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:43.373471 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 23:11:43.392541 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 23:11:43.395399 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 15 23:11:43.415781 kubelet[2230]: I0715 23:11:43.415751 2230 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:11:43.416054 kubelet[2230]: I0715 23:11:43.415960 2230 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:11:43.416054 kubelet[2230]: I0715 23:11:43.415972 2230 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:11:43.416230 kubelet[2230]: I0715 23:11:43.416209 2230 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:11:43.417171 kubelet[2230]: E0715 23:11:43.417137 2230 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:11:43.417225 kubelet[2230]: E0715 23:11:43.417180 2230 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 23:11:43.518107 kubelet[2230]: I0715 23:11:43.518065 2230 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:11:43.518573 kubelet[2230]: E0715 23:11:43.518529 2230 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Jul 15 23:11:43.545248 kubelet[2230]: E0715 23:11:43.545210 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" Jul 15 23:11:43.665417 systemd[1]: Created slice kubepods-burstable-podbfcbff80425eea366b65465670a5f4e7.slice - libcontainer container kubepods-burstable-podbfcbff80425eea366b65465670a5f4e7.slice. Jul 15 23:11:43.675886 kubelet[2230]: E0715 23:11:43.675797 2230 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:11:43.679605 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice. Jul 15 23:11:43.691743 kubelet[2230]: E0715 23:11:43.691697 2230 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:11:43.694851 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice. 
Jul 15 23:11:43.696790 kubelet[2230]: E0715 23:11:43.696761 2230 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:11:43.719835 kubelet[2230]: I0715 23:11:43.719801 2230 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:11:43.720173 kubelet[2230]: E0715 23:11:43.720141 2230 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Jul 15 23:11:43.745663 kubelet[2230]: I0715 23:11:43.745624 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfcbff80425eea366b65465670a5f4e7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfcbff80425eea366b65465670a5f4e7\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:43.745663 kubelet[2230]: I0715 23:11:43.745664 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:43.745752 kubelet[2230]: I0715 23:11:43.745691 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:43.745752 kubelet[2230]: I0715 23:11:43.745707 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:43.745752 kubelet[2230]: I0715 23:11:43.745723 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:43.745752 kubelet[2230]: I0715 23:11:43.745740 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfcbff80425eea366b65465670a5f4e7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfcbff80425eea366b65465670a5f4e7\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:43.745858 kubelet[2230]: I0715 23:11:43.745777 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:43.745858 kubelet[2230]: I0715 23:11:43.745839 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Jul 15 23:11:43.745899 kubelet[2230]: I0715 23:11:43.745859 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfcbff80425eea366b65465670a5f4e7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bfcbff80425eea366b65465670a5f4e7\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:43.762166 kubelet[2230]: W0715 23:11:43.762114 2230 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 15 23:11:43.762237 kubelet[2230]: E0715 23:11:43.762172 2230 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:11:43.824852 kubelet[2230]: W0715 23:11:43.824815 2230 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 15 23:11:43.824921 kubelet[2230]: E0715 23:11:43.824858 2230 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:11:43.895145 kubelet[2230]: W0715 23:11:43.895073 2230 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 15 23:11:43.895145 kubelet[2230]: E0715 23:11:43.895143 2230 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:11:43.930080 kubelet[2230]: W0715 23:11:43.929962 2230 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 15 23:11:43.930080 kubelet[2230]: E0715 23:11:43.930038 2230 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:11:43.976700 kubelet[2230]: E0715 23:11:43.976672 2230 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:43.977318 containerd[1512]: time="2025-07-15T23:11:43.977273390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bfcbff80425eea366b65465670a5f4e7,Namespace:kube-system,Attempt:0,}" Jul 15 23:11:43.992557 kubelet[2230]: E0715 23:11:43.992528 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:43.993082 containerd[1512]: time="2025-07-15T23:11:43.993051090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}" Jul 15 23:11:43.993725 containerd[1512]: time="2025-07-15T23:11:43.993678149Z" level=info msg="connecting to shim 063768fc66c22e4e42c109e490f8559438c5d4c6b1fe1b88e9dafa256aea7e70" address="unix:///run/containerd/s/1c4b3463ee39af189d8da922a18df03b91b8afcf5885ed78a18da2f9eddd5ac8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:11:43.997594 kubelet[2230]: E0715 23:11:43.997553 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:43.997987 containerd[1512]: time="2025-07-15T23:11:43.997956974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}" Jul 15 23:11:44.018298 containerd[1512]: time="2025-07-15T23:11:44.018257579Z" level=info msg="connecting to shim 3498da4bd58c4f94691d1604d5de1b02fb0edec7614393aaba8eacb73e13efc1" address="unix:///run/containerd/s/8117d01b0ff8de019d66d8e45ccb10adb97207ab768aeee5fa0b455eb25180fc" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:11:44.024229 systemd[1]: Started cri-containerd-063768fc66c22e4e42c109e490f8559438c5d4c6b1fe1b88e9dafa256aea7e70.scope - libcontainer container 063768fc66c22e4e42c109e490f8559438c5d4c6b1fe1b88e9dafa256aea7e70. Jul 15 23:11:44.035201 systemd[1]: Started cri-containerd-3498da4bd58c4f94691d1604d5de1b02fb0edec7614393aaba8eacb73e13efc1.scope - libcontainer container 3498da4bd58c4f94691d1604d5de1b02fb0edec7614393aaba8eacb73e13efc1. Jul 15 23:11:44.036840 containerd[1512]: time="2025-07-15T23:11:44.036794111Z" level=info msg="connecting to shim 0c1936d0251b4bd8f8d82790dd2cab5f2b9656620a8e6319dd14fc6c78a1905f" address="unix:///run/containerd/s/22c7a3ff3695f30a6f00e25ae577af919de4a401521314fcd6bd0c0fa373f0b0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:11:44.059245 systemd[1]: Started cri-containerd-0c1936d0251b4bd8f8d82790dd2cab5f2b9656620a8e6319dd14fc6c78a1905f.scope - libcontainer container 0c1936d0251b4bd8f8d82790dd2cab5f2b9656620a8e6319dd14fc6c78a1905f. 
Jul 15 23:11:44.083582 containerd[1512]: time="2025-07-15T23:11:44.080314311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bfcbff80425eea366b65465670a5f4e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"063768fc66c22e4e42c109e490f8559438c5d4c6b1fe1b88e9dafa256aea7e70\"" Jul 15 23:11:44.086477 kubelet[2230]: E0715 23:11:44.086451 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:44.088911 containerd[1512]: time="2025-07-15T23:11:44.088810965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3498da4bd58c4f94691d1604d5de1b02fb0edec7614393aaba8eacb73e13efc1\"" Jul 15 23:11:44.089550 kubelet[2230]: E0715 23:11:44.089498 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:44.089607 containerd[1512]: time="2025-07-15T23:11:44.089499774Z" level=info msg="CreateContainer within sandbox \"063768fc66c22e4e42c109e490f8559438c5d4c6b1fe1b88e9dafa256aea7e70\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:11:44.091040 containerd[1512]: time="2025-07-15T23:11:44.090932209Z" level=info msg="CreateContainer within sandbox \"3498da4bd58c4f94691d1604d5de1b02fb0edec7614393aaba8eacb73e13efc1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:11:44.098129 containerd[1512]: time="2025-07-15T23:11:44.098088426Z" level=info msg="Container ceec945dd6a48a0117380759e72cea400b0ea110b39c79de5ccc0b7cceaa3f81: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:11:44.102125 containerd[1512]: time="2025-07-15T23:11:44.102092143Z" level=info msg="Container 95e345d3c7750f1c3b8ed29d21c87ceccc54814a6a0ba70f28a3ee049f915e7c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:11:44.105927 containerd[1512]: time="2025-07-15T23:11:44.105886914Z" level=info msg="CreateContainer within sandbox \"063768fc66c22e4e42c109e490f8559438c5d4c6b1fe1b88e9dafa256aea7e70\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ceec945dd6a48a0117380759e72cea400b0ea110b39c79de5ccc0b7cceaa3f81\"" Jul 15 23:11:44.106854 containerd[1512]: time="2025-07-15T23:11:44.106536902Z" level=info msg="StartContainer for \"ceec945dd6a48a0117380759e72cea400b0ea110b39c79de5ccc0b7cceaa3f81\"" Jul 15 23:11:44.107746 containerd[1512]: time="2025-07-15T23:11:44.107716970Z" level=info msg="connecting to shim ceec945dd6a48a0117380759e72cea400b0ea110b39c79de5ccc0b7cceaa3f81" address="unix:///run/containerd/s/1c4b3463ee39af189d8da922a18df03b91b8afcf5885ed78a18da2f9eddd5ac8" protocol=ttrpc version=3 Jul 15 23:11:44.108977 containerd[1512]: time="2025-07-15T23:11:44.108943858Z" level=info msg="CreateContainer within sandbox \"3498da4bd58c4f94691d1604d5de1b02fb0edec7614393aaba8eacb73e13efc1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"95e345d3c7750f1c3b8ed29d21c87ceccc54814a6a0ba70f28a3ee049f915e7c\"" Jul 15 23:11:44.110479 containerd[1512]: time="2025-07-15T23:11:44.110345306Z" level=info msg="StartContainer for \"95e345d3c7750f1c3b8ed29d21c87ceccc54814a6a0ba70f28a3ee049f915e7c\"" Jul 15 23:11:44.112367 containerd[1512]: time="2025-07-15T23:11:44.112328213Z" level=info 
msg="connecting to shim 95e345d3c7750f1c3b8ed29d21c87ceccc54814a6a0ba70f28a3ee049f915e7c" address="unix:///run/containerd/s/8117d01b0ff8de019d66d8e45ccb10adb97207ab768aeee5fa0b455eb25180fc" protocol=ttrpc version=3 Jul 15 23:11:44.113466 containerd[1512]: time="2025-07-15T23:11:44.113421201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c1936d0251b4bd8f8d82790dd2cab5f2b9656620a8e6319dd14fc6c78a1905f\"" Jul 15 23:11:44.114167 kubelet[2230]: E0715 23:11:44.114118 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:44.117161 containerd[1512]: time="2025-07-15T23:11:44.117118776Z" level=info msg="CreateContainer within sandbox \"0c1936d0251b4bd8f8d82790dd2cab5f2b9656620a8e6319dd14fc6c78a1905f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:11:44.121541 kubelet[2230]: I0715 23:11:44.121512 2230 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:11:44.121970 kubelet[2230]: E0715 23:11:44.121881 2230 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Jul 15 23:11:44.123932 containerd[1512]: time="2025-07-15T23:11:44.123899882Z" level=info msg="Container 880729f524f480995bb2dac0fa39a993cf26261c4eae23775af1fd26cefda452: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:11:44.126181 systemd[1]: Started cri-containerd-ceec945dd6a48a0117380759e72cea400b0ea110b39c79de5ccc0b7cceaa3f81.scope - libcontainer container ceec945dd6a48a0117380759e72cea400b0ea110b39c79de5ccc0b7cceaa3f81. Jul 15 23:11:44.132829 containerd[1512]: time="2025-07-15T23:11:44.132790318Z" level=info msg="CreateContainer within sandbox \"0c1936d0251b4bd8f8d82790dd2cab5f2b9656620a8e6319dd14fc6c78a1905f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"880729f524f480995bb2dac0fa39a993cf26261c4eae23775af1fd26cefda452\"" Jul 15 23:11:44.133581 containerd[1512]: time="2025-07-15T23:11:44.133416196Z" level=info msg="StartContainer for \"880729f524f480995bb2dac0fa39a993cf26261c4eae23775af1fd26cefda452\"" Jul 15 23:11:44.134809 containerd[1512]: time="2025-07-15T23:11:44.134780622Z" level=info msg="connecting to shim 880729f524f480995bb2dac0fa39a993cf26261c4eae23775af1fd26cefda452" address="unix:///run/containerd/s/22c7a3ff3695f30a6f00e25ae577af919de4a401521314fcd6bd0c0fa373f0b0" protocol=ttrpc version=3 Jul 15 23:11:44.139183 systemd[1]: Started cri-containerd-95e345d3c7750f1c3b8ed29d21c87ceccc54814a6a0ba70f28a3ee049f915e7c.scope - libcontainer container 95e345d3c7750f1c3b8ed29d21c87ceccc54814a6a0ba70f28a3ee049f915e7c. Jul 15 23:11:44.160257 systemd[1]: Started cri-containerd-880729f524f480995bb2dac0fa39a993cf26261c4eae23775af1fd26cefda452.scope - libcontainer container 880729f524f480995bb2dac0fa39a993cf26261c4eae23775af1fd26cefda452. 
Jul 15 23:11:44.173479 containerd[1512]: time="2025-07-15T23:11:44.173429616Z" level=info msg="StartContainer for \"ceec945dd6a48a0117380759e72cea400b0ea110b39c79de5ccc0b7cceaa3f81\" returns successfully" Jul 15 23:11:44.202121 containerd[1512]: time="2025-07-15T23:11:44.200850027Z" level=info msg="StartContainer for \"95e345d3c7750f1c3b8ed29d21c87ceccc54814a6a0ba70f28a3ee049f915e7c\" returns successfully" Jul 15 23:11:44.216543 containerd[1512]: time="2025-07-15T23:11:44.216490103Z" level=info msg="StartContainer for \"880729f524f480995bb2dac0fa39a993cf26261c4eae23775af1fd26cefda452\" returns successfully" Jul 15 23:11:44.346973 kubelet[2230]: E0715 23:11:44.346361 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s" Jul 15 23:11:44.924105 kubelet[2230]: I0715 23:11:44.924073 2230 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:11:44.969042 kubelet[2230]: E0715 23:11:44.968592 2230 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:11:44.969042 kubelet[2230]: E0715 23:11:44.968673 2230 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:11:44.969042 kubelet[2230]: E0715 23:11:44.968741 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:44.969042 kubelet[2230]: E0715 23:11:44.968819 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:44.973444 kubelet[2230]: E0715 23:11:44.973417 2230 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:11:44.976196 kubelet[2230]: E0715 23:11:44.976172 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:45.743413 kubelet[2230]: E0715 23:11:45.742682 2230 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18528f9b7a5df2bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:11:42.936224447 +0000 UTC m=+0.609690333,LastTimestamp:2025-07-15 23:11:42.936224447 +0000 UTC m=+0.609690333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 23:11:45.794732 kubelet[2230]: I0715 23:11:45.794675 2230 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 23:11:45.794732 kubelet[2230]: E0715 23:11:45.794719 2230 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" 
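The kubelet lines follow the klog prefix format, for example "E0715 23:11:44.346361 2230 controller.go:145]": severity letter, date, time, PID, then file:line. The rough sketch below tallies those prefixes from a captured journal excerpt, which helps show which call sites dominate a noisy startup like this one; the regular expression is an approximation of the format as it appears above.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches klog prefixes such as "E0715 23:11:44.346361 2230 controller.go:145]"
// as they appear (after the journald/unit prefix) in the entries above.
var klogPrefix = regexp.MustCompile(`\b([IWE])\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+\s+([\w.]+:\d+)\]`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range klogPrefix.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]+" "+m[2]]++
		}
	}
	for k, v := range counts {
		fmt.Printf("%6d  %s\n", v, k)
	}
}
```

Feed it something like `journalctl -u kubelet.service | go run tally.go`; the unit name is assumed from this host's setup.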
Jul 15 23:11:45.815479 kubelet[2230]: E0715 23:11:45.815444 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:45.915776 kubelet[2230]: E0715 23:11:45.915648 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:45.975799 kubelet[2230]: E0715 23:11:45.975759 2230 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:11:45.976763 kubelet[2230]: E0715 23:11:45.975998 2230 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:11:45.976763 kubelet[2230]: E0715 23:11:45.976818 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:45.977081 kubelet[2230]: E0715 23:11:45.977065 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:46.016415 kubelet[2230]: E0715 23:11:46.016083 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:46.116251 kubelet[2230]: E0715 23:11:46.116208 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:46.217319 kubelet[2230]: E0715 23:11:46.217269 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:46.317844 kubelet[2230]: E0715 23:11:46.317722 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:46.418449 kubelet[2230]: E0715 23:11:46.418408 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:46.519526 kubelet[2230]: E0715 23:11:46.519460 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:46.642452 kubelet[2230]: I0715 23:11:46.642323 2230 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 23:11:46.655886 kubelet[2230]: I0715 23:11:46.655824 2230 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:46.660285 kubelet[2230]: I0715 23:11:46.660261 2230 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:46.930134 kubelet[2230]: I0715 23:11:46.929738 2230 apiserver.go:52] "Watching apiserver" Jul 15 23:11:46.932865 kubelet[2230]: E0715 23:11:46.932834 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:46.942121 kubelet[2230]: I0715 23:11:46.942089 2230 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:11:46.976468 kubelet[2230]: E0715 23:11:46.976424 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:46.977197 
kubelet[2230]: E0715 23:11:46.976482 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:47.769568 systemd[1]: Reload requested from client PID 2508 ('systemctl') (unit session-5.scope)... Jul 15 23:11:47.769581 systemd[1]: Reloading... Jul 15 23:11:47.838052 zram_generator::config[2551]: No configuration found. Jul 15 23:11:47.913695 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:11:48.011556 systemd[1]: Reloading finished in 241 ms. Jul 15 23:11:48.040734 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:11:48.054838 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:11:48.055083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:11:48.055141 systemd[1]: kubelet.service: Consumed 1.022s CPU time, 127.5M memory peak. Jul 15 23:11:48.056783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:11:48.181791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:11:48.185405 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:11:48.220816 kubelet[2593]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:11:48.220816 kubelet[2593]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:11:48.220816 kubelet[2593]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:11:48.220816 kubelet[2593]: I0715 23:11:48.219686 2593 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:11:48.226647 kubelet[2593]: I0715 23:11:48.226618 2593 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 23:11:48.226647 kubelet[2593]: I0715 23:11:48.226642 2593 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:11:48.226904 kubelet[2593]: I0715 23:11:48.226891 2593 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 23:11:48.228206 kubelet[2593]: I0715 23:11:48.228169 2593 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 23:11:48.231199 kubelet[2593]: I0715 23:11:48.231174 2593 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:11:48.234614 kubelet[2593]: I0715 23:11:48.234594 2593 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:11:48.237135 kubelet[2593]: I0715 23:11:48.237111 2593 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:11:48.237321 kubelet[2593]: I0715 23:11:48.237287 2593 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:11:48.237479 kubelet[2593]: I0715 23:11:48.237310 2593 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:11:48.237557 kubelet[2593]: I0715 23:11:48.237482 2593 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:11:48.237557 kubelet[2593]: I0715 23:11:48.237498 2593 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 23:11:48.237557 kubelet[2593]: I0715 23:11:48.237539 2593 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:11:48.238057 kubelet[2593]: I0715 23:11:48.237661 2593 kubelet.go:446] "Attempting to sync node with API server" Jul 15 23:11:48.238057 kubelet[2593]: I0715 23:11:48.237675 2593 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:11:48.238057 kubelet[2593]: I0715 23:11:48.237699 2593 kubelet.go:352] "Adding apiserver pod source" Jul 15 23:11:48.238057 kubelet[2593]: I0715 23:11:48.237708 2593 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:11:48.238626 kubelet[2593]: I0715 23:11:48.238587 2593 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:11:48.239778 kubelet[2593]: I0715 23:11:48.239761 2593 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:11:48.240437 kubelet[2593]: I0715 23:11:48.240409 2593 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:11:48.240572 kubelet[2593]: I0715 23:11:48.240560 2593 server.go:1287] "Started kubelet" Jul 15 23:11:48.242252 kubelet[2593]: I0715 23:11:48.242186 2593 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:11:48.242463 kubelet[2593]: I0715 
23:11:48.242445 2593 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:11:48.242538 kubelet[2593]: I0715 23:11:48.242516 2593 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:11:48.243555 kubelet[2593]: I0715 23:11:48.243533 2593 server.go:479] "Adding debug handlers to kubelet server" Jul 15 23:11:48.245132 kubelet[2593]: I0715 23:11:48.245109 2593 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:11:48.246093 kubelet[2593]: I0715 23:11:48.245331 2593 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:11:48.248106 kubelet[2593]: E0715 23:11:48.248087 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:11:48.249054 kubelet[2593]: E0715 23:11:48.248385 2593 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:11:48.251471 kubelet[2593]: I0715 23:11:48.251448 2593 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:11:48.251684 kubelet[2593]: I0715 23:11:48.251660 2593 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:11:48.251804 kubelet[2593]: I0715 23:11:48.251787 2593 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:11:48.260930 kubelet[2593]: I0715 23:11:48.260865 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:11:48.262634 kubelet[2593]: I0715 23:11:48.262605 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 23:11:48.262634 kubelet[2593]: I0715 23:11:48.262633 2593 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 23:11:48.262720 kubelet[2593]: I0715 23:11:48.262652 2593 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
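After the restart the kubelet reports "Starting to listen" on 0.0.0.0 port 10250 and serves that endpoint over the certificate pair under /var/lib/kubelet/pki. The sketch below is a bare reachability check under those assumptions; certificate verification is skipped on purpose, and an unauthenticated request will normally come back 401 or 403, which still proves the listener is up.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Port and TLS serving taken from the "Starting to listen" and
	// kubelet-server-cert-files entries above. Verification is skipped because
	// this only probes whether the listener answers, not whether it is trusted.
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:10250/healthz")
	if err != nil {
		fmt.Println("kubelet API not reachable:", err)
		return
	}
	defer resp.Body.Close()
	// 200 with proper credentials; 401/403 anonymously. Either way the server is up.
	fmt.Println("kubelet answered:", resp.Status)
}
```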
Jul 15 23:11:48.262720 kubelet[2593]: I0715 23:11:48.262662 2593 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 23:11:48.262720 kubelet[2593]: E0715 23:11:48.262706 2593 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:11:48.263090 kubelet[2593]: I0715 23:11:48.262964 2593 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:11:48.268823 kubelet[2593]: I0715 23:11:48.268797 2593 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:11:48.268823 kubelet[2593]: I0715 23:11:48.268817 2593 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:11:48.297131 kubelet[2593]: I0715 23:11:48.297043 2593 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:11:48.297131 kubelet[2593]: I0715 23:11:48.297067 2593 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:11:48.297131 kubelet[2593]: I0715 23:11:48.297090 2593 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:11:48.297265 kubelet[2593]: I0715 23:11:48.297244 2593 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:11:48.297291 kubelet[2593]: I0715 23:11:48.297257 2593 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:11:48.297291 kubelet[2593]: I0715 23:11:48.297276 2593 policy_none.go:49] "None policy: Start" Jul 15 23:11:48.297291 kubelet[2593]: I0715 23:11:48.297285 2593 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:11:48.297360 kubelet[2593]: I0715 23:11:48.297295 2593 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:11:48.297914 kubelet[2593]: I0715 23:11:48.297389 2593 state_mem.go:75] "Updated machine memory state" Jul 15 23:11:48.302309 kubelet[2593]: I0715 23:11:48.302277 2593 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:11:48.302457 kubelet[2593]: I0715 23:11:48.302439 2593 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:11:48.302507 kubelet[2593]: I0715 23:11:48.302458 2593 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:11:48.303037 kubelet[2593]: I0715 23:11:48.303013 2593 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:11:48.305653 kubelet[2593]: E0715 23:11:48.305630 2593 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 23:11:48.364002 kubelet[2593]: I0715 23:11:48.363962 2593 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:48.364289 kubelet[2593]: I0715 23:11:48.363962 2593 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:48.364370 kubelet[2593]: I0715 23:11:48.364076 2593 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 23:11:48.391516 kubelet[2593]: E0715 23:11:48.391381 2593 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 23:11:48.392639 kubelet[2593]: E0715 23:11:48.392540 2593 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:48.392769 kubelet[2593]: E0715 23:11:48.392697 2593 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:48.406143 kubelet[2593]: I0715 23:11:48.405997 2593 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:11:48.411917 kubelet[2593]: I0715 23:11:48.411892 2593 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 15 23:11:48.412050 kubelet[2593]: I0715 23:11:48.411964 2593 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 23:11:48.453125 kubelet[2593]: I0715 23:11:48.453068 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfcbff80425eea366b65465670a5f4e7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfcbff80425eea366b65465670a5f4e7\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:48.453125 kubelet[2593]: I0715 23:11:48.453123 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfcbff80425eea366b65465670a5f4e7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bfcbff80425eea366b65465670a5f4e7\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:48.453125 kubelet[2593]: I0715 23:11:48.453148 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:48.453341 kubelet[2593]: I0715 23:11:48.453166 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:48.453341 kubelet[2593]: I0715 23:11:48.453182 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:48.453341 kubelet[2593]: I0715 23:11:48.453210 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Jul 15 23:11:48.453341 kubelet[2593]: I0715 23:11:48.453226 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfcbff80425eea366b65465670a5f4e7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfcbff80425eea366b65465670a5f4e7\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:48.453341 kubelet[2593]: I0715 23:11:48.453241 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:48.453459 kubelet[2593]: I0715 23:11:48.453257 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:11:48.692764 kubelet[2593]: E0715 23:11:48.692641 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:48.693317 kubelet[2593]: E0715 23:11:48.693125 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:48.693317 kubelet[2593]: E0715 23:11:48.693166 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:49.238611 kubelet[2593]: I0715 23:11:49.238565 2593 apiserver.go:52] "Watching apiserver" Jul 15 23:11:49.252255 kubelet[2593]: I0715 23:11:49.252207 2593 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:11:49.282450 kubelet[2593]: I0715 23:11:49.282416 2593 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:49.284539 kubelet[2593]: E0715 23:11:49.282918 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:49.284539 kubelet[2593]: E0715 23:11:49.283481 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:49.368052 kubelet[2593]: E0715 23:11:49.367791 2593 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 23:11:49.368052 kubelet[2593]: E0715 23:11:49.367961 2593 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:49.416724 kubelet[2593]: I0715 23:11:49.416646 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.415961368 podStartE2EDuration="3.415961368s" podCreationTimestamp="2025-07-15 23:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:11:49.4151863 +0000 UTC m=+1.226756196" watchObservedRunningTime="2025-07-15 23:11:49.415961368 +0000 UTC m=+1.227531224" Jul 15 23:11:49.416888 kubelet[2593]: I0715 23:11:49.416789 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.41678062 podStartE2EDuration="3.41678062s" podCreationTimestamp="2025-07-15 23:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:11:49.367936269 +0000 UTC m=+1.179506165" watchObservedRunningTime="2025-07-15 23:11:49.41678062 +0000 UTC m=+1.228350516" Jul 15 23:11:49.437261 kubelet[2593]: I0715 23:11:49.436521 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.436489433 podStartE2EDuration="3.436489433s" podCreationTimestamp="2025-07-15 23:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:11:49.436270305 +0000 UTC m=+1.247840241" watchObservedRunningTime="2025-07-15 23:11:49.436489433 +0000 UTC m=+1.248059329" Jul 15 23:11:49.614211 sudo[1667]: pam_unix(sudo:session): session closed for user root Jul 15 23:11:49.615898 sshd[1666]: Connection closed by 10.0.0.1 port 51768 Jul 15 23:11:49.616372 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Jul 15 23:11:49.620152 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:51768.service: Deactivated successfully. Jul 15 23:11:49.621872 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:11:49.622078 systemd[1]: session-5.scope: Consumed 6.174s CPU time, 232.4M memory peak. Jul 15 23:11:49.622980 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:11:49.624222 systemd-logind[1483]: Removed session 5. Jul 15 23:11:50.287331 kubelet[2593]: E0715 23:11:50.285699 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:50.287331 kubelet[2593]: E0715 23:11:50.286053 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:54.859271 kubelet[2593]: I0715 23:11:54.859222 2593 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:11:54.860648 containerd[1512]: time="2025-07-15T23:11:54.859971060Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
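The pod_startup_latency_tracker entries are simple subtractions: observed running time minus pod creation time. Recomputing the kube-apiserver-localhost figure from the two timestamps printed above reproduces the reported podStartSLOduration of 3.415961368s; the sketch below uses those timestamps copied verbatim from the log.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kube-apiserver-localhost entry above
	// (podCreationTimestamp and watchObservedRunningTime, both UTC).
	// Inputs are literals, so parse errors are ignored in this sketch.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2025-07-15 23:11:46 +0000 UTC")
	running, _ := time.Parse(layout, "2025-07-15 23:11:49.415961368 +0000 UTC")

	// Matches the reported podStartSLOduration=3.415961368s.
	fmt.Println("startup duration:", running.Sub(created))
}
```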
Jul 15 23:11:54.860873 kubelet[2593]: I0715 23:11:54.860180 2593 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:11:54.950551 kubelet[2593]: E0715 23:11:54.950494 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:55.293199 kubelet[2593]: E0715 23:11:55.293085 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:55.911952 systemd[1]: Created slice kubepods-besteffort-podfc6724f6_f506_498c_9246_b0d90757e968.slice - libcontainer container kubepods-besteffort-podfc6724f6_f506_498c_9246_b0d90757e968.slice. Jul 15 23:11:55.926185 systemd[1]: Created slice kubepods-burstable-pod384fbba3_8db3_4fd1_8dd8_c943703eeae8.slice - libcontainer container kubepods-burstable-pod384fbba3_8db3_4fd1_8dd8_c943703eeae8.slice. Jul 15 23:11:56.003538 kubelet[2593]: I0715 23:11:56.003504 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/384fbba3-8db3-4fd1-8dd8-c943703eeae8-cni-plugin\") pod \"kube-flannel-ds-cb5xh\" (UID: \"384fbba3-8db3-4fd1-8dd8-c943703eeae8\") " pod="kube-flannel/kube-flannel-ds-cb5xh" Jul 15 23:11:56.004102 kubelet[2593]: I0715 23:11:56.004007 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/384fbba3-8db3-4fd1-8dd8-c943703eeae8-xtables-lock\") pod \"kube-flannel-ds-cb5xh\" (UID: \"384fbba3-8db3-4fd1-8dd8-c943703eeae8\") " pod="kube-flannel/kube-flannel-ds-cb5xh" Jul 15 23:11:56.004102 kubelet[2593]: I0715 23:11:56.004061 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc6724f6-f506-498c-9246-b0d90757e968-lib-modules\") pod \"kube-proxy-9d8hd\" (UID: \"fc6724f6-f506-498c-9246-b0d90757e968\") " pod="kube-system/kube-proxy-9d8hd" Jul 15 23:11:56.004102 kubelet[2593]: I0715 23:11:56.004080 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c54nn\" (UniqueName: \"kubernetes.io/projected/fc6724f6-f506-498c-9246-b0d90757e968-kube-api-access-c54nn\") pod \"kube-proxy-9d8hd\" (UID: \"fc6724f6-f506-498c-9246-b0d90757e968\") " pod="kube-system/kube-proxy-9d8hd" Jul 15 23:11:56.004255 kubelet[2593]: I0715 23:11:56.004133 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/384fbba3-8db3-4fd1-8dd8-c943703eeae8-run\") pod \"kube-flannel-ds-cb5xh\" (UID: \"384fbba3-8db3-4fd1-8dd8-c943703eeae8\") " pod="kube-flannel/kube-flannel-ds-cb5xh" Jul 15 23:11:56.004255 kubelet[2593]: I0715 23:11:56.004196 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc6724f6-f506-498c-9246-b0d90757e968-xtables-lock\") pod \"kube-proxy-9d8hd\" (UID: \"fc6724f6-f506-498c-9246-b0d90757e968\") " pod="kube-system/kube-proxy-9d8hd" Jul 15 23:11:56.004255 kubelet[2593]: I0715 23:11:56.004225 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: 
\"kubernetes.io/host-path/384fbba3-8db3-4fd1-8dd8-c943703eeae8-cni\") pod \"kube-flannel-ds-cb5xh\" (UID: \"384fbba3-8db3-4fd1-8dd8-c943703eeae8\") " pod="kube-flannel/kube-flannel-ds-cb5xh" Jul 15 23:11:56.004255 kubelet[2593]: I0715 23:11:56.004250 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/384fbba3-8db3-4fd1-8dd8-c943703eeae8-flannel-cfg\") pod \"kube-flannel-ds-cb5xh\" (UID: \"384fbba3-8db3-4fd1-8dd8-c943703eeae8\") " pod="kube-flannel/kube-flannel-ds-cb5xh" Jul 15 23:11:56.005268 kubelet[2593]: I0715 23:11:56.004274 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc6724f6-f506-498c-9246-b0d90757e968-kube-proxy\") pod \"kube-proxy-9d8hd\" (UID: \"fc6724f6-f506-498c-9246-b0d90757e968\") " pod="kube-system/kube-proxy-9d8hd" Jul 15 23:11:56.005268 kubelet[2593]: I0715 23:11:56.004295 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrp28\" (UniqueName: \"kubernetes.io/projected/384fbba3-8db3-4fd1-8dd8-c943703eeae8-kube-api-access-wrp28\") pod \"kube-flannel-ds-cb5xh\" (UID: \"384fbba3-8db3-4fd1-8dd8-c943703eeae8\") " pod="kube-flannel/kube-flannel-ds-cb5xh" Jul 15 23:11:56.224136 kubelet[2593]: E0715 23:11:56.223645 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:56.224481 containerd[1512]: time="2025-07-15T23:11:56.224431523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9d8hd,Uid:fc6724f6-f506-498c-9246-b0d90757e968,Namespace:kube-system,Attempt:0,}" Jul 15 23:11:56.230554 kubelet[2593]: E0715 23:11:56.230458 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:56.231185 containerd[1512]: time="2025-07-15T23:11:56.231150249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cb5xh,Uid:384fbba3-8db3-4fd1-8dd8-c943703eeae8,Namespace:kube-flannel,Attempt:0,}" Jul 15 23:11:56.252225 containerd[1512]: time="2025-07-15T23:11:56.250826765Z" level=info msg="connecting to shim da16721822c8867d1d16e1da7482599df4b1f9276aa4b136e157a8c1756bc324" address="unix:///run/containerd/s/9a924f1c7b6514281424122ef3c30f4f2a020124fd1c2a2b42250cc4cdc656ca" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:11:56.256123 containerd[1512]: time="2025-07-15T23:11:56.256079395Z" level=info msg="connecting to shim 35054054869aac9397d5563f381eced8bbbd237c48961bb6a17dfb50373a2bcd" address="unix:///run/containerd/s/2af79fb2a6174f74de375a6fe05a97424433e43181682810f04ddbad5626e29d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:11:56.281175 systemd[1]: Started cri-containerd-da16721822c8867d1d16e1da7482599df4b1f9276aa4b136e157a8c1756bc324.scope - libcontainer container da16721822c8867d1d16e1da7482599df4b1f9276aa4b136e157a8c1756bc324. Jul 15 23:11:56.284801 systemd[1]: Started cri-containerd-35054054869aac9397d5563f381eced8bbbd237c48961bb6a17dfb50373a2bcd.scope - libcontainer container 35054054869aac9397d5563f381eced8bbbd237c48961bb6a17dfb50373a2bcd. 
Jul 15 23:11:56.296048 kubelet[2593]: E0715 23:11:56.295514 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:56.329939 containerd[1512]: time="2025-07-15T23:11:56.329884597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9d8hd,Uid:fc6724f6-f506-498c-9246-b0d90757e968,Namespace:kube-system,Attempt:0,} returns sandbox id \"da16721822c8867d1d16e1da7482599df4b1f9276aa4b136e157a8c1756bc324\"" Jul 15 23:11:56.330719 kubelet[2593]: E0715 23:11:56.330680 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:56.331796 containerd[1512]: time="2025-07-15T23:11:56.331750610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cb5xh,Uid:384fbba3-8db3-4fd1-8dd8-c943703eeae8,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"35054054869aac9397d5563f381eced8bbbd237c48961bb6a17dfb50373a2bcd\"" Jul 15 23:11:56.332451 kubelet[2593]: E0715 23:11:56.332409 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:56.333348 containerd[1512]: time="2025-07-15T23:11:56.333309246Z" level=info msg="CreateContainer within sandbox \"da16721822c8867d1d16e1da7482599df4b1f9276aa4b136e157a8c1756bc324\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:11:56.335323 containerd[1512]: time="2025-07-15T23:11:56.335289155Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jul 15 23:11:56.344043 containerd[1512]: time="2025-07-15T23:11:56.343983031Z" level=info msg="Container 744efce09d87349eadf5082b753cddc77c817ca87580c0061bfd2583521ed5b0: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:11:56.351224 containerd[1512]: time="2025-07-15T23:11:56.351170899Z" level=info msg="CreateContainer within sandbox \"da16721822c8867d1d16e1da7482599df4b1f9276aa4b136e157a8c1756bc324\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"744efce09d87349eadf5082b753cddc77c817ca87580c0061bfd2583521ed5b0\"" Jul 15 23:11:56.351812 containerd[1512]: time="2025-07-15T23:11:56.351784772Z" level=info msg="StartContainer for \"744efce09d87349eadf5082b753cddc77c817ca87580c0061bfd2583521ed5b0\"" Jul 15 23:11:56.353292 containerd[1512]: time="2025-07-15T23:11:56.353255427Z" level=info msg="connecting to shim 744efce09d87349eadf5082b753cddc77c817ca87580c0061bfd2583521ed5b0" address="unix:///run/containerd/s/9a924f1c7b6514281424122ef3c30f4f2a020124fd1c2a2b42250cc4cdc656ca" protocol=ttrpc version=3 Jul 15 23:11:56.380258 systemd[1]: Started cri-containerd-744efce09d87349eadf5082b753cddc77c817ca87580c0061bfd2583521ed5b0.scope - libcontainer container 744efce09d87349eadf5082b753cddc77c817ca87580c0061bfd2583521ed5b0. 
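The kube-api-access-c54nn and kube-api-access-wrp28 projected volumes mounted above are the auto-generated service-account token mounts; the five-character suffix is random. The sketch below only imitates the shape of those names; the exact character set Kubernetes draws from is an assumption here, not something visible in this log.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// Assumed character set for the suffix: Kubernetes avoids vowels and
// easily-confused characters in its generated names, so treat this exact
// alphabet as an approximation rather than a quoted constant.
const alphabet = "bcdfghjklmnpqrstvwxz2456789"

func suffix(n int) string {
	out := make([]byte, n)
	for i := range out {
		idx, _ := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		out[i] = alphabet[idx.Int64()]
	}
	return string(out)
}

func main() {
	// Produces names shaped like the kube-api-access-c54nn volume mounted above.
	fmt.Println("kube-api-access-" + suffix(5))
}
```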
Jul 15 23:11:56.418963 containerd[1512]: time="2025-07-15T23:11:56.417925125Z" level=info msg="StartContainer for \"744efce09d87349eadf5082b753cddc77c817ca87580c0061bfd2583521ed5b0\" returns successfully" Jul 15 23:11:56.472547 kubelet[2593]: E0715 23:11:56.472511 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:57.300618 kubelet[2593]: E0715 23:11:57.300577 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:57.301091 kubelet[2593]: E0715 23:11:57.301063 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:57.380581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852577233.mount: Deactivated successfully. Jul 15 23:11:57.406439 containerd[1512]: time="2025-07-15T23:11:57.406391636Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:57.406822 containerd[1512]: time="2025-07-15T23:11:57.406788639Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" Jul 15 23:11:57.407592 containerd[1512]: time="2025-07-15T23:11:57.407569327Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:57.409481 containerd[1512]: time="2025-07-15T23:11:57.409441243Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:57.410074 containerd[1512]: time="2025-07-15T23:11:57.410048804Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.074723616s" Jul 15 23:11:57.410120 containerd[1512]: time="2025-07-15T23:11:57.410079798Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jul 15 23:11:57.412224 containerd[1512]: time="2025-07-15T23:11:57.412197426Z" level=info msg="CreateContainer within sandbox \"35054054869aac9397d5563f381eced8bbbd237c48961bb6a17dfb50373a2bcd\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jul 15 23:11:57.422223 containerd[1512]: time="2025-07-15T23:11:57.418434893Z" level=info msg="Container 3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:11:57.423959 containerd[1512]: time="2025-07-15T23:11:57.423923145Z" level=info msg="CreateContainer within sandbox \"35054054869aac9397d5563f381eced8bbbd237c48961bb6a17dfb50373a2bcd\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id 
\"3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120\"" Jul 15 23:11:57.424534 containerd[1512]: time="2025-07-15T23:11:57.424412490Z" level=info msg="StartContainer for \"3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120\"" Jul 15 23:11:57.425652 containerd[1512]: time="2025-07-15T23:11:57.425600459Z" level=info msg="connecting to shim 3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120" address="unix:///run/containerd/s/2af79fb2a6174f74de375a6fe05a97424433e43181682810f04ddbad5626e29d" protocol=ttrpc version=3 Jul 15 23:11:57.441171 systemd[1]: Started cri-containerd-3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120.scope - libcontainer container 3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120. Jul 15 23:11:57.469290 containerd[1512]: time="2025-07-15T23:11:57.469208654Z" level=info msg="StartContainer for \"3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120\" returns successfully" Jul 15 23:11:57.471867 systemd[1]: cri-containerd-3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120.scope: Deactivated successfully. Jul 15 23:11:57.477960 containerd[1512]: time="2025-07-15T23:11:57.477905522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120\" id:\"3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120\" pid:2938 exited_at:{seconds:1752621117 nanos:474953616}" Jul 15 23:11:57.478212 containerd[1512]: time="2025-07-15T23:11:57.477905682Z" level=info msg="received exit event container_id:\"3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120\" id:\"3eb782335eb7591699e6625a6b5dff2cb947c1f29b4686898af3c94d5f266120\" pid:2938 exited_at:{seconds:1752621117 nanos:474953616}" Jul 15 23:11:58.245270 kubelet[2593]: E0715 23:11:58.245193 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:58.264288 kubelet[2593]: I0715 23:11:58.263820 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9d8hd" podStartSLOduration=3.263803732 podStartE2EDuration="3.263803732s" podCreationTimestamp="2025-07-15 23:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:11:57.322223092 +0000 UTC m=+9.133793028" watchObservedRunningTime="2025-07-15 23:11:58.263803732 +0000 UTC m=+10.075373628" Jul 15 23:11:58.305417 kubelet[2593]: E0715 23:11:58.304581 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:58.305417 kubelet[2593]: E0715 23:11:58.304947 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:11:58.307920 containerd[1512]: time="2025-07-15T23:11:58.307802507Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jul 15 23:11:59.432338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1225991118.mount: Deactivated successfully. 
Jul 15 23:11:59.910047 containerd[1512]: time="2025-07-15T23:11:59.909934678Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:59.910825 containerd[1512]: time="2025-07-15T23:11:59.910797610Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jul 15 23:11:59.913062 containerd[1512]: time="2025-07-15T23:11:59.912741398Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:59.915500 containerd[1512]: time="2025-07-15T23:11:59.915469051Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:11:59.916695 containerd[1512]: time="2025-07-15T23:11:59.916658968Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.608817228s" Jul 15 23:11:59.916799 containerd[1512]: time="2025-07-15T23:11:59.916781827Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jul 15 23:11:59.920818 containerd[1512]: time="2025-07-15T23:11:59.920786822Z" level=info msg="CreateContainer within sandbox \"35054054869aac9397d5563f381eced8bbbd237c48961bb6a17dfb50373a2bcd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 23:11:59.925744 containerd[1512]: time="2025-07-15T23:11:59.925703541Z" level=info msg="Container f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:11:59.930891 containerd[1512]: time="2025-07-15T23:11:59.930835903Z" level=info msg="CreateContainer within sandbox \"35054054869aac9397d5563f381eced8bbbd237c48961bb6a17dfb50373a2bcd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da\"" Jul 15 23:11:59.931869 containerd[1512]: time="2025-07-15T23:11:59.931718633Z" level=info msg="StartContainer for \"f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da\"" Jul 15 23:11:59.932738 containerd[1512]: time="2025-07-15T23:11:59.932713982Z" level=info msg="connecting to shim f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da" address="unix:///run/containerd/s/2af79fb2a6174f74de375a6fe05a97424433e43181682810f04ddbad5626e29d" protocol=ttrpc version=3 Jul 15 23:11:59.955179 systemd[1]: Started cri-containerd-f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da.scope - libcontainer container f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da. Jul 15 23:11:59.999168 systemd[1]: cri-containerd-f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da.scope: Deactivated successfully. 
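The install-cni-plugin container above and the install-cni container created here both exit within a fraction of a second of starting (their cri-containerd scopes are deactivated almost immediately), which matches how the stock kube-flannel DaemonSet stages files on the host: each is an init container that copies one artifact into place and terminates. A sketch of what those two steps conventionally run, taken from the upstream kube-flannel manifest of this era rather than from anything this log confirms:

    cp -f /flannel /opt/cni/bin/flannel                                        # install-cni-plugin: stage the flannel CNI binary
    cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist   # install-cni: stage the CNI network config

Once both copies have landed, the main kube-flannel container (started just below at 23:12:00.3) can bring up the overlay network.
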
Jul 15 23:12:00.001722 containerd[1512]: time="2025-07-15T23:12:00.001681195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da\" id:\"f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da\" pid:3013 exited_at:{seconds:1752621119 nanos:999745200}" Jul 15 23:12:00.002941 containerd[1512]: time="2025-07-15T23:12:00.002906039Z" level=info msg="StartContainer for \"f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da\" returns successfully" Jul 15 23:12:00.010584 containerd[1512]: time="2025-07-15T23:12:00.010527377Z" level=info msg="received exit event container_id:\"f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da\" id:\"f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da\" pid:3013 exited_at:{seconds:1752621119 nanos:999745200}" Jul 15 23:12:00.085270 kubelet[2593]: I0715 23:12:00.084231 2593 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 23:12:00.154054 systemd[1]: Created slice kubepods-burstable-podfcfb0df1_61d3_4200_a067_53474974f802.slice - libcontainer container kubepods-burstable-podfcfb0df1_61d3_4200_a067_53474974f802.slice. Jul 15 23:12:00.161657 systemd[1]: Created slice kubepods-burstable-pod2c9bd12d_10a1_448c_8839_313011ba9fc8.slice - libcontainer container kubepods-burstable-pod2c9bd12d_10a1_448c_8839_313011ba9fc8.slice. Jul 15 23:12:00.233731 kubelet[2593]: I0715 23:12:00.233647 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv6cw\" (UniqueName: \"kubernetes.io/projected/fcfb0df1-61d3-4200-a067-53474974f802-kube-api-access-gv6cw\") pod \"coredns-668d6bf9bc-8jbx8\" (UID: \"fcfb0df1-61d3-4200-a067-53474974f802\") " pod="kube-system/coredns-668d6bf9bc-8jbx8" Jul 15 23:12:00.233731 kubelet[2593]: I0715 23:12:00.233717 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrq7\" (UniqueName: \"kubernetes.io/projected/2c9bd12d-10a1-448c-8839-313011ba9fc8-kube-api-access-dzrq7\") pod \"coredns-668d6bf9bc-9gmgx\" (UID: \"2c9bd12d-10a1-448c-8839-313011ba9fc8\") " pod="kube-system/coredns-668d6bf9bc-9gmgx" Jul 15 23:12:00.233731 kubelet[2593]: I0715 23:12:00.233740 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcfb0df1-61d3-4200-a067-53474974f802-config-volume\") pod \"coredns-668d6bf9bc-8jbx8\" (UID: \"fcfb0df1-61d3-4200-a067-53474974f802\") " pod="kube-system/coredns-668d6bf9bc-8jbx8" Jul 15 23:12:00.233731 kubelet[2593]: I0715 23:12:00.233774 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c9bd12d-10a1-448c-8839-313011ba9fc8-config-volume\") pod \"coredns-668d6bf9bc-9gmgx\" (UID: \"2c9bd12d-10a1-448c-8839-313011ba9fc8\") " pod="kube-system/coredns-668d6bf9bc-9gmgx" Jul 15 23:12:00.309019 kubelet[2593]: E0715 23:12:00.308988 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:00.313512 containerd[1512]: time="2025-07-15T23:12:00.313447855Z" level=info msg="CreateContainer within sandbox \"35054054869aac9397d5563f381eced8bbbd237c48961bb6a17dfb50373a2bcd\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jul 15 
23:12:00.323721 containerd[1512]: time="2025-07-15T23:12:00.323679015Z" level=info msg="Container ffac64f6903d1f5670cba2367dc42b56f27b975e2300096842b4010e7499477c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:12:00.329157 containerd[1512]: time="2025-07-15T23:12:00.329115063Z" level=info msg="CreateContainer within sandbox \"35054054869aac9397d5563f381eced8bbbd237c48961bb6a17dfb50373a2bcd\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"ffac64f6903d1f5670cba2367dc42b56f27b975e2300096842b4010e7499477c\"" Jul 15 23:12:00.329756 containerd[1512]: time="2025-07-15T23:12:00.329728765Z" level=info msg="StartContainer for \"ffac64f6903d1f5670cba2367dc42b56f27b975e2300096842b4010e7499477c\"" Jul 15 23:12:00.330620 containerd[1512]: time="2025-07-15T23:12:00.330575629Z" level=info msg="connecting to shim ffac64f6903d1f5670cba2367dc42b56f27b975e2300096842b4010e7499477c" address="unix:///run/containerd/s/2af79fb2a6174f74de375a6fe05a97424433e43181682810f04ddbad5626e29d" protocol=ttrpc version=3 Jul 15 23:12:00.349745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f29aae66b9f674d2662cdd54b89a9549def6abfaf1c03d6a63b290a826fdf5da-rootfs.mount: Deactivated successfully. Jul 15 23:12:00.368258 systemd[1]: Started cri-containerd-ffac64f6903d1f5670cba2367dc42b56f27b975e2300096842b4010e7499477c.scope - libcontainer container ffac64f6903d1f5670cba2367dc42b56f27b975e2300096842b4010e7499477c. Jul 15 23:12:00.397775 containerd[1512]: time="2025-07-15T23:12:00.397719385Z" level=info msg="StartContainer for \"ffac64f6903d1f5670cba2367dc42b56f27b975e2300096842b4010e7499477c\" returns successfully" Jul 15 23:12:00.458868 kubelet[2593]: E0715 23:12:00.458620 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:00.461076 containerd[1512]: time="2025-07-15T23:12:00.461032235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8jbx8,Uid:fcfb0df1-61d3-4200-a067-53474974f802,Namespace:kube-system,Attempt:0,}" Jul 15 23:12:00.465048 kubelet[2593]: E0715 23:12:00.464934 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:00.467918 containerd[1512]: time="2025-07-15T23:12:00.467467484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gmgx,Uid:2c9bd12d-10a1-448c-8839-313011ba9fc8,Namespace:kube-system,Attempt:0,}" Jul 15 23:12:00.515074 containerd[1512]: time="2025-07-15T23:12:00.514982786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8jbx8,Uid:fcfb0df1-61d3-4200-a067-53474974f802,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d19a6d8f94c09ad51647518d92f44b3b3d7afd42af9accb018fd56ea05737f0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 15 23:12:00.515358 kubelet[2593]: E0715 23:12:00.515273 2593 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d19a6d8f94c09ad51647518d92f44b3b3d7afd42af9accb018fd56ea05737f0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 15 23:12:00.515399 kubelet[2593]: 
E0715 23:12:00.515362 2593 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d19a6d8f94c09ad51647518d92f44b3b3d7afd42af9accb018fd56ea05737f0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-8jbx8" Jul 15 23:12:00.515399 kubelet[2593]: E0715 23:12:00.515381 2593 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d19a6d8f94c09ad51647518d92f44b3b3d7afd42af9accb018fd56ea05737f0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-8jbx8" Jul 15 23:12:00.515775 kubelet[2593]: E0715 23:12:00.515428 2593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8jbx8_kube-system(fcfb0df1-61d3-4200-a067-53474974f802)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8jbx8_kube-system(fcfb0df1-61d3-4200-a067-53474974f802)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d19a6d8f94c09ad51647518d92f44b3b3d7afd42af9accb018fd56ea05737f0\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-8jbx8" podUID="fcfb0df1-61d3-4200-a067-53474974f802" Jul 15 23:12:00.516519 containerd[1512]: time="2025-07-15T23:12:00.516471028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gmgx,Uid:2c9bd12d-10a1-448c-8839-313011ba9fc8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e770014534babc492f9160b3870d82af9753936d7110553ce3b342ef0adb9ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 15 23:12:00.516791 kubelet[2593]: E0715 23:12:00.516653 2593 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e770014534babc492f9160b3870d82af9753936d7110553ce3b342ef0adb9ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 15 23:12:00.516791 kubelet[2593]: E0715 23:12:00.516704 2593 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e770014534babc492f9160b3870d82af9753936d7110553ce3b342ef0adb9ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-9gmgx" Jul 15 23:12:00.516791 kubelet[2593]: E0715 23:12:00.516721 2593 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e770014534babc492f9160b3870d82af9753936d7110553ce3b342ef0adb9ed\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-9gmgx" Jul 15 23:12:00.516791 kubelet[2593]: E0715 23:12:00.516757 2593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-9gmgx_kube-system(2c9bd12d-10a1-448c-8839-313011ba9fc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9gmgx_kube-system(2c9bd12d-10a1-448c-8839-313011ba9fc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e770014534babc492f9160b3870d82af9753936d7110553ce3b342ef0adb9ed\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-9gmgx" podUID="2c9bd12d-10a1-448c-8839-313011ba9fc8" Jul 15 23:12:00.518072 systemd[1]: run-netns-cni\x2d3f57db26\x2d8d29\x2d9d47\x2d99ec\x2d6780500d6cb0.mount: Deactivated successfully. Jul 15 23:12:00.518716 systemd[1]: run-netns-cni\x2d25b59f83\x2db229\x2d4d20\x2d4675\x2d8509ff62ca72.mount: Deactivated successfully. Jul 15 23:12:01.315016 kubelet[2593]: E0715 23:12:01.314936 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:01.481208 systemd-networkd[1435]: flannel.1: Link UP Jul 15 23:12:01.481213 systemd-networkd[1435]: flannel.1: Gained carrier Jul 15 23:12:02.316739 kubelet[2593]: E0715 23:12:02.316692 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:02.483717 update_engine[1485]: I20250715 23:12:02.483166 1485 update_attempter.cc:509] Updating boot flags... Jul 15 23:12:02.912228 systemd-networkd[1435]: flannel.1: Gained IPv6LL Jul 15 23:12:11.263558 kubelet[2593]: E0715 23:12:11.263498 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:11.263979 containerd[1512]: time="2025-07-15T23:12:11.263856226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gmgx,Uid:2c9bd12d-10a1-448c-8839-313011ba9fc8,Namespace:kube-system,Attempt:0,}" Jul 15 23:12:11.282146 systemd-networkd[1435]: cni0: Link UP Jul 15 23:12:11.282152 systemd-networkd[1435]: cni0: Gained carrier Jul 15 23:12:11.285481 systemd-networkd[1435]: cni0: Lost carrier Jul 15 23:12:11.287468 systemd-networkd[1435]: veth6b0299bf: Link UP Jul 15 23:12:11.291050 kernel: cni0: port 1(veth6b0299bf) entered blocking state Jul 15 23:12:11.291111 kernel: cni0: port 1(veth6b0299bf) entered disabled state Jul 15 23:12:11.291128 kernel: veth6b0299bf: entered allmulticast mode Jul 15 23:12:11.293052 kernel: veth6b0299bf: entered promiscuous mode Jul 15 23:12:11.306041 kernel: cni0: port 1(veth6b0299bf) entered blocking state Jul 15 23:12:11.306104 kernel: cni0: port 1(veth6b0299bf) entered forwarding state Jul 15 23:12:11.305952 systemd-networkd[1435]: veth6b0299bf: Gained carrier Jul 15 23:12:11.306322 systemd-networkd[1435]: cni0: Gained carrier Jul 15 23:12:11.307941 containerd[1512]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001c938), "name":"cbr0", "type":"bridge"} 
Jul 15 23:12:11.307941 containerd[1512]: delegateAdd: netconf sent to delegate plugin: Jul 15 23:12:11.340528 containerd[1512]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-07-15T23:12:11.340489866Z" level=info msg="connecting to shim 5adb6ba6c3491059b950bcc6cff6bbb17b9bfb4e421c395e4824997d4b4903ab" address="unix:///run/containerd/s/aa0a9ed304b08de3f4573f43e3e4e231c26a1b62929750887b3d0ff537919942" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:12:11.368212 systemd[1]: Started cri-containerd-5adb6ba6c3491059b950bcc6cff6bbb17b9bfb4e421c395e4824997d4b4903ab.scope - libcontainer container 5adb6ba6c3491059b950bcc6cff6bbb17b9bfb4e421c395e4824997d4b4903ab. Jul 15 23:12:11.378747 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:12:11.406307 containerd[1512]: time="2025-07-15T23:12:11.406272081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gmgx,Uid:2c9bd12d-10a1-448c-8839-313011ba9fc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5adb6ba6c3491059b950bcc6cff6bbb17b9bfb4e421c395e4824997d4b4903ab\"" Jul 15 23:12:11.407087 kubelet[2593]: E0715 23:12:11.407056 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:11.410786 containerd[1512]: time="2025-07-15T23:12:11.410752408Z" level=info msg="CreateContainer within sandbox \"5adb6ba6c3491059b950bcc6cff6bbb17b9bfb4e421c395e4824997d4b4903ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:12:11.420541 containerd[1512]: time="2025-07-15T23:12:11.420324574Z" level=info msg="Container 90f18153687c937fb87fa501cb491de1f9ef2a0632e65d9f16bfb0c917a12c0e: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:12:11.423337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066376564.mount: Deactivated successfully. Jul 15 23:12:11.431158 containerd[1512]: time="2025-07-15T23:12:11.431123963Z" level=info msg="CreateContainer within sandbox \"5adb6ba6c3491059b950bcc6cff6bbb17b9bfb4e421c395e4824997d4b4903ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90f18153687c937fb87fa501cb491de1f9ef2a0632e65d9f16bfb0c917a12c0e\"" Jul 15 23:12:11.431921 containerd[1512]: time="2025-07-15T23:12:11.431892622Z" level=info msg="StartContainer for \"90f18153687c937fb87fa501cb491de1f9ef2a0632e65d9f16bfb0c917a12c0e\"" Jul 15 23:12:11.432687 containerd[1512]: time="2025-07-15T23:12:11.432664681Z" level=info msg="connecting to shim 90f18153687c937fb87fa501cb491de1f9ef2a0632e65d9f16bfb0c917a12c0e" address="unix:///run/containerd/s/aa0a9ed304b08de3f4573f43e3e4e231c26a1b62929750887b3d0ff537919942" protocol=ttrpc version=3 Jul 15 23:12:11.459204 systemd[1]: Started cri-containerd-90f18153687c937fb87fa501cb491de1f9ef2a0632e65d9f16bfb0c917a12c0e.scope - libcontainer container 90f18153687c937fb87fa501cb491de1f9ef2a0632e65d9f16bfb0c917a12c0e. 
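The RunPodSandbox failures at 23:12:00 ("loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory") and the successful retry here at 23:12:11 bracket the moment the kube-flannel daemon wrote its subnet file: the flannel CNI plugin reads /run/flannel/subnet.env on every ADD and uses it to fill in the bridge configuration dumped just above. A reconstruction of what that file plausibly contained, inferred from the logged netconf (192.168.0.0/17 cluster route, a /24 node subnet, MTU 1450) rather than read from the host:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true

MTU 1450 is the usual 1500-byte NIC MTU minus the 50-byte VXLAN encapsulation overhead carried by the flannel.1 interface that came up at 23:12:01.
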
Jul 15 23:12:11.485325 containerd[1512]: time="2025-07-15T23:12:11.485277894Z" level=info msg="StartContainer for \"90f18153687c937fb87fa501cb491de1f9ef2a0632e65d9f16bfb0c917a12c0e\" returns successfully" Jul 15 23:12:12.334568 kubelet[2593]: E0715 23:12:12.334525 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:12.347332 kubelet[2593]: I0715 23:12:12.347135 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-cb5xh" podStartSLOduration=13.762252988 podStartE2EDuration="17.347115474s" podCreationTimestamp="2025-07-15 23:11:55 +0000 UTC" firstStartedPulling="2025-07-15 23:11:56.332831665 +0000 UTC m=+8.144401521" lastFinishedPulling="2025-07-15 23:11:59.917694111 +0000 UTC m=+11.729264007" observedRunningTime="2025-07-15 23:12:01.328313849 +0000 UTC m=+13.139883745" watchObservedRunningTime="2025-07-15 23:12:12.347115474 +0000 UTC m=+24.158685370" Jul 15 23:12:12.366344 kubelet[2593]: I0715 23:12:12.366002 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9gmgx" podStartSLOduration=17.36598372 podStartE2EDuration="17.36598372s" podCreationTimestamp="2025-07-15 23:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:12:12.347560441 +0000 UTC m=+24.159130337" watchObservedRunningTime="2025-07-15 23:12:12.36598372 +0000 UTC m=+24.177553616" Jul 15 23:12:12.960251 systemd-networkd[1435]: cni0: Gained IPv6LL Jul 15 23:12:13.088219 systemd-networkd[1435]: veth6b0299bf: Gained IPv6LL Jul 15 23:12:13.335859 kubelet[2593]: E0715 23:12:13.335699 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:14.340654 kubelet[2593]: E0715 23:12:14.340608 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:16.263830 kubelet[2593]: E0715 23:12:16.263631 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:16.264679 containerd[1512]: time="2025-07-15T23:12:16.264411614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8jbx8,Uid:fcfb0df1-61d3-4200-a067-53474974f802,Namespace:kube-system,Attempt:0,}" Jul 15 23:12:16.280816 systemd-networkd[1435]: vethff516962: Link UP Jul 15 23:12:16.283176 kernel: cni0: port 2(vethff516962) entered blocking state Jul 15 23:12:16.283227 kernel: cni0: port 2(vethff516962) entered disabled state Jul 15 23:12:16.283242 kernel: vethff516962: entered allmulticast mode Jul 15 23:12:16.284089 kernel: vethff516962: entered promiscuous mode Jul 15 23:12:16.288401 kernel: cni0: port 2(vethff516962) entered blocking state Jul 15 23:12:16.288459 kernel: cni0: port 2(vethff516962) entered forwarding state Jul 15 23:12:16.288511 systemd-networkd[1435]: vethff516962: Gained carrier Jul 15 23:12:16.290611 containerd[1512]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface 
{}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400011e8e8), "name":"cbr0", "type":"bridge"} Jul 15 23:12:16.290611 containerd[1512]: delegateAdd: netconf sent to delegate plugin: Jul 15 23:12:16.325569 containerd[1512]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-07-15T23:12:16.325522246Z" level=info msg="connecting to shim 8619695597cd370fd0812f623a9f4cc993c9b479a0552357e3d65e03bd43c8ef" address="unix:///run/containerd/s/583363cd2c27b86338e71633670c317204305e356d7b59ce754347fb335b80ae" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:12:16.347183 systemd[1]: Started cri-containerd-8619695597cd370fd0812f623a9f4cc993c9b479a0552357e3d65e03bd43c8ef.scope - libcontainer container 8619695597cd370fd0812f623a9f4cc993c9b479a0552357e3d65e03bd43c8ef. Jul 15 23:12:16.360233 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 23:12:16.382638 containerd[1512]: time="2025-07-15T23:12:16.382605628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8jbx8,Uid:fcfb0df1-61d3-4200-a067-53474974f802,Namespace:kube-system,Attempt:0,} returns sandbox id \"8619695597cd370fd0812f623a9f4cc993c9b479a0552357e3d65e03bd43c8ef\"" Jul 15 23:12:16.383555 kubelet[2593]: E0715 23:12:16.383367 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:16.386228 containerd[1512]: time="2025-07-15T23:12:16.386201463Z" level=info msg="CreateContainer within sandbox \"8619695597cd370fd0812f623a9f4cc993c9b479a0552357e3d65e03bd43c8ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:12:16.409057 containerd[1512]: time="2025-07-15T23:12:16.408747016Z" level=info msg="Container da1d3016a31c0bed2e2e3c52ea41ad8c823e42b887e7e5237bdb790c632044fe: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:12:16.414255 containerd[1512]: time="2025-07-15T23:12:16.414215024Z" level=info msg="CreateContainer within sandbox \"8619695597cd370fd0812f623a9f4cc993c9b479a0552357e3d65e03bd43c8ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da1d3016a31c0bed2e2e3c52ea41ad8c823e42b887e7e5237bdb790c632044fe\"" Jul 15 23:12:16.414965 containerd[1512]: time="2025-07-15T23:12:16.414920744Z" level=info msg="StartContainer for \"da1d3016a31c0bed2e2e3c52ea41ad8c823e42b887e7e5237bdb790c632044fe\"" Jul 15 23:12:16.415883 containerd[1512]: time="2025-07-15T23:12:16.415844091Z" level=info msg="connecting to shim da1d3016a31c0bed2e2e3c52ea41ad8c823e42b887e7e5237bdb790c632044fe" address="unix:///run/containerd/s/583363cd2c27b86338e71633670c317204305e356d7b59ce754347fb335b80ae" protocol=ttrpc version=3 Jul 15 23:12:16.448335 systemd[1]: Started cri-containerd-da1d3016a31c0bed2e2e3c52ea41ad8c823e42b887e7e5237bdb790c632044fe.scope - libcontainer container da1d3016a31c0bed2e2e3c52ea41ad8c823e42b887e7e5237bdb790c632044fe. 
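Both sandbox creations (coredns-9gmgx at 23:12:11 and coredns-8jbx8 at 23:12:16) log the same two-stage CNI flow: containerd invokes the flannel plugin from /etc/cni/net.d, and flannel delegates to the bridge plugin with host-local IPAM, which is the JSON printed after "delegateAdd: netconf sent to delegate plugin". The on-disk config that triggers this is only a thin wrapper; a sketch of the conventional 10-flannel.conflist (the upstream default, assumed rather than recovered from this host):

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

The kernel messages around each sandbox (veth6b0299bf and vethff516962 entering promiscuous mode, cni0 ports moving from blocking to forwarding) are the bridge plugin wiring each pod's veth into the cni0 bridge; cross-node traffic then leaves through the flannel.1 VXLAN device.
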
Jul 15 23:12:16.451243 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:45728.service - OpenSSH per-connection server daemon (10.0.0.1:45728). Jul 15 23:12:16.484521 containerd[1512]: time="2025-07-15T23:12:16.484481853Z" level=info msg="StartContainer for \"da1d3016a31c0bed2e2e3c52ea41ad8c823e42b887e7e5237bdb790c632044fe\" returns successfully" Jul 15 23:12:16.497828 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 45728 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:16.499253 sshd-session[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:16.503927 systemd-logind[1483]: New session 6 of user core. Jul 15 23:12:16.509173 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 23:12:16.632887 sshd[3535]: Connection closed by 10.0.0.1 port 45728 Jul 15 23:12:16.633561 sshd-session[3511]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:16.637645 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Jul 15 23:12:16.637864 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:45728.service: Deactivated successfully. Jul 15 23:12:16.639408 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 23:12:16.640579 systemd-logind[1483]: Removed session 6. Jul 15 23:12:17.351535 kubelet[2593]: E0715 23:12:17.351411 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:17.360859 kubelet[2593]: I0715 23:12:17.360805 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8jbx8" podStartSLOduration=22.360791425 podStartE2EDuration="22.360791425s" podCreationTimestamp="2025-07-15 23:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:12:17.360681871 +0000 UTC m=+29.172251767" watchObservedRunningTime="2025-07-15 23:12:17.360791425 +0000 UTC m=+29.172361321" Jul 15 23:12:17.377259 systemd-networkd[1435]: vethff516962: Gained IPv6LL Jul 15 23:12:18.359612 kubelet[2593]: E0715 23:12:18.359211 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:19.361201 kubelet[2593]: E0715 23:12:19.361164 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:12:21.647530 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:45730.service - OpenSSH per-connection server daemon (10.0.0.1:45730). Jul 15 23:12:21.696418 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 45730 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:21.699565 sshd-session[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:21.703553 systemd-logind[1483]: New session 7 of user core. Jul 15 23:12:21.719203 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 23:12:21.832363 sshd[3602]: Connection closed by 10.0.0.1 port 45730 Jul 15 23:12:21.832773 sshd-session[3600]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:21.837607 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:45730.service: Deactivated successfully. 
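The pod_startup_latency_tracker lines in this window are plain arithmetic over the timestamps they print. For coredns-668d6bf9bc-8jbx8 just above, podStartSLOduration = observedRunningTime - podCreationTimestamp = 23:12:17.360 - 23:11:55 ≈ 22.36 s, and it equals podStartE2EDuration because both pull timestamps are the zero time (no image had to be pulled). For kube-flannel-ds-cb5xh earlier, the two figures differ: roughly 3.58 s of image pulling (23:11:56.332 to 23:11:59.917) is excluded from the ~17.35 s end-to-end duration, giving the ~13.76 s SLO duration.
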
Jul 15 23:12:21.840287 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:12:21.841045 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:12:21.842767 systemd-logind[1483]: Removed session 7. Jul 15 23:12:26.855419 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:34606.service - OpenSSH per-connection server daemon (10.0.0.1:34606). Jul 15 23:12:26.920857 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 34606 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:26.922086 sshd-session[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:26.925792 systemd-logind[1483]: New session 8 of user core. Jul 15 23:12:26.933160 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 23:12:27.051414 sshd[3644]: Connection closed by 10.0.0.1 port 34606 Jul 15 23:12:27.051994 sshd-session[3642]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:27.066064 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:34606.service: Deactivated successfully. Jul 15 23:12:27.067769 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 23:12:27.068504 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Jul 15 23:12:27.071246 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:34612.service - OpenSSH per-connection server daemon (10.0.0.1:34612). Jul 15 23:12:27.071951 systemd-logind[1483]: Removed session 8. Jul 15 23:12:27.136621 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 34612 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:27.137875 sshd-session[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:27.142527 systemd-logind[1483]: New session 9 of user core. Jul 15 23:12:27.151677 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 23:12:27.312067 sshd[3660]: Connection closed by 10.0.0.1 port 34612 Jul 15 23:12:27.311054 sshd-session[3658]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:27.322705 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:34612.service: Deactivated successfully. Jul 15 23:12:27.324351 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 23:12:27.330084 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Jul 15 23:12:27.332820 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:34618.service - OpenSSH per-connection server daemon (10.0.0.1:34618). Jul 15 23:12:27.338360 systemd-logind[1483]: Removed session 9. Jul 15 23:12:27.383433 sshd[3673]: Accepted publickey for core from 10.0.0.1 port 34618 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:27.385097 sshd-session[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:27.388806 systemd-logind[1483]: New session 10 of user core. Jul 15 23:12:27.403158 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 23:12:27.515873 sshd[3675]: Connection closed by 10.0.0.1 port 34618 Jul 15 23:12:27.515043 sshd-session[3673]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:27.518204 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:34618.service: Deactivated successfully. Jul 15 23:12:27.519743 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 23:12:27.520472 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Jul 15 23:12:27.521686 systemd-logind[1483]: Removed session 10. 
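The SSH traffic that fills the remainder of the log follows one repeating pattern: each incoming connection from 10.0.0.1 spawns a per-connection unit named sshd@N-<local addr:port>-<remote addr:port>.service, pam_systemd opens a matching session-N.scope for user core, and both are torn down when the client disconnects. Flatcar ships sshd socket-activated, so the units behind this naming look roughly like the following abridged sketch, assumed from the stock configuration rather than read off this host:

    # sshd.socket (abridged sketch)
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service (abridged sketch)
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket

With Accept=yes, systemd forks one sshd -i instance per TCP connection, which is why every session below gets its own numbered per-connection service instead of sharing a long-running daemon.
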
Jul 15 23:12:32.530672 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:34208.service - OpenSSH per-connection server daemon (10.0.0.1:34208). Jul 15 23:12:32.599638 sshd[3710]: Accepted publickey for core from 10.0.0.1 port 34208 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:32.601007 sshd-session[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:32.604755 systemd-logind[1483]: New session 11 of user core. Jul 15 23:12:32.616233 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 23:12:32.733466 sshd[3712]: Connection closed by 10.0.0.1 port 34208 Jul 15 23:12:32.733406 sshd-session[3710]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:32.748375 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:34208.service: Deactivated successfully. Jul 15 23:12:32.749917 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 23:12:32.751897 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Jul 15 23:12:32.753261 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:34220.service - OpenSSH per-connection server daemon (10.0.0.1:34220). Jul 15 23:12:32.754336 systemd-logind[1483]: Removed session 11. Jul 15 23:12:32.816527 sshd[3725]: Accepted publickey for core from 10.0.0.1 port 34220 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:32.818243 sshd-session[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:32.823284 systemd-logind[1483]: New session 12 of user core. Jul 15 23:12:32.832224 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 23:12:33.050511 sshd[3727]: Connection closed by 10.0.0.1 port 34220 Jul 15 23:12:33.051092 sshd-session[3725]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:33.061225 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:34220.service: Deactivated successfully. Jul 15 23:12:33.062945 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 23:12:33.063717 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Jul 15 23:12:33.066439 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:34230.service - OpenSSH per-connection server daemon (10.0.0.1:34230). Jul 15 23:12:33.067231 systemd-logind[1483]: Removed session 12. Jul 15 23:12:33.127946 sshd[3739]: Accepted publickey for core from 10.0.0.1 port 34230 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:33.129327 sshd-session[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:33.134443 systemd-logind[1483]: New session 13 of user core. Jul 15 23:12:33.151244 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 23:12:33.768054 sshd[3741]: Connection closed by 10.0.0.1 port 34230 Jul 15 23:12:33.768298 sshd-session[3739]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:33.782454 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:34230.service: Deactivated successfully. Jul 15 23:12:33.785457 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 23:12:33.787912 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Jul 15 23:12:33.793510 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:34238.service - OpenSSH per-connection server daemon (10.0.0.1:34238). Jul 15 23:12:33.795040 systemd-logind[1483]: Removed session 13. 
Jul 15 23:12:33.850956 sshd[3760]: Accepted publickey for core from 10.0.0.1 port 34238 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:33.852303 sshd-session[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:33.857452 systemd-logind[1483]: New session 14 of user core. Jul 15 23:12:33.869223 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 23:12:34.089673 sshd[3762]: Connection closed by 10.0.0.1 port 34238 Jul 15 23:12:34.090761 sshd-session[3760]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:34.103401 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:34238.service: Deactivated successfully. Jul 15 23:12:34.106217 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 23:12:34.106985 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Jul 15 23:12:34.111936 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:34242.service - OpenSSH per-connection server daemon (10.0.0.1:34242). Jul 15 23:12:34.112455 systemd-logind[1483]: Removed session 14. Jul 15 23:12:34.169532 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 34242 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:34.171142 sshd-session[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:34.175016 systemd-logind[1483]: New session 15 of user core. Jul 15 23:12:34.182175 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 23:12:34.300517 sshd[3775]: Connection closed by 10.0.0.1 port 34242 Jul 15 23:12:34.300003 sshd-session[3773]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:34.304594 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:34242.service: Deactivated successfully. Jul 15 23:12:34.307823 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 23:12:34.308948 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Jul 15 23:12:34.310547 systemd-logind[1483]: Removed session 15. Jul 15 23:12:39.323004 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:34246.service - OpenSSH per-connection server daemon (10.0.0.1:34246). Jul 15 23:12:39.377062 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 34246 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:39.378283 sshd-session[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:39.382784 systemd-logind[1483]: New session 16 of user core. Jul 15 23:12:39.397196 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 23:12:39.506729 sshd[3813]: Connection closed by 10.0.0.1 port 34246 Jul 15 23:12:39.507210 sshd-session[3811]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:39.510704 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:34246.service: Deactivated successfully. Jul 15 23:12:39.512336 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 23:12:39.512986 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Jul 15 23:12:39.514434 systemd-logind[1483]: Removed session 16. Jul 15 23:12:44.522238 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:38384.service - OpenSSH per-connection server daemon (10.0.0.1:38384). 
Jul 15 23:12:44.566400 sshd[3848]: Accepted publickey for core from 10.0.0.1 port 38384 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:44.567723 sshd-session[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:44.572041 systemd-logind[1483]: New session 17 of user core. Jul 15 23:12:44.583254 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 23:12:44.692617 sshd[3850]: Connection closed by 10.0.0.1 port 38384 Jul 15 23:12:44.693318 sshd-session[3848]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:44.695942 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:38384.service: Deactivated successfully. Jul 15 23:12:44.697645 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 23:12:44.698941 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Jul 15 23:12:44.700555 systemd-logind[1483]: Removed session 17. Jul 15 23:12:49.704458 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:38400.service - OpenSSH per-connection server daemon (10.0.0.1:38400). Jul 15 23:12:49.748621 sshd[3886]: Accepted publickey for core from 10.0.0.1 port 38400 ssh2: RSA SHA256:kQgIj/u2uRws2541HrBKcbKigurdZKttprPWjhBFFCE Jul 15 23:12:49.749783 sshd-session[3886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:49.755347 systemd-logind[1483]: New session 18 of user core. Jul 15 23:12:49.765200 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 23:12:49.885081 sshd[3888]: Connection closed by 10.0.0.1 port 38400 Jul 15 23:12:49.885580 sshd-session[3886]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:49.890307 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:38400.service: Deactivated successfully. Jul 15 23:12:49.892221 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 23:12:49.893858 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Jul 15 23:12:49.896159 systemd-logind[1483]: Removed session 18.