May 9 00:13:54.906550 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 9 00:13:54.906572 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu May 8 22:43:24 -00 2025 May 9 00:13:54.906582 kernel: KASLR enabled May 9 00:13:54.906587 kernel: efi: EFI v2.7 by EDK II May 9 00:13:54.906593 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 May 9 00:13:54.906598 kernel: random: crng init done May 9 00:13:54.906605 kernel: ACPI: Early table checksum verification disabled May 9 00:13:54.906611 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) May 9 00:13:54.906617 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) May 9 00:13:54.906625 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:13:54.906631 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:13:54.906636 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:13:54.906642 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:13:54.906648 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:13:54.906656 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:13:54.906663 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:13:54.906670 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:13:54.906676 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:13:54.906682 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 9 00:13:54.906689 kernel: NUMA: Failed to initialise from firmware May 9 00:13:54.906695 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 9 00:13:54.906701 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] May 9 00:13:54.906707 kernel: Zone ranges: May 9 00:13:54.906714 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 9 00:13:54.906720 kernel: DMA32 empty May 9 00:13:54.906727 kernel: Normal empty May 9 00:13:54.906733 kernel: Movable zone start for each node May 9 00:13:54.906739 kernel: Early memory node ranges May 9 00:13:54.906746 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 9 00:13:54.906752 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 9 00:13:54.906758 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 9 00:13:54.906764 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 9 00:13:54.906770 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 9 00:13:54.906777 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 9 00:13:54.906783 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 9 00:13:54.906789 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 9 00:13:54.906795 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 9 00:13:54.906803 kernel: psci: probing for conduit method from ACPI. May 9 00:13:54.906809 kernel: psci: PSCIv1.1 detected in firmware. 
May 9 00:13:54.906815 kernel: psci: Using standard PSCI v0.2 function IDs May 9 00:13:54.906824 kernel: psci: Trusted OS migration not required May 9 00:13:54.906831 kernel: psci: SMC Calling Convention v1.1 May 9 00:13:54.906838 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 9 00:13:54.906918 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 9 00:13:54.906928 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 9 00:13:54.906935 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 9 00:13:54.906941 kernel: Detected PIPT I-cache on CPU0 May 9 00:13:54.906948 kernel: CPU features: detected: GIC system register CPU interface May 9 00:13:54.906955 kernel: CPU features: detected: Hardware dirty bit management May 9 00:13:54.906961 kernel: CPU features: detected: Spectre-v4 May 9 00:13:54.906968 kernel: CPU features: detected: Spectre-BHB May 9 00:13:54.906974 kernel: CPU features: kernel page table isolation forced ON by KASLR May 9 00:13:54.906981 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 9 00:13:54.906990 kernel: CPU features: detected: ARM erratum 1418040 May 9 00:13:54.906997 kernel: CPU features: detected: SSBS not fully self-synchronizing May 9 00:13:54.907004 kernel: alternatives: applying boot alternatives May 9 00:13:54.907011 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8e29bd932c31237847976018676f554a4d09fa105e08b3bc01bcbb09708aefd3 May 9 00:13:54.907018 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 00:13:54.907025 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 00:13:54.907032 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 00:13:54.907039 kernel: Fallback order for Node 0: 0 May 9 00:13:54.907045 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 9 00:13:54.907052 kernel: Policy zone: DMA May 9 00:13:54.907059 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 00:13:54.907066 kernel: software IO TLB: area num 4. May 9 00:13:54.907073 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 9 00:13:54.907080 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved) May 9 00:13:54.907087 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 9 00:13:54.907093 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 00:13:54.907101 kernel: rcu: RCU event tracing is enabled. May 9 00:13:54.907107 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 9 00:13:54.907114 kernel: Trampoline variant of Tasks RCU enabled. May 9 00:13:54.907121 kernel: Tracing variant of Tasks RCU enabled. May 9 00:13:54.907127 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 9 00:13:54.907134 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 9 00:13:54.907141 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 9 00:13:54.907148 kernel: GICv3: 256 SPIs implemented May 9 00:13:54.907155 kernel: GICv3: 0 Extended SPIs implemented May 9 00:13:54.907161 kernel: Root IRQ handler: gic_handle_irq May 9 00:13:54.907168 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 9 00:13:54.907175 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 9 00:13:54.907182 kernel: ITS [mem 0x08080000-0x0809ffff] May 9 00:13:54.907188 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 9 00:13:54.907195 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 9 00:13:54.907202 kernel: GICv3: using LPI property table @0x00000000400f0000 May 9 00:13:54.907209 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 9 00:13:54.907215 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 00:13:54.907223 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 00:13:54.907238 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 9 00:13:54.907245 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 9 00:13:54.907252 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 9 00:13:54.907259 kernel: arm-pv: using stolen time PV May 9 00:13:54.907266 kernel: Console: colour dummy device 80x25 May 9 00:13:54.907272 kernel: ACPI: Core revision 20230628 May 9 00:13:54.907280 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 9 00:13:54.907286 kernel: pid_max: default: 32768 minimum: 301 May 9 00:13:54.907293 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 00:13:54.907302 kernel: landlock: Up and running. May 9 00:13:54.907309 kernel: SELinux: Initializing. May 9 00:13:54.907315 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:13:54.907322 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:13:54.907329 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 9 00:13:54.907336 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:13:54.907343 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:13:54.907350 kernel: rcu: Hierarchical SRCU implementation. May 9 00:13:54.907357 kernel: rcu: Max phase no-delay instances is 400. May 9 00:13:54.907364 kernel: Platform MSI: ITS@0x8080000 domain created May 9 00:13:54.907371 kernel: PCI/MSI: ITS@0x8080000 domain created May 9 00:13:54.907378 kernel: Remapping and enabling EFI services. May 9 00:13:54.907385 kernel: smp: Bringing up secondary CPUs ... 
May 9 00:13:54.907391 kernel: Detected PIPT I-cache on CPU1 May 9 00:13:54.907398 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 9 00:13:54.907405 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 9 00:13:54.907412 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 00:13:54.907419 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 9 00:13:54.907426 kernel: Detected PIPT I-cache on CPU2 May 9 00:13:54.907434 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 9 00:13:54.907441 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 9 00:13:54.907452 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 00:13:54.907460 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 9 00:13:54.907467 kernel: Detected PIPT I-cache on CPU3 May 9 00:13:54.907475 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 9 00:13:54.907482 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 9 00:13:54.907489 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 00:13:54.907496 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 9 00:13:54.907503 kernel: smp: Brought up 1 node, 4 CPUs May 9 00:13:54.907511 kernel: SMP: Total of 4 processors activated. May 9 00:13:54.907519 kernel: CPU features: detected: 32-bit EL0 Support May 9 00:13:54.907526 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 9 00:13:54.907533 kernel: CPU features: detected: Common not Private translations May 9 00:13:54.907541 kernel: CPU features: detected: CRC32 instructions May 9 00:13:54.907548 kernel: CPU features: detected: Enhanced Virtualization Traps May 9 00:13:54.907556 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 9 00:13:54.907563 kernel: CPU features: detected: LSE atomic instructions May 9 00:13:54.907571 kernel: CPU features: detected: Privileged Access Never May 9 00:13:54.907578 kernel: CPU features: detected: RAS Extension Support May 9 00:13:54.907585 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 9 00:13:54.907592 kernel: CPU: All CPU(s) started at EL1 May 9 00:13:54.907599 kernel: alternatives: applying system-wide alternatives May 9 00:13:54.907606 kernel: devtmpfs: initialized May 9 00:13:54.907613 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 00:13:54.907621 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 9 00:13:54.907629 kernel: pinctrl core: initialized pinctrl subsystem May 9 00:13:54.907636 kernel: SMBIOS 3.0.0 present. 
May 9 00:13:54.907644 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 May 9 00:13:54.907651 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 00:13:54.907658 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 9 00:13:54.907665 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 9 00:13:54.907673 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 9 00:13:54.907680 kernel: audit: initializing netlink subsys (disabled) May 9 00:13:54.907687 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1 May 9 00:13:54.907695 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 00:13:54.907703 kernel: cpuidle: using governor menu May 9 00:13:54.907710 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 9 00:13:54.907717 kernel: ASID allocator initialised with 32768 entries May 9 00:13:54.907724 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 00:13:54.907732 kernel: Serial: AMBA PL011 UART driver May 9 00:13:54.907739 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 9 00:13:54.907746 kernel: Modules: 0 pages in range for non-PLT usage May 9 00:13:54.907753 kernel: Modules: 509008 pages in range for PLT usage May 9 00:13:54.907761 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 00:13:54.907769 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 9 00:13:54.907776 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 9 00:13:54.907783 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 9 00:13:54.907790 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 00:13:54.907797 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 9 00:13:54.907805 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 9 00:13:54.907812 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 9 00:13:54.907819 kernel: ACPI: Added _OSI(Module Device) May 9 00:13:54.907827 kernel: ACPI: Added _OSI(Processor Device) May 9 00:13:54.907834 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 00:13:54.907841 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 00:13:54.907855 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 9 00:13:54.907863 kernel: ACPI: Interpreter enabled May 9 00:13:54.907870 kernel: ACPI: Using GIC for interrupt routing May 9 00:13:54.907877 kernel: ACPI: MCFG table detected, 1 entries May 9 00:13:54.907884 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 9 00:13:54.907891 kernel: printk: console [ttyAMA0] enabled May 9 00:13:54.907901 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 9 00:13:54.908037 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 9 00:13:54.908110 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 9 00:13:54.908177 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 9 00:13:54.908252 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 9 00:13:54.908318 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 9 00:13:54.908328 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 9 00:13:54.908338 kernel: PCI host bridge to bus 
0000:00 May 9 00:13:54.908407 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 9 00:13:54.908465 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 9 00:13:54.908522 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 9 00:13:54.908577 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 9 00:13:54.908658 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 9 00:13:54.908732 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 9 00:13:54.908800 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 9 00:13:54.908896 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 9 00:13:54.908964 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 9 00:13:54.909028 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 9 00:13:54.909091 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 9 00:13:54.909155 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 9 00:13:54.909213 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 9 00:13:54.909282 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 9 00:13:54.909340 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 9 00:13:54.909349 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 9 00:13:54.909356 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 9 00:13:54.909364 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 9 00:13:54.909371 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 9 00:13:54.909378 kernel: iommu: Default domain type: Translated May 9 00:13:54.909385 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 9 00:13:54.909395 kernel: efivars: Registered efivars operations May 9 00:13:54.909402 kernel: vgaarb: loaded May 9 00:13:54.909409 kernel: clocksource: Switched to clocksource arch_sys_counter May 9 00:13:54.909416 kernel: VFS: Disk quotas dquot_6.6.0 May 9 00:13:54.909424 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 00:13:54.909431 kernel: pnp: PnP ACPI init May 9 00:13:54.909499 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 9 00:13:54.909509 kernel: pnp: PnP ACPI: found 1 devices May 9 00:13:54.909518 kernel: NET: Registered PF_INET protocol family May 9 00:13:54.909526 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 9 00:13:54.909533 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 9 00:13:54.909541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 9 00:13:54.909548 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 9 00:13:54.909555 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 9 00:13:54.909562 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 9 00:13:54.909570 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:13:54.909577 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:13:54.909585 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 9 00:13:54.909592 kernel: PCI: CLS 0 bytes, default 64 May 9 00:13:54.909599 kernel: kvm [1]: HYP mode not available May 9 00:13:54.909606 kernel: Initialise system trusted keyrings May 9 
00:13:54.909614 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 9 00:13:54.909621 kernel: Key type asymmetric registered May 9 00:13:54.909628 kernel: Asymmetric key parser 'x509' registered May 9 00:13:54.909635 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 9 00:13:54.909642 kernel: io scheduler mq-deadline registered May 9 00:13:54.909650 kernel: io scheduler kyber registered May 9 00:13:54.909657 kernel: io scheduler bfq registered May 9 00:13:54.909665 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 9 00:13:54.909672 kernel: ACPI: button: Power Button [PWRB] May 9 00:13:54.909679 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 9 00:13:54.909744 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 9 00:13:54.909754 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 00:13:54.909761 kernel: thunder_xcv, ver 1.0 May 9 00:13:54.909768 kernel: thunder_bgx, ver 1.0 May 9 00:13:54.909777 kernel: nicpf, ver 1.0 May 9 00:13:54.909784 kernel: nicvf, ver 1.0 May 9 00:13:54.909875 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 9 00:13:54.909943 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T00:13:54 UTC (1746749634) May 9 00:13:54.909953 kernel: hid: raw HID events driver (C) Jiri Kosina May 9 00:13:54.909961 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 9 00:13:54.909968 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 9 00:13:54.909976 kernel: watchdog: Hard watchdog permanently disabled May 9 00:13:54.909986 kernel: NET: Registered PF_INET6 protocol family May 9 00:13:54.909993 kernel: Segment Routing with IPv6 May 9 00:13:54.910000 kernel: In-situ OAM (IOAM) with IPv6 May 9 00:13:54.910008 kernel: NET: Registered PF_PACKET protocol family May 9 00:13:54.910015 kernel: Key type dns_resolver registered May 9 00:13:54.910022 kernel: registered taskstats version 1 May 9 00:13:54.910029 kernel: Loading compiled-in X.509 certificates May 9 00:13:54.910037 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 7944e0e0bec5e8cad487856da19569eba337cea0' May 9 00:13:54.910044 kernel: Key type .fscrypt registered May 9 00:13:54.910052 kernel: Key type fscrypt-provisioning registered May 9 00:13:54.910059 kernel: ima: No TPM chip found, activating TPM-bypass! May 9 00:13:54.910067 kernel: ima: Allocated hash algorithm: sha1 May 9 00:13:54.910074 kernel: ima: No architecture policies found May 9 00:13:54.910081 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 9 00:13:54.910088 kernel: clk: Disabling unused clocks May 9 00:13:54.910095 kernel: Freeing unused kernel memory: 39424K May 9 00:13:54.910102 kernel: Run /init as init process May 9 00:13:54.910109 kernel: with arguments: May 9 00:13:54.910117 kernel: /init May 9 00:13:54.910124 kernel: with environment: May 9 00:13:54.910132 kernel: HOME=/ May 9 00:13:54.910139 kernel: TERM=linux May 9 00:13:54.910146 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 00:13:54.910155 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:13:54.910164 systemd[1]: Detected virtualization kvm. 
May 9 00:13:54.910172 systemd[1]: Detected architecture arm64. May 9 00:13:54.910181 systemd[1]: Running in initrd. May 9 00:13:54.910188 systemd[1]: No hostname configured, using default hostname. May 9 00:13:54.910196 systemd[1]: Hostname set to . May 9 00:13:54.910204 systemd[1]: Initializing machine ID from VM UUID. May 9 00:13:54.910211 systemd[1]: Queued start job for default target initrd.target. May 9 00:13:54.910219 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:13:54.910232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:13:54.910241 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 9 00:13:54.910251 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 00:13:54.910259 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 00:13:54.910267 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 00:13:54.910276 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 00:13:54.910284 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 00:13:54.910292 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:13:54.910301 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:13:54.910309 systemd[1]: Reached target paths.target - Path Units. May 9 00:13:54.910317 systemd[1]: Reached target slices.target - Slice Units. May 9 00:13:54.910324 systemd[1]: Reached target swap.target - Swaps. May 9 00:13:54.910332 systemd[1]: Reached target timers.target - Timer Units. May 9 00:13:54.910340 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:13:54.910347 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:13:54.910355 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 00:13:54.910363 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 9 00:13:54.910372 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:13:54.910380 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:13:54.910388 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:13:54.910396 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:13:54.910403 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 9 00:13:54.910411 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:13:54.910419 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 00:13:54.910426 systemd[1]: Starting systemd-fsck-usr.service... May 9 00:13:54.910434 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:13:54.910444 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:13:54.910451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:13:54.910459 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 00:13:54.910467 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 9 00:13:54.910474 systemd[1]: Finished systemd-fsck-usr.service. May 9 00:13:54.910483 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:13:54.910510 systemd-journald[238]: Collecting audit messages is disabled. May 9 00:13:54.910529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:13:54.910539 systemd-journald[238]: Journal started May 9 00:13:54.910558 systemd-journald[238]: Runtime Journal (/run/log/journal/db0255f44b664e5993147f748c1ccbdb) is 5.9M, max 47.3M, 41.4M free. May 9 00:13:54.901569 systemd-modules-load[239]: Inserted module 'overlay' May 9 00:13:54.913864 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 00:13:54.917208 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:13:54.917233 kernel: Bridge firewalling registered May 9 00:13:54.917742 systemd-modules-load[239]: Inserted module 'br_netfilter' May 9 00:13:54.919543 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:13:54.920842 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:13:54.922082 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:13:54.931063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:13:54.933074 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:13:54.935423 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 00:13:54.939687 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:13:54.945048 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:13:54.948567 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 00:13:54.951148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:13:54.952520 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:13:54.957743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:13:54.961897 dracut-cmdline[273]: dracut-dracut-053 May 9 00:13:54.964550 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8e29bd932c31237847976018676f554a4d09fa105e08b3bc01bcbb09708aefd3 May 9 00:13:54.992635 systemd-resolved[280]: Positive Trust Anchors: May 9 00:13:54.992652 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:13:54.992682 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:13:54.997490 systemd-resolved[280]: Defaulting to hostname 'linux'. May 9 00:13:54.998480 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:13:55.002441 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:13:55.034882 kernel: SCSI subsystem initialized May 9 00:13:55.039879 kernel: Loading iSCSI transport class v2.0-870. May 9 00:13:55.046874 kernel: iscsi: registered transport (tcp) May 9 00:13:55.059865 kernel: iscsi: registered transport (qla4xxx) May 9 00:13:55.059881 kernel: QLogic iSCSI HBA Driver May 9 00:13:55.101916 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 00:13:55.112992 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 00:13:55.129309 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 9 00:13:55.129355 kernel: device-mapper: uevent: version 1.0.3 May 9 00:13:55.130978 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 00:13:55.177902 kernel: raid6: neonx8 gen() 15786 MB/s May 9 00:13:55.194872 kernel: raid6: neonx4 gen() 15640 MB/s May 9 00:13:55.211880 kernel: raid6: neonx2 gen() 13234 MB/s May 9 00:13:55.228879 kernel: raid6: neonx1 gen() 10467 MB/s May 9 00:13:55.245885 kernel: raid6: int64x8 gen() 6956 MB/s May 9 00:13:55.262879 kernel: raid6: int64x4 gen() 7343 MB/s May 9 00:13:55.279883 kernel: raid6: int64x2 gen() 6123 MB/s May 9 00:13:55.296965 kernel: raid6: int64x1 gen() 5052 MB/s May 9 00:13:55.296990 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s May 9 00:13:55.314965 kernel: raid6: .... xor() 11920 MB/s, rmw enabled May 9 00:13:55.314984 kernel: raid6: using neon recovery algorithm May 9 00:13:55.320339 kernel: xor: measuring software checksum speed May 9 00:13:55.320377 kernel: 8regs : 19735 MB/sec May 9 00:13:55.320997 kernel: 32regs : 19636 MB/sec May 9 00:13:55.322252 kernel: arm64_neon : 26892 MB/sec May 9 00:13:55.322265 kernel: xor: using function: arm64_neon (26892 MB/sec) May 9 00:13:55.371878 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 00:13:55.382308 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 00:13:55.395000 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:13:55.406411 systemd-udevd[459]: Using default interface naming scheme 'v255'. May 9 00:13:55.409528 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:13:55.428283 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 9 00:13:55.440443 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation May 9 00:13:55.472785 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 9 00:13:55.485077 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:13:55.524811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:13:55.535375 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 00:13:55.545366 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 00:13:55.547175 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:13:55.549058 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:13:55.550411 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:13:55.560004 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 00:13:55.569778 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 00:13:55.577895 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 9 00:13:55.582258 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:13:55.585589 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 9 00:13:55.582333 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:13:55.585489 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:13:55.586612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:13:55.595672 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 00:13:55.595697 kernel: GPT:9289727 != 19775487 May 9 00:13:55.595706 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 00:13:55.595715 kernel: GPT:9289727 != 19775487 May 9 00:13:55.595724 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 00:13:55.595733 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:13:55.586674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:13:55.588958 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:13:55.604028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:13:55.611885 kernel: BTRFS: device fsid 9a510efc-c158-4845-bfb8-279f8b20070f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (514) May 9 00:13:55.617866 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (520) May 9 00:13:55.619344 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 9 00:13:55.621859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:13:55.629568 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 9 00:13:55.633554 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 9 00:13:55.634865 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 9 00:13:55.641554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:13:55.653030 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 00:13:55.655394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:13:55.660146 disk-uuid[551]: Primary Header is updated. 
May 9 00:13:55.660146 disk-uuid[551]: Secondary Entries is updated. May 9 00:13:55.660146 disk-uuid[551]: Secondary Header is updated. May 9 00:13:55.665874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:13:55.684355 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:13:56.674814 disk-uuid[552]: The operation has completed successfully. May 9 00:13:56.675861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:13:56.702672 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 00:13:56.702768 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 00:13:56.719027 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 00:13:56.721738 sh[576]: Success May 9 00:13:56.734885 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 9 00:13:56.762843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 00:13:56.783198 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 00:13:56.785302 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 9 00:13:56.794312 kernel: BTRFS info (device dm-0): first mount of filesystem 9a510efc-c158-4845-bfb8-279f8b20070f May 9 00:13:56.794348 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 9 00:13:56.794359 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 00:13:56.796135 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 00:13:56.796155 kernel: BTRFS info (device dm-0): using free space tree May 9 00:13:56.799928 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 00:13:56.801178 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 00:13:56.810972 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 00:13:56.813143 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 00:13:56.819507 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c May 9 00:13:56.819550 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 00:13:56.819566 kernel: BTRFS info (device vda6): using free space tree May 9 00:13:56.821871 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:13:56.829734 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 00:13:56.831060 kernel: BTRFS info (device vda6): last unmount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c May 9 00:13:56.836182 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 00:13:56.844074 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 00:13:56.907894 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:13:56.918002 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:13:56.939743 systemd-networkd[766]: lo: Link UP May 9 00:13:56.939756 systemd-networkd[766]: lo: Gained carrier May 9 00:13:56.940666 systemd-networkd[766]: Enumeration completed May 9 00:13:56.941133 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 9 00:13:56.943377 ignition[667]: Ignition 2.19.0 May 9 00:13:56.942444 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:13:56.943384 ignition[667]: Stage: fetch-offline May 9 00:13:56.942447 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:13:56.943417 ignition[667]: no configs at "/usr/lib/ignition/base.d" May 9 00:13:56.943907 systemd[1]: Reached target network.target - Network. May 9 00:13:56.943425 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:13:56.945251 systemd-networkd[766]: eth0: Link UP May 9 00:13:56.943624 ignition[667]: parsed url from cmdline: "" May 9 00:13:56.945255 systemd-networkd[766]: eth0: Gained carrier May 9 00:13:56.943627 ignition[667]: no config URL provided May 9 00:13:56.945261 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:13:56.943632 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" May 9 00:13:56.943638 ignition[667]: no config at "/usr/lib/ignition/user.ign" May 9 00:13:56.943659 ignition[667]: op(1): [started] loading QEMU firmware config module May 9 00:13:56.943663 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" May 9 00:13:56.953466 ignition[667]: op(1): [finished] loading QEMU firmware config module May 9 00:13:56.962894 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:13:56.980168 ignition[667]: parsing config with SHA512: e416d57bd3b8db8bcb63022c8ac3ef11c880414cacb8210abaedfbe0e936a009a14b888bf14a3c34202e5043a78c0172f72df2be6f0b0161d8c1ea71d2a2f255 May 9 00:13:56.984011 unknown[667]: fetched base config from "system" May 9 00:13:56.984019 unknown[667]: fetched user config from "qemu" May 9 00:13:56.985556 ignition[667]: fetch-offline: fetch-offline passed May 9 00:13:56.986189 ignition[667]: Ignition finished successfully May 9 00:13:56.987598 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:13:56.988992 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 9 00:13:56.997002 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 9 00:13:57.006528 ignition[772]: Ignition 2.19.0 May 9 00:13:57.006537 ignition[772]: Stage: kargs May 9 00:13:57.006685 ignition[772]: no configs at "/usr/lib/ignition/base.d" May 9 00:13:57.006694 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:13:57.007531 ignition[772]: kargs: kargs passed May 9 00:13:57.010424 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 00:13:57.007570 ignition[772]: Ignition finished successfully May 9 00:13:57.019990 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 00:13:57.029566 ignition[779]: Ignition 2.19.0 May 9 00:13:57.029576 ignition[779]: Stage: disks May 9 00:13:57.029724 ignition[779]: no configs at "/usr/lib/ignition/base.d" May 9 00:13:57.032213 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 00:13:57.029733 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:13:57.033770 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
May 9 00:13:57.030545 ignition[779]: disks: disks passed May 9 00:13:57.035469 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 00:13:57.030587 ignition[779]: Ignition finished successfully May 9 00:13:57.037441 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:13:57.039236 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:13:57.040675 systemd[1]: Reached target basic.target - Basic System. May 9 00:13:57.056015 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 00:13:57.065813 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 00:13:57.072701 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 00:13:57.096972 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 00:13:57.136863 kernel: EXT4-fs (vda9): mounted filesystem 1a8c7c5d-87ec-4bc4-aa01-1ebc1d3c20e7 r/w with ordered data mode. Quota mode: none. May 9 00:13:57.137669 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 00:13:57.138935 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 00:13:57.153976 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:13:57.155763 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 00:13:57.156966 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 00:13:57.157060 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 00:13:57.164696 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799) May 9 00:13:57.157086 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:13:57.169558 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c May 9 00:13:57.169576 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 00:13:57.169586 kernel: BTRFS info (device vda6): using free space tree May 9 00:13:57.169596 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:13:57.161335 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 00:13:57.163347 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 00:13:57.172691 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:13:57.205947 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory May 9 00:13:57.209844 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory May 9 00:13:57.213894 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory May 9 00:13:57.217691 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory May 9 00:13:57.284186 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 00:13:57.292954 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 00:13:57.295214 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 00:13:57.299858 kernel: BTRFS info (device vda6): last unmount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c May 9 00:13:57.314488 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 9 00:13:57.319087 ignition[914]: INFO : Ignition 2.19.0 May 9 00:13:57.319087 ignition[914]: INFO : Stage: mount May 9 00:13:57.320604 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:13:57.320604 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:13:57.320604 ignition[914]: INFO : mount: mount passed May 9 00:13:57.320604 ignition[914]: INFO : Ignition finished successfully May 9 00:13:57.323878 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 00:13:57.336973 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 00:13:57.793299 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 00:13:57.804057 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:13:57.810675 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) May 9 00:13:57.810705 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c May 9 00:13:57.810716 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 00:13:57.812406 kernel: BTRFS info (device vda6): using free space tree May 9 00:13:57.814870 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:13:57.815711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:13:57.831948 ignition[944]: INFO : Ignition 2.19.0 May 9 00:13:57.831948 ignition[944]: INFO : Stage: files May 9 00:13:57.833754 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:13:57.833754 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:13:57.833754 ignition[944]: DEBUG : files: compiled without relabeling support, skipping May 9 00:13:57.837432 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 00:13:57.837432 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 00:13:57.837432 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 00:13:57.837432 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 00:13:57.837432 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 00:13:57.837432 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 9 00:13:57.836154 unknown[944]: wrote ssh authorized keys file for user: core May 9 00:13:57.846996 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 9 00:13:57.879811 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 9 00:13:58.049162 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 9 00:13:58.049162 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): 
[finished] writing file "/sysroot/home/core/nginx.yaml" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 9 00:13:58.053040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 9 00:13:58.377335 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 9 00:13:58.442057 systemd-networkd[766]: eth0: Gained IPv6LL May 9 00:13:58.811425 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 9 00:13:58.811425 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 9 00:13:58.815037 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 00:13:58.815037 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 00:13:58.815037 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 9 00:13:58.815037 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 9 00:13:58.815037 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:13:58.815037 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:13:58.815037 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 9 00:13:58.815037 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 9 00:13:58.835181 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 9 
00:13:58.838327 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:13:58.839967 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 9 00:13:58.839967 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 9 00:13:58.839967 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 9 00:13:58.839967 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 00:13:58.839967 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 00:13:58.839967 ignition[944]: INFO : files: files passed May 9 00:13:58.839967 ignition[944]: INFO : Ignition finished successfully May 9 00:13:58.840429 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 00:13:58.848227 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 00:13:58.850092 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 00:13:58.853916 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 00:13:58.854000 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 9 00:13:58.858377 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory May 9 00:13:58.860111 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:13:58.860111 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 00:13:58.863412 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:13:58.862729 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:13:58.864938 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 00:13:58.874214 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 00:13:58.892065 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 00:13:58.892164 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 00:13:58.894259 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 00:13:58.896059 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 00:13:58.897816 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 00:13:58.898516 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 00:13:58.913816 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:13:58.922038 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 00:13:58.930520 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 00:13:58.931725 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:13:58.933753 systemd[1]: Stopped target timers.target - Timer Units. May 9 00:13:58.935521 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 9 00:13:58.935630 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:13:58.938038 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 00:13:58.939070 systemd[1]: Stopped target basic.target - Basic System. May 9 00:13:58.940795 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 00:13:58.942600 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:13:58.944376 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 00:13:58.946285 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 00:13:58.948143 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:13:58.950107 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 00:13:58.951842 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 00:13:58.953803 systemd[1]: Stopped target swap.target - Swaps. May 9 00:13:58.955366 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 00:13:58.955482 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 00:13:58.957893 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 00:13:58.959795 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:13:58.961713 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 00:13:58.965899 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:13:58.967145 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 00:13:58.967264 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 00:13:58.970004 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 00:13:58.970110 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:13:58.972078 systemd[1]: Stopped target paths.target - Path Units. May 9 00:13:58.973645 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 00:13:58.977916 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:13:58.979145 systemd[1]: Stopped target slices.target - Slice Units. May 9 00:13:58.981273 systemd[1]: Stopped target sockets.target - Socket Units. May 9 00:13:58.982793 systemd[1]: iscsid.socket: Deactivated successfully. May 9 00:13:58.982892 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:13:58.984454 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 00:13:58.984531 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:13:58.986058 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 00:13:58.986163 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:13:58.987936 systemd[1]: ignition-files.service: Deactivated successfully. May 9 00:13:58.988034 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 00:13:59.001005 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 00:13:59.002494 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 00:13:59.003510 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 00:13:59.003629 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 9 00:13:59.005528 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 00:13:59.005625 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:13:59.011197 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 00:13:59.012403 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 00:13:59.015830 ignition[999]: INFO : Ignition 2.19.0 May 9 00:13:59.015830 ignition[999]: INFO : Stage: umount May 9 00:13:59.015830 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:13:59.015830 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:13:59.015830 ignition[999]: INFO : umount: umount passed May 9 00:13:59.015830 ignition[999]: INFO : Ignition finished successfully May 9 00:13:59.016172 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 00:13:59.016269 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 00:13:59.020695 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 00:13:59.021121 systemd[1]: Stopped target network.target - Network. May 9 00:13:59.024279 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 00:13:59.024342 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 00:13:59.026317 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 00:13:59.026366 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 00:13:59.030080 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 00:13:59.030127 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 00:13:59.031715 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 00:13:59.031759 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 00:13:59.034990 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 00:13:59.036912 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 00:13:59.045070 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 00:13:59.046883 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 00:13:59.049136 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 00:13:59.049183 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:13:59.051314 systemd-networkd[766]: eth0: DHCPv6 lease lost May 9 00:13:59.053530 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 00:13:59.053625 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 00:13:59.055121 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 00:13:59.055153 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 00:13:59.073993 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 00:13:59.075013 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 00:13:59.075089 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:13:59.077292 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:13:59.077337 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:13:59.078363 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 00:13:59.078405 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 9 00:13:59.081689 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:13:59.094049 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 00:13:59.095148 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 00:13:59.096731 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 00:13:59.096815 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 00:13:59.098383 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 00:13:59.098454 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 00:13:59.103171 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 00:13:59.103320 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:13:59.105457 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 00:13:59.105493 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 00:13:59.107260 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 00:13:59.107290 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:13:59.108996 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 00:13:59.109039 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 00:13:59.111657 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 00:13:59.111700 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 00:13:59.114429 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:13:59.114472 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:13:59.126992 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 00:13:59.128039 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 00:13:59.128099 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:13:59.130254 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 9 00:13:59.130302 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:13:59.132327 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 00:13:59.132370 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:13:59.134583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:13:59.134629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:13:59.136909 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 00:13:59.137002 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 00:13:59.139382 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 00:13:59.141734 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 00:13:59.151681 systemd[1]: Switching root. May 9 00:13:59.177981 systemd-journald[238]: Journal stopped May 9 00:13:59.888533 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
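Every entry in this capture carries a "May 9 HH:MM:SS.ffffff" console timestamp, so intervals such as the gap between "Switching root." (00:13:59.151681) and "Journal stopped" (00:13:59.177981) can be read straight off the log. A minimal sketch for doing that with the standard library; the year is an assumption, since the console prefix omits it:

    from datetime import datetime

    def parse_ts(prefix, year=2025):
        # prefix looks like "May 9 00:13:59.151681"; the year is assumed, not logged
        return datetime.strptime(f"{year} {prefix}", "%Y %b %d %H:%M:%S.%f")

    switch_root = parse_ts("May 9 00:13:59.151681")   # systemd[1]: Switching root.
    journal_stop = parse_ts("May 9 00:13:59.177981")  # systemd-journald[238]: Journal stopped
    print((journal_stop - switch_root).total_seconds())  # ~0.026 s between the two entries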
May 9 00:13:59.888598 kernel: SELinux: policy capability network_peer_controls=1 May 9 00:13:59.888617 kernel: SELinux: policy capability open_perms=1 May 9 00:13:59.888627 kernel: SELinux: policy capability extended_socket_class=1 May 9 00:13:59.888639 kernel: SELinux: policy capability always_check_network=0 May 9 00:13:59.888649 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 00:13:59.888660 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 00:13:59.888670 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 00:13:59.888680 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 00:13:59.888690 kernel: audit: type=1403 audit(1746749639.320:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 00:13:59.888703 systemd[1]: Successfully loaded SELinux policy in 32.565ms. May 9 00:13:59.888721 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.046ms. May 9 00:13:59.888733 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:13:59.888745 systemd[1]: Detected virtualization kvm. May 9 00:13:59.888757 systemd[1]: Detected architecture arm64. May 9 00:13:59.888768 systemd[1]: Detected first boot. May 9 00:13:59.888780 systemd[1]: Initializing machine ID from VM UUID. May 9 00:13:59.888791 zram_generator::config[1044]: No configuration found. May 9 00:13:59.888804 systemd[1]: Populated /etc with preset unit settings. May 9 00:13:59.888815 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 00:13:59.888827 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 00:13:59.888838 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 00:13:59.888861 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 00:13:59.888874 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 00:13:59.888891 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 00:13:59.888905 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 00:13:59.888917 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 00:13:59.888933 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 00:13:59.888945 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 00:13:59.888956 systemd[1]: Created slice user.slice - User and Session Slice. May 9 00:13:59.888967 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:13:59.888979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:13:59.888990 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 00:13:59.889001 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 00:13:59.889012 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 00:13:59.889025 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
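The "systemd 255 running in system mode (...)" entry above encodes compile-time features as +NAME/-NAME tokens plus a default-hierarchy= setting. A small illustrative parser (not a systemd tool) that splits that string, copied verbatim from the entry above, into enabled and disabled sets:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
                "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
                "default-hierarchy=unified")

    enabled  = {t[1:] for t in features.split() if t.startswith('+')}
    disabled = {t[1:] for t in features.split() if t.startswith('-')}
    settings = dict(t.split('=', 1) for t in features.split() if '=' in t)

    print(sorted(disabled))   # e.g. ACL and APPARMOR are compiled out in this build
    print(settings)           # {'default-hierarchy': 'unified'}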
May 9 00:13:59.889037 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 9 00:13:59.889048 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:13:59.889059 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 00:13:59.889070 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 00:13:59.889082 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 00:13:59.889093 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 00:13:59.889104 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:13:59.889116 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:13:59.889128 systemd[1]: Reached target slices.target - Slice Units. May 9 00:13:59.889140 systemd[1]: Reached target swap.target - Swaps. May 9 00:13:59.889151 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 00:13:59.889166 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 00:13:59.889178 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:13:59.889189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:13:59.889200 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:13:59.889211 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 00:13:59.889224 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 00:13:59.889239 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 00:13:59.889254 systemd[1]: Mounting media.mount - External Media Directory... May 9 00:13:59.889266 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 00:13:59.889278 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 00:13:59.889289 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 00:13:59.889301 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 00:13:59.889312 systemd[1]: Reached target machines.target - Containers. May 9 00:13:59.889323 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 00:13:59.889337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:13:59.889348 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:13:59.889360 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 00:13:59.889371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:13:59.889382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:13:59.889394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:13:59.889405 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 00:13:59.889417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:13:59.889431 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 9 00:13:59.889443 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 00:13:59.889455 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 00:13:59.889466 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 00:13:59.889478 systemd[1]: Stopped systemd-fsck-usr.service. May 9 00:13:59.889489 kernel: fuse: init (API version 7.39) May 9 00:13:59.889500 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:13:59.889511 kernel: loop: module loaded May 9 00:13:59.889522 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:13:59.889534 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 00:13:59.889551 kernel: ACPI: bus type drm_connector registered May 9 00:13:59.889581 systemd-journald[1111]: Collecting audit messages is disabled. May 9 00:13:59.889607 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 00:13:59.889619 systemd-journald[1111]: Journal started May 9 00:13:59.889641 systemd-journald[1111]: Runtime Journal (/run/log/journal/db0255f44b664e5993147f748c1ccbdb) is 5.9M, max 47.3M, 41.4M free. May 9 00:13:59.671363 systemd[1]: Queued start job for default target multi-user.target. May 9 00:13:59.689712 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 00:13:59.691813 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 00:13:59.896155 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:13:59.899010 systemd[1]: verity-setup.service: Deactivated successfully. May 9 00:13:59.899043 systemd[1]: Stopped verity-setup.service. May 9 00:13:59.903400 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:13:59.904128 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 00:13:59.905357 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 00:13:59.906598 systemd[1]: Mounted media.mount - External Media Directory. May 9 00:13:59.907761 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 00:13:59.909030 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 00:13:59.910371 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 00:13:59.911642 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 00:13:59.913138 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:13:59.914660 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 00:13:59.914809 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 00:13:59.916334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:13:59.916478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:13:59.917931 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:13:59.918071 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:13:59.919409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:13:59.919542 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:13:59.921161 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 00:13:59.921325 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
May 9 00:13:59.922674 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:13:59.922815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:13:59.924214 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:13:59.925953 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 00:13:59.927531 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 00:13:59.940053 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 00:13:59.949982 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 00:13:59.952343 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 00:13:59.953663 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 00:13:59.953702 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:13:59.955777 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 00:13:59.958175 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 00:13:59.960379 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 00:13:59.961582 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:13:59.962904 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 00:13:59.966120 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 00:13:59.967432 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:13:59.968929 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 00:13:59.970207 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:13:59.973297 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:13:59.979075 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 00:13:59.979351 systemd-journald[1111]: Time spent on flushing to /var/log/journal/db0255f44b664e5993147f748c1ccbdb is 20.450ms for 856 entries. May 9 00:13:59.979351 systemd-journald[1111]: System Journal (/var/log/journal/db0255f44b664e5993147f748c1ccbdb) is 8.0M, max 195.6M, 187.6M free. May 9 00:14:00.014619 systemd-journald[1111]: Received client request to flush runtime journal. May 9 00:14:00.014669 kernel: loop0: detected capacity change from 0 to 189592 May 9 00:13:59.984166 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:13:59.986738 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:13:59.988140 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 00:13:59.989485 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 00:13:59.991072 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 00:13:59.995053 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
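journald reports its flush cost above as "20.450ms for 856 entries", which works out to roughly 24 microseconds per entry. A throwaway check of that arithmetic, using only the figures reported in the entry above:

    flush_ms, entries = 20.450, 856            # figures reported by systemd-journald above
    per_entry_us = flush_ms * 1000 / entries
    print(f"{per_entry_us:.1f} us per entry")  # ~23.9 us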
May 9 00:13:59.997542 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 00:14:00.011156 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 00:14:00.017039 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 00:14:00.018937 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 00:14:00.020837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:14:00.029215 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 00:14:00.033308 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 9 00:14:00.036033 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. May 9 00:14:00.036051 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. May 9 00:14:00.041358 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:14:00.043575 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 00:14:00.044299 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 00:14:00.053924 kernel: loop1: detected capacity change from 0 to 114328 May 9 00:14:00.054176 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 00:14:00.080865 kernel: loop2: detected capacity change from 0 to 114432 May 9 00:14:00.084533 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 00:14:00.094080 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:14:00.105935 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. May 9 00:14:00.105949 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. May 9 00:14:00.110902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:14:00.111878 kernel: loop3: detected capacity change from 0 to 189592 May 9 00:14:00.120066 kernel: loop4: detected capacity change from 0 to 114328 May 9 00:14:00.127878 kernel: loop5: detected capacity change from 0 to 114432 May 9 00:14:00.131057 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 00:14:00.131441 (sd-merge)[1182]: Merged extensions into '/usr'. May 9 00:14:00.135057 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... May 9 00:14:00.135075 systemd[1]: Reloading... May 9 00:14:00.170283 zram_generator::config[1206]: No configuration found. May 9 00:14:00.262464 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 00:14:00.285754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:14:00.322184 systemd[1]: Reloading finished in 186 ms. May 9 00:14:00.354589 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 00:14:00.356206 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 00:14:00.369065 systemd[1]: Starting ensure-sysext.service... May 9 00:14:00.371350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 9 00:14:00.381814 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... May 9 00:14:00.381826 systemd[1]: Reloading... May 9 00:14:00.395907 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 00:14:00.396172 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 00:14:00.396826 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 00:14:00.397072 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. May 9 00:14:00.397120 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. May 9 00:14:00.399773 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:14:00.399910 systemd-tmpfiles[1244]: Skipping /boot May 9 00:14:00.408669 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:14:00.408771 systemd-tmpfiles[1244]: Skipping /boot May 9 00:14:00.432872 zram_generator::config[1274]: No configuration found. May 9 00:14:00.511680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:14:00.548263 systemd[1]: Reloading finished in 166 ms. May 9 00:14:00.565053 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 00:14:00.578331 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:14:00.586814 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:14:00.589642 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 00:14:00.592323 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 00:14:00.598214 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:14:00.602099 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:14:00.606546 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 00:14:00.609699 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:14:00.612093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:14:00.615889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:14:00.621166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:14:00.622423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:14:00.624778 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:14:00.624973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:14:00.627261 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 00:14:00.628893 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 00:14:00.630630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 9 00:14:00.630763 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:14:00.632566 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:14:00.632685 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:14:00.638683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:14:00.638908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:14:00.644558 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 00:14:00.648922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:14:00.651090 systemd-udevd[1318]: Using default interface naming scheme 'v255'. May 9 00:14:00.657138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:14:00.661889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:14:00.665175 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:14:00.672514 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:14:00.675698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:14:00.679253 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 00:14:00.682385 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:14:00.684145 systemd[1]: Finished ensure-sysext.service. May 9 00:14:00.686193 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 00:14:00.689135 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 00:14:00.690956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:14:00.691147 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:14:00.693050 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:14:00.693190 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:14:00.694830 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:14:00.695027 augenrules[1352]: No rules May 9 00:14:00.695076 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:14:00.697282 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:14:00.699361 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:14:00.699480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:14:00.725078 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:14:00.726249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:14:00.726322 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:14:00.728924 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 00:14:00.730165 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 9 00:14:00.730542 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 00:14:00.751101 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 9 00:14:00.766054 systemd-resolved[1312]: Positive Trust Anchors: May 9 00:14:00.766073 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:14:00.766104 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:14:00.773882 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1347) May 9 00:14:00.799955 systemd-resolved[1312]: Defaulting to hostname 'linux'. May 9 00:14:00.802077 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 00:14:00.805748 systemd[1]: Reached target time-set.target - System Time Set. May 9 00:14:00.808298 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:14:00.809630 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:14:00.816497 systemd-networkd[1378]: lo: Link UP May 9 00:14:00.816919 systemd-networkd[1378]: lo: Gained carrier May 9 00:14:00.817648 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:14:00.817972 systemd-networkd[1378]: Enumeration completed May 9 00:14:00.820003 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:14:00.820565 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:14:00.820633 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:14:00.821226 systemd[1]: Reached target network.target - Network. May 9 00:14:00.821525 systemd-networkd[1378]: eth0: Link UP May 9 00:14:00.821584 systemd-networkd[1378]: eth0: Gained carrier May 9 00:14:00.821651 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:14:00.831079 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 00:14:00.833928 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:14:00.834832 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 00:14:00.835950 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. May 9 00:14:01.248999 systemd-resolved[1312]: Clock change detected. Flushing caches. May 9 00:14:01.249219 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 00:14:01.249335 systemd-timesyncd[1379]: Initial clock synchronization to Fri 2025-05-09 00:14:01.247383 UTC. May 9 00:14:01.252973 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
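The DHCPv4 entry above gives eth0 the address 10.0.0.91/16 with gateway 10.0.0.1. A small sketch using Python's ipaddress module to derive the network and confirm the gateway is on-link; the values are copied from that entry:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.91/16")   # address/prefix as logged by systemd-networkd
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                  # 10.0.0.0/16
    print(iface.network.num_addresses)    # 65536
    print(gateway in iface.network)       # True: the gateway sits inside the leased network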
May 9 00:14:01.272366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:14:01.285419 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 00:14:01.297455 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 00:14:01.318894 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:14:01.318895 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:14:01.353744 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 00:14:01.355296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:14:01.356422 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:14:01.357645 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 00:14:01.358939 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 00:14:01.360402 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 00:14:01.361567 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 00:14:01.362970 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 00:14:01.364236 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 00:14:01.364276 systemd[1]: Reached target paths.target - Path Units. May 9 00:14:01.365152 systemd[1]: Reached target timers.target - Timer Units. May 9 00:14:01.366809 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 00:14:01.369406 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 00:14:01.377153 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 00:14:01.379400 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 00:14:01.380998 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 00:14:01.382235 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:14:01.383167 systemd[1]: Reached target basic.target - Basic System. May 9 00:14:01.384126 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 00:14:01.384158 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 00:14:01.385181 systemd[1]: Starting containerd.service - containerd container runtime... May 9 00:14:01.387248 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 00:14:01.390262 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:14:01.389259 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 00:14:01.392952 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 00:14:01.394339 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 00:14:01.396737 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 00:14:01.399996 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
May 9 00:14:01.402272 jq[1409]: false May 9 00:14:01.405520 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 00:14:01.409245 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 00:14:01.415305 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 00:14:01.417240 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 00:14:01.417632 dbus-daemon[1408]: [system] SELinux support is enabled May 9 00:14:01.417682 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 00:14:01.417946 extend-filesystems[1410]: Found loop3 May 9 00:14:01.418782 extend-filesystems[1410]: Found loop4 May 9 00:14:01.418782 extend-filesystems[1410]: Found loop5 May 9 00:14:01.418782 extend-filesystems[1410]: Found vda May 9 00:14:01.418782 extend-filesystems[1410]: Found vda1 May 9 00:14:01.418782 extend-filesystems[1410]: Found vda2 May 9 00:14:01.418782 extend-filesystems[1410]: Found vda3 May 9 00:14:01.418782 extend-filesystems[1410]: Found usr May 9 00:14:01.418782 extend-filesystems[1410]: Found vda4 May 9 00:14:01.418782 extend-filesystems[1410]: Found vda6 May 9 00:14:01.418782 extend-filesystems[1410]: Found vda7 May 9 00:14:01.418782 extend-filesystems[1410]: Found vda9 May 9 00:14:01.418782 extend-filesystems[1410]: Checking size of /dev/vda9 May 9 00:14:01.418331 systemd[1]: Starting update-engine.service - Update Engine... May 9 00:14:01.423069 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 00:14:01.425034 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 00:14:01.431153 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 00:14:01.438447 jq[1425]: true May 9 00:14:01.435463 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 00:14:01.435629 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 00:14:01.435889 systemd[1]: motdgen.service: Deactivated successfully. May 9 00:14:01.436019 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 00:14:01.439559 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 00:14:01.439722 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 00:14:01.451736 (ntainerd)[1431]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 00:14:01.453303 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 00:14:01.453376 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 00:14:01.456451 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 00:14:01.456482 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
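The extend-filesystems entries above enumerate the vda partitions and start checking the size of /dev/vda9; a few entries further on, the kernel reports the ext4 filesystem growing from 553472 to 1864699 blocks and resize2fs confirms they are 4 KiB blocks. A quick conversion of those block counts (numbers taken from the resize messages below) into bytes:

    BLOCK = 4096                                   # resize2fs reports "(4k) blocks"
    old_blocks, new_blocks = 553_472, 1_864_699    # from the EXT4-fs resize message below

    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")   # ~7.11 GiB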
May 9 00:14:01.470281 extend-filesystems[1410]: Resized partition /dev/vda9 May 9 00:14:01.473639 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1360) May 9 00:14:01.473684 update_engine[1419]: I20250509 00:14:01.472085 1419 main.cc:92] Flatcar Update Engine starting May 9 00:14:01.474943 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024) May 9 00:14:01.478380 systemd[1]: Started update-engine.service - Update Engine. May 9 00:14:01.481078 jq[1430]: true May 9 00:14:01.481285 update_engine[1419]: I20250509 00:14:01.480928 1419 update_check_scheduler.cc:74] Next update check in 11m15s May 9 00:14:01.484133 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 00:14:01.495403 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 00:14:01.499790 tar[1428]: linux-arm64/helm May 9 00:14:01.501318 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) May 9 00:14:01.516510 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 00:14:01.504644 systemd-logind[1418]: New seat seat0. May 9 00:14:01.507126 systemd[1]: Started systemd-logind.service - User Login Management. May 9 00:14:01.518456 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 00:14:01.518456 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 00:14:01.518456 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 00:14:01.525993 extend-filesystems[1410]: Resized filesystem in /dev/vda9 May 9 00:14:01.525649 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 00:14:01.525829 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 00:14:01.559005 bash[1462]: Updated "/home/core/.ssh/authorized_keys" May 9 00:14:01.560877 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 00:14:01.564932 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 00:14:01.590338 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 00:14:01.675466 containerd[1431]: time="2025-05-09T00:14:01.675374034Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 00:14:01.705809 containerd[1431]: time="2025-05-09T00:14:01.705736394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 00:14:01.707318 containerd[1431]: time="2025-05-09T00:14:01.707276354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 00:14:01.707544 containerd[1431]: time="2025-05-09T00:14:01.707463434Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.707621674Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.707847794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.707868514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.707925714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.707939394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.708130874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.708147714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.708160954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.708170634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.708252074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 00:14:01.708842 containerd[1431]: time="2025-05-09T00:14:01.708440714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 00:14:01.709134 containerd[1431]: time="2025-05-09T00:14:01.708564914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:14:01.709134 containerd[1431]: time="2025-05-09T00:14:01.708581754Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 00:14:01.709134 containerd[1431]: time="2025-05-09T00:14:01.708663634Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 00:14:01.709134 containerd[1431]: time="2025-05-09T00:14:01.708702874Z" level=info msg="metadata content store policy set" policy=shared May 9 00:14:01.712488 containerd[1431]: time="2025-05-09T00:14:01.712461314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 00:14:01.712664 containerd[1431]: time="2025-05-09T00:14:01.712645954Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 00:14:01.712841 containerd[1431]: time="2025-05-09T00:14:01.712819954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 00:14:01.712997 containerd[1431]: time="2025-05-09T00:14:01.712979474Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 May 9 00:14:01.713125 containerd[1431]: time="2025-05-09T00:14:01.713090114Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 00:14:01.713437 containerd[1431]: time="2025-05-09T00:14:01.713415474Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 00:14:01.714079 containerd[1431]: time="2025-05-09T00:14:01.714055514Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 00:14:01.714401 containerd[1431]: time="2025-05-09T00:14:01.714329154Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 00:14:01.714475 containerd[1431]: time="2025-05-09T00:14:01.714459394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 00:14:01.714604 containerd[1431]: time="2025-05-09T00:14:01.714587834Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 00:14:01.714668 containerd[1431]: time="2025-05-09T00:14:01.714654634Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 00:14:01.714768 containerd[1431]: time="2025-05-09T00:14:01.714753154Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 00:14:01.714830 containerd[1431]: time="2025-05-09T00:14:01.714817594Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.714920634Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.714944594Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.714959314Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.714971234Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.714981754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.715001554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.715014954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.715031794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.715046794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.715058594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.715070514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.715086474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.715113594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715361 containerd[1431]: time="2025-05-09T00:14:01.715132634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715658 containerd[1431]: time="2025-05-09T00:14:01.715148514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715658 containerd[1431]: time="2025-05-09T00:14:01.715160034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715658 containerd[1431]: time="2025-05-09T00:14:01.715171954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715658 containerd[1431]: time="2025-05-09T00:14:01.715184314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715658 containerd[1431]: time="2025-05-09T00:14:01.715199114Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:14:01.715658 containerd[1431]: time="2025-05-09T00:14:01.715223434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715658 containerd[1431]: time="2025-05-09T00:14:01.715236634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:14:01.715658 containerd[1431]: time="2025-05-09T00:14:01.715248514Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:14:01.716178 containerd[1431]: time="2025-05-09T00:14:01.715957394Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:14:01.716332 containerd[1431]: time="2025-05-09T00:14:01.715989914Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:14:01.716332 containerd[1431]: time="2025-05-09T00:14:01.716247834Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:14:01.716332 containerd[1431]: time="2025-05-09T00:14:01.716268314Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:14:01.716332 containerd[1431]: time="2025-05-09T00:14:01.716279034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:14:01.716721 containerd[1431]: time="2025-05-09T00:14:01.716540394Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:14:01.716721 containerd[1431]: time="2025-05-09T00:14:01.716563874Z" level=info msg="NRI interface is disabled by configuration." 
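The containerd startup entries above are dominated by per-plugin messages: 'loading plugin "..."' for plugins that come up, and 'skip loading plugin "..."' with an error= reason for ones that are disabled or unsupported in this boot (aufs, blockfile, btrfs, devmapper, zfs, otlp tracing). A minimal sketch that summarises such lines into a skipped-plugin report; the two sample strings are trimmed copies of entries above:

    import re

    NAME_RE = re.compile(r'skip loading plugin \\?"([^"\\]+)')
    ERR_RE = re.compile(r'error="([^"]+)"')

    def skipped_plugins(lines):
        report = {}
        for line in lines:
            name = NAME_RE.search(line)
            if name:
                err = ERR_RE.search(line)
                report[name.group(1)] = err.group(1) if err else "unknown"
        return report

    sample = [
        'msg="skip loading plugin \\"io.containerd.snapshotter.v1.devmapper\\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1',
        'msg="skip loading plugin \\"io.containerd.tracing.processor.v1.otlp\\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1',
    ]
    for name, reason in skipped_plugins(sample).items():
        print(f"{name}: {reason}")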
May 9 00:14:01.716721 containerd[1431]: time="2025-05-09T00:14:01.716575394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 9 00:14:01.717772 containerd[1431]: time="2025-05-09T00:14:01.717234634Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:14:01.717772 containerd[1431]: time="2025-05-09T00:14:01.717308754Z" level=info msg="Connect containerd service" May 9 00:14:01.717772 containerd[1431]: time="2025-05-09T00:14:01.717336914Z" level=info msg="using legacy CRI server" May 9 00:14:01.717772 containerd[1431]: time="2025-05-09T00:14:01.717343914Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:14:01.717772 containerd[1431]: time="2025-05-09T00:14:01.717438634Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:14:01.719149 containerd[1431]: time="2025-05-09T00:14:01.718448474Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:14:01.719149 containerd[1431]: time="2025-05-09T00:14:01.718727914Z" level=info msg="Start subscribing containerd event" May 9 00:14:01.719149 containerd[1431]: time="2025-05-09T00:14:01.719071354Z" level=info msg="Start recovering state" May 9 00:14:01.719273 containerd[1431]: time="2025-05-09T00:14:01.719176634Z" level=info msg="Start event monitor" May 9 00:14:01.719599 containerd[1431]: time="2025-05-09T00:14:01.719449314Z" level=info msg="Start snapshots syncer" May 9 00:14:01.719599 containerd[1431]: time="2025-05-09T00:14:01.719483114Z" level=info msg="Start cni network conf syncer for default" May 9 00:14:01.719599 containerd[1431]: time="2025-05-09T00:14:01.719501554Z" level=info msg="Start streaming server" May 9 00:14:01.719797 containerd[1431]: time="2025-05-09T00:14:01.719764714Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:14:01.719972 containerd[1431]: time="2025-05-09T00:14:01.719947114Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:14:01.720121 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:14:01.723238 containerd[1431]: time="2025-05-09T00:14:01.721541994Z" level=info msg="containerd successfully booted in 0.047650s" May 9 00:14:01.862076 tar[1428]: linux-arm64/LICENSE May 9 00:14:01.862191 tar[1428]: linux-arm64/README.md May 9 00:14:01.874730 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 00:14:02.307247 systemd-networkd[1378]: eth0: Gained IPv6LL May 9 00:14:02.310702 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:14:02.312437 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:14:02.322411 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:14:02.324458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:14:02.326536 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:14:02.344835 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:14:02.345041 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:14:02.346582 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:14:02.349138 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:14:02.511430 sshd_keygen[1438]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 00:14:02.530703 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:14:02.541340 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:14:02.548016 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:14:02.548234 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:14:02.551053 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:14:02.562982 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 00:14:02.566339 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:14:02.568826 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 00:14:02.570606 systemd[1]: Reached target getty.target - Login Prompts. May 9 00:14:02.817663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 00:14:02.819248 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:14:02.821954 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:14:02.824441 systemd[1]: Startup finished in 614ms (kernel) + 4.612s (initrd) + 3.128s (userspace) = 8.356s. May 9 00:14:03.264471 kubelet[1521]: E0509 00:14:03.264346 1521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:14:03.266193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:14:03.266346 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:14:08.025740 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:14:08.026855 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:50760.service - OpenSSH per-connection server daemon (10.0.0.1:50760). May 9 00:14:08.086736 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 50760 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:14:08.088806 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:14:08.103303 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:14:08.123415 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:14:08.125907 systemd-logind[1418]: New session 1 of user core. May 9 00:14:08.134687 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:14:08.137081 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:14:08.144451 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:14:08.224083 systemd[1539]: Queued start job for default target default.target. May 9 00:14:08.235074 systemd[1539]: Created slice app.slice - User Application Slice. May 9 00:14:08.235123 systemd[1539]: Reached target paths.target - Paths. May 9 00:14:08.235137 systemd[1539]: Reached target timers.target - Timers. May 9 00:14:08.236414 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:14:08.247003 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:14:08.247075 systemd[1539]: Reached target sockets.target - Sockets. May 9 00:14:08.247087 systemd[1539]: Reached target basic.target - Basic System. May 9 00:14:08.247143 systemd[1539]: Reached target default.target - Main User Target. May 9 00:14:08.247173 systemd[1539]: Startup finished in 97ms. May 9 00:14:08.247463 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:14:08.249077 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:14:08.311239 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:50770.service - OpenSSH per-connection server daemon (10.0.0.1:50770). May 9 00:14:08.349718 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 50770 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:14:08.351125 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:14:08.355177 systemd-logind[1418]: New session 2 of user core. 
May 9 00:14:08.364270 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 00:14:08.416071 sshd[1550]: pam_unix(sshd:session): session closed for user core May 9 00:14:08.425476 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:50770.service: Deactivated successfully. May 9 00:14:08.427577 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:14:08.429828 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. May 9 00:14:08.437464 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:50786.service - OpenSSH per-connection server daemon (10.0.0.1:50786). May 9 00:14:08.438499 systemd-logind[1418]: Removed session 2. May 9 00:14:08.470621 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 50786 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:14:08.471903 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:14:08.475828 systemd-logind[1418]: New session 3 of user core. May 9 00:14:08.481251 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:14:08.529315 sshd[1557]: pam_unix(sshd:session): session closed for user core May 9 00:14:08.544548 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:50786.service: Deactivated successfully. May 9 00:14:08.546055 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:14:08.547249 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. May 9 00:14:08.563406 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:50798.service - OpenSSH per-connection server daemon (10.0.0.1:50798). May 9 00:14:08.564265 systemd-logind[1418]: Removed session 3. May 9 00:14:08.595684 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 50798 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:14:08.596937 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:14:08.600584 systemd-logind[1418]: New session 4 of user core. May 9 00:14:08.612269 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:14:08.662977 sshd[1564]: pam_unix(sshd:session): session closed for user core May 9 00:14:08.672205 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:50798.service: Deactivated successfully. May 9 00:14:08.673519 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:14:08.676130 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. May 9 00:14:08.677233 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:50806.service - OpenSSH per-connection server daemon (10.0.0.1:50806). May 9 00:14:08.677956 systemd-logind[1418]: Removed session 4. May 9 00:14:08.713474 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 50806 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:14:08.714681 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:14:08.718474 systemd-logind[1418]: New session 5 of user core. May 9 00:14:08.733239 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:14:08.794461 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:14:08.795083 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:14:09.095419 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 9 00:14:09.095467 (dockerd)[1593]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:14:09.348097 dockerd[1593]: time="2025-05-09T00:14:09.347972994Z" level=info msg="Starting up" May 9 00:14:09.471050 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport73063196-merged.mount: Deactivated successfully. May 9 00:14:09.485708 dockerd[1593]: time="2025-05-09T00:14:09.485650474Z" level=info msg="Loading containers: start." May 9 00:14:09.572289 kernel: Initializing XFRM netlink socket May 9 00:14:09.641432 systemd-networkd[1378]: docker0: Link UP May 9 00:14:09.660375 dockerd[1593]: time="2025-05-09T00:14:09.660329674Z" level=info msg="Loading containers: done." May 9 00:14:09.675132 dockerd[1593]: time="2025-05-09T00:14:09.674990394Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:14:09.675132 dockerd[1593]: time="2025-05-09T00:14:09.675080514Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 00:14:09.675274 dockerd[1593]: time="2025-05-09T00:14:09.675200154Z" level=info msg="Daemon has completed initialization" May 9 00:14:09.700183 dockerd[1593]: time="2025-05-09T00:14:09.700052714Z" level=info msg="API listen on /run/docker.sock" May 9 00:14:09.700267 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:14:10.238568 containerd[1431]: time="2025-05-09T00:14:10.238520994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 9 00:14:10.838631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3856521270.mount: Deactivated successfully. 
May 9 00:14:12.685519 containerd[1431]: time="2025-05-09T00:14:12.685404514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:12.686391 containerd[1431]: time="2025-05-09T00:14:12.686125154Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 9 00:14:12.687203 containerd[1431]: time="2025-05-09T00:14:12.687166754Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:12.690342 containerd[1431]: time="2025-05-09T00:14:12.690288114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:12.691561 containerd[1431]: time="2025-05-09T00:14:12.691520914Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.45294832s" May 9 00:14:12.691561 containerd[1431]: time="2025-05-09T00:14:12.691559634Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 9 00:14:12.692343 containerd[1431]: time="2025-05-09T00:14:12.692163354Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 9 00:14:13.516704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:14:13.526269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:14:13.630403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:14:13.633805 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:14:13.666877 kubelet[1802]: E0509 00:14:13.666809 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:14:13.669853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:14:13.670015 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 9 00:14:14.181508 containerd[1431]: time="2025-05-09T00:14:14.181345634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:14.182342 containerd[1431]: time="2025-05-09T00:14:14.182107314Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 9 00:14:14.182949 containerd[1431]: time="2025-05-09T00:14:14.182918834Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:14.185965 containerd[1431]: time="2025-05-09T00:14:14.185914914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:14.188004 containerd[1431]: time="2025-05-09T00:14:14.187972274Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.49577504s" May 9 00:14:14.188063 containerd[1431]: time="2025-05-09T00:14:14.188011554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 9 00:14:14.188506 containerd[1431]: time="2025-05-09T00:14:14.188484994Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 9 00:14:15.445314 containerd[1431]: time="2025-05-09T00:14:15.445260674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:15.447094 containerd[1431]: time="2025-05-09T00:14:15.447033314Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 9 00:14:15.447814 containerd[1431]: time="2025-05-09T00:14:15.447772194Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:15.451059 containerd[1431]: time="2025-05-09T00:14:15.451012834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:15.452350 containerd[1431]: time="2025-05-09T00:14:15.452218434Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.26370308s" May 9 00:14:15.452350 containerd[1431]: time="2025-05-09T00:14:15.452252474Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 9 00:14:15.452996 containerd[1431]: 
time="2025-05-09T00:14:15.452884274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 9 00:14:16.425008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046542431.mount: Deactivated successfully. May 9 00:14:16.637667 containerd[1431]: time="2025-05-09T00:14:16.637620354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:16.638563 containerd[1431]: time="2025-05-09T00:14:16.638457914Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 9 00:14:16.639426 containerd[1431]: time="2025-05-09T00:14:16.639395274Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:16.641448 containerd[1431]: time="2025-05-09T00:14:16.641389994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:16.642093 containerd[1431]: time="2025-05-09T00:14:16.641953354Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.18903816s" May 9 00:14:16.642093 containerd[1431]: time="2025-05-09T00:14:16.641993394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 9 00:14:16.642716 containerd[1431]: time="2025-05-09T00:14:16.642534674Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 00:14:17.044000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468555195.mount: Deactivated successfully. 
May 9 00:14:17.787326 containerd[1431]: time="2025-05-09T00:14:17.787275874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:17.788281 containerd[1431]: time="2025-05-09T00:14:17.788235834Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 9 00:14:17.789323 containerd[1431]: time="2025-05-09T00:14:17.789274634Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:17.792112 containerd[1431]: time="2025-05-09T00:14:17.792068954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:17.793449 containerd[1431]: time="2025-05-09T00:14:17.793328114Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.15076116s" May 9 00:14:17.793449 containerd[1431]: time="2025-05-09T00:14:17.793360114Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 9 00:14:17.793929 containerd[1431]: time="2025-05-09T00:14:17.793907874Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 00:14:18.239346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2041462258.mount: Deactivated successfully. 
May 9 00:14:18.243634 containerd[1431]: time="2025-05-09T00:14:18.243590434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:18.244086 containerd[1431]: time="2025-05-09T00:14:18.244054274Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 9 00:14:18.244858 containerd[1431]: time="2025-05-09T00:14:18.244797594Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:18.247443 containerd[1431]: time="2025-05-09T00:14:18.247405794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:18.248264 containerd[1431]: time="2025-05-09T00:14:18.248183274Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 454.24596ms" May 9 00:14:18.248264 containerd[1431]: time="2025-05-09T00:14:18.248213154Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 9 00:14:18.248661 containerd[1431]: time="2025-05-09T00:14:18.248637554Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 9 00:14:18.697155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227331325.mount: Deactivated successfully. May 9 00:14:22.312369 containerd[1431]: time="2025-05-09T00:14:22.312210754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:22.313058 containerd[1431]: time="2025-05-09T00:14:22.313015994Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 9 00:14:22.314114 containerd[1431]: time="2025-05-09T00:14:22.314076954Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:22.317547 containerd[1431]: time="2025-05-09T00:14:22.317475034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:22.318981 containerd[1431]: time="2025-05-09T00:14:22.318939314Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.07026972s" May 9 00:14:22.318981 containerd[1431]: time="2025-05-09T00:14:22.318977874Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 9 00:14:23.920292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 9 00:14:23.934526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:14:24.023551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:14:24.027310 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:14:24.062346 kubelet[1961]: E0509 00:14:24.062290 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:14:24.064848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:14:24.064988 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:14:26.556311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:14:26.567368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:14:26.593487 systemd[1]: Reloading requested from client PID 1976 ('systemctl') (unit session-5.scope)... May 9 00:14:26.593503 systemd[1]: Reloading... May 9 00:14:26.661190 zram_generator::config[2021]: No configuration found. May 9 00:14:26.741828 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:14:26.795530 systemd[1]: Reloading finished in 201 ms. May 9 00:14:26.836221 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:14:26.840229 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:14:26.840475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:14:26.842250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:14:26.940943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:14:26.945811 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:14:26.982219 kubelet[2062]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:14:26.982219 kubelet[2062]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:14:26.982219 kubelet[2062]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 00:14:26.982603 kubelet[2062]: I0509 00:14:26.982497 2062 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:14:27.543159 kubelet[2062]: I0509 00:14:27.541980 2062 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 00:14:27.543159 kubelet[2062]: I0509 00:14:27.542012 2062 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:14:27.543159 kubelet[2062]: I0509 00:14:27.542271 2062 server.go:929] "Client rotation is on, will bootstrap in background" May 9 00:14:27.575376 kubelet[2062]: I0509 00:14:27.575336 2062 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:14:27.575943 kubelet[2062]: E0509 00:14:27.575896 2062 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 9 00:14:27.585383 kubelet[2062]: E0509 00:14:27.585341 2062 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:14:27.585383 kubelet[2062]: I0509 00:14:27.585380 2062 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:14:27.590677 kubelet[2062]: I0509 00:14:27.590614 2062 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:14:27.591555 kubelet[2062]: I0509 00:14:27.591506 2062 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 00:14:27.591723 kubelet[2062]: I0509 00:14:27.591678 2062 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:14:27.591906 kubelet[2062]: I0509 00:14:27.591718 2062 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:14:27.592052 kubelet[2062]: I0509 00:14:27.592032 2062 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:14:27.592052 kubelet[2062]: I0509 00:14:27.592044 2062 container_manager_linux.go:300] "Creating device plugin manager" May 9 00:14:27.592277 kubelet[2062]: I0509 00:14:27.592255 2062 state_mem.go:36] "Initialized new in-memory state store" May 9 00:14:27.596027 kubelet[2062]: I0509 00:14:27.595701 2062 kubelet.go:408] "Attempting to sync node with API server" May 9 00:14:27.596027 kubelet[2062]: I0509 00:14:27.595739 2062 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:14:27.596027 kubelet[2062]: I0509 00:14:27.595770 2062 kubelet.go:314] "Adding apiserver pod source" May 9 00:14:27.596027 kubelet[2062]: I0509 00:14:27.595782 2062 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:14:27.597569 kubelet[2062]: I0509 00:14:27.597538 2062 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:14:27.599772 kubelet[2062]: I0509 00:14:27.599638 2062 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:14:27.600677 kubelet[2062]: W0509 00:14:27.599846 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: 
connect: connection refused May 9 00:14:27.600677 kubelet[2062]: E0509 00:14:27.599908 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 9 00:14:27.600677 kubelet[2062]: W0509 00:14:27.599897 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 9 00:14:27.600677 kubelet[2062]: E0509 00:14:27.599942 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 9 00:14:27.600677 kubelet[2062]: W0509 00:14:27.600375 2062 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:14:27.601781 kubelet[2062]: I0509 00:14:27.601131 2062 server.go:1269] "Started kubelet" May 9 00:14:27.607168 kubelet[2062]: I0509 00:14:27.602347 2062 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:14:27.608384 kubelet[2062]: I0509 00:14:27.607561 2062 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:14:27.608384 kubelet[2062]: I0509 00:14:27.607775 2062 volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 00:14:27.608384 kubelet[2062]: I0509 00:14:27.607899 2062 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 00:14:27.608384 kubelet[2062]: I0509 00:14:27.607961 2062 reconciler.go:26] "Reconciler: start to sync state" May 9 00:14:27.608384 kubelet[2062]: W0509 00:14:27.608316 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 9 00:14:27.608384 kubelet[2062]: E0509 00:14:27.608376 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 9 00:14:27.609382 kubelet[2062]: E0509 00:14:27.608710 2062 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:14:27.609382 kubelet[2062]: I0509 00:14:27.608885 2062 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:14:27.609433 kubelet[2062]: E0509 00:14:27.609409 2062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" 
interval="200ms" May 9 00:14:27.609777 kubelet[2062]: I0509 00:14:27.609532 2062 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:14:27.609996 kubelet[2062]: I0509 00:14:27.609846 2062 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:14:27.610040 kubelet[2062]: I0509 00:14:27.609991 2062 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:14:27.613093 kubelet[2062]: I0509 00:14:27.613053 2062 factory.go:221] Registration of the containerd container factory successfully May 9 00:14:27.613093 kubelet[2062]: I0509 00:14:27.613079 2062 factory.go:221] Registration of the systemd container factory successfully May 9 00:14:27.613293 kubelet[2062]: E0509 00:14:27.611677 2062 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db39168e5115a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:14:27.601092954 +0000 UTC m=+0.652182881,LastTimestamp:2025-05-09 00:14:27.601092954 +0000 UTC m=+0.652182881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:14:27.613494 kubelet[2062]: E0509 00:14:27.613440 2062 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:14:27.616044 kubelet[2062]: I0509 00:14:27.616010 2062 server.go:460] "Adding debug handlers to kubelet server" May 9 00:14:27.620890 kubelet[2062]: I0509 00:14:27.620836 2062 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:14:27.622134 kubelet[2062]: I0509 00:14:27.622076 2062 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 00:14:27.622243 kubelet[2062]: I0509 00:14:27.622185 2062 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:14:27.622243 kubelet[2062]: I0509 00:14:27.622205 2062 kubelet.go:2321] "Starting kubelet main sync loop" May 9 00:14:27.622288 kubelet[2062]: E0509 00:14:27.622249 2062 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:14:27.623830 kubelet[2062]: W0509 00:14:27.623758 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 9 00:14:27.624269 kubelet[2062]: E0509 00:14:27.624079 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 9 00:14:27.625701 kubelet[2062]: I0509 00:14:27.625676 2062 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:14:27.625701 kubelet[2062]: I0509 00:14:27.625696 2062 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:14:27.625841 kubelet[2062]: I0509 00:14:27.625718 2062 state_mem.go:36] "Initialized new in-memory state store" May 9 00:14:27.627324 kubelet[2062]: I0509 00:14:27.627291 2062 policy_none.go:49] "None policy: Start" May 9 00:14:27.628120 kubelet[2062]: I0509 00:14:27.628046 2062 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:14:27.628526 kubelet[2062]: I0509 00:14:27.628213 2062 state_mem.go:35] "Initializing new in-memory state store" May 9 00:14:27.635374 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:14:27.653353 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 00:14:27.656505 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 00:14:27.667187 kubelet[2062]: I0509 00:14:27.667153 2062 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:14:27.667526 kubelet[2062]: I0509 00:14:27.667388 2062 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:14:27.667526 kubelet[2062]: I0509 00:14:27.667415 2062 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:14:27.668348 kubelet[2062]: I0509 00:14:27.667898 2062 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:14:27.669331 kubelet[2062]: E0509 00:14:27.669304 2062 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 00:14:27.730727 systemd[1]: Created slice kubepods-burstable-podcedb6ac12ec85b971ae9c0dfc04deef9.slice - libcontainer container kubepods-burstable-podcedb6ac12ec85b971ae9c0dfc04deef9.slice. May 9 00:14:27.751119 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. 
May 9 00:14:27.754740 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 9 00:14:27.769164 kubelet[2062]: I0509 00:14:27.769133 2062 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:14:27.769612 kubelet[2062]: E0509 00:14:27.769585 2062 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" May 9 00:14:27.810163 kubelet[2062]: E0509 00:14:27.810043 2062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" May 9 00:14:27.909355 kubelet[2062]: I0509 00:14:27.909303 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:14:27.909454 kubelet[2062]: I0509 00:14:27.909357 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:14:27.909454 kubelet[2062]: I0509 00:14:27.909385 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 9 00:14:27.909454 kubelet[2062]: I0509 00:14:27.909402 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cedb6ac12ec85b971ae9c0dfc04deef9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cedb6ac12ec85b971ae9c0dfc04deef9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:14:27.909454 kubelet[2062]: I0509 00:14:27.909417 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:14:27.909454 kubelet[2062]: I0509 00:14:27.909434 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:14:27.909610 kubelet[2062]: I0509 00:14:27.909448 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:14:27.909610 kubelet[2062]: I0509 00:14:27.909462 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cedb6ac12ec85b971ae9c0dfc04deef9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cedb6ac12ec85b971ae9c0dfc04deef9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:14:27.909610 kubelet[2062]: I0509 00:14:27.909479 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cedb6ac12ec85b971ae9c0dfc04deef9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cedb6ac12ec85b971ae9c0dfc04deef9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:14:27.970730 kubelet[2062]: I0509 00:14:27.970703 2062 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:14:27.971052 kubelet[2062]: E0509 00:14:27.971026 2062 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" May 9 00:14:28.049338 kubelet[2062]: E0509 00:14:28.049238 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:28.049933 containerd[1431]: time="2025-05-09T00:14:28.049889114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cedb6ac12ec85b971ae9c0dfc04deef9,Namespace:kube-system,Attempt:0,}" May 9 00:14:28.053319 kubelet[2062]: E0509 00:14:28.053259 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:28.053753 containerd[1431]: time="2025-05-09T00:14:28.053719354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 9 00:14:28.057300 kubelet[2062]: E0509 00:14:28.057197 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:28.057663 containerd[1431]: time="2025-05-09T00:14:28.057627354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 9 00:14:28.211447 kubelet[2062]: E0509 00:14:28.211332 2062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms" May 9 00:14:28.372935 kubelet[2062]: I0509 00:14:28.372899 2062 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:14:28.373257 kubelet[2062]: E0509 00:14:28.373236 2062 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" May 9 00:14:28.422148 
kubelet[2062]: W0509 00:14:28.422057 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 9 00:14:28.422286 kubelet[2062]: E0509 00:14:28.422156 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 9 00:14:28.521137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071124577.mount: Deactivated successfully. May 9 00:14:28.526683 containerd[1431]: time="2025-05-09T00:14:28.526577274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:14:28.528136 containerd[1431]: time="2025-05-09T00:14:28.528088154Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:14:28.529240 containerd[1431]: time="2025-05-09T00:14:28.529207874Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 9 00:14:28.529757 containerd[1431]: time="2025-05-09T00:14:28.529720834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:14:28.530323 containerd[1431]: time="2025-05-09T00:14:28.530295274Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:14:28.531323 containerd[1431]: time="2025-05-09T00:14:28.531273074Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:14:28.532473 containerd[1431]: time="2025-05-09T00:14:28.532412154Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:14:28.534737 containerd[1431]: time="2025-05-09T00:14:28.534657114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:14:28.536534 containerd[1431]: time="2025-05-09T00:14:28.536371474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 482.57812ms" May 9 00:14:28.542899 containerd[1431]: time="2025-05-09T00:14:28.542848034Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.87668ms" May 9 00:14:28.543623 containerd[1431]: time="2025-05-09T00:14:28.543510194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.8106ms" May 9 00:14:28.696027 containerd[1431]: time="2025-05-09T00:14:28.695824954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:14:28.696027 containerd[1431]: time="2025-05-09T00:14:28.695994594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:14:28.696263 containerd[1431]: time="2025-05-09T00:14:28.696011434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:28.697574 containerd[1431]: time="2025-05-09T00:14:28.697348194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:14:28.697574 containerd[1431]: time="2025-05-09T00:14:28.697414194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:14:28.697574 containerd[1431]: time="2025-05-09T00:14:28.697427154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:28.697574 containerd[1431]: time="2025-05-09T00:14:28.697517954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:28.697796 containerd[1431]: time="2025-05-09T00:14:28.697652274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:28.699265 containerd[1431]: time="2025-05-09T00:14:28.699134754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:14:28.699265 containerd[1431]: time="2025-05-09T00:14:28.699206634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:14:28.699265 containerd[1431]: time="2025-05-09T00:14:28.699219074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:28.699410 containerd[1431]: time="2025-05-09T00:14:28.699341474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:28.724331 systemd[1]: Started cri-containerd-16961c5c493b4e255dc1618c8127c13576dffe8b7367bac38abaf70a2c3dd1dc.scope - libcontainer container 16961c5c493b4e255dc1618c8127c13576dffe8b7367bac38abaf70a2c3dd1dc. May 9 00:14:28.725651 systemd[1]: Started cri-containerd-8df0104886c54089987ab86bb4f94a952efb1e86cdd5c3ddcd3cd761b7ee019b.scope - libcontainer container 8df0104886c54089987ab86bb4f94a952efb1e86cdd5c3ddcd3cd761b7ee019b. 
May 9 00:14:28.726758 systemd[1]: Started cri-containerd-a471814233cef3ac660aebbc71749468d67ec480da20c3c42657e1b8d27a1390.scope - libcontainer container a471814233cef3ac660aebbc71749468d67ec480da20c3c42657e1b8d27a1390. May 9 00:14:28.741580 kubelet[2062]: W0509 00:14:28.741495 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 9 00:14:28.741580 kubelet[2062]: E0509 00:14:28.741581 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 9 00:14:28.763518 containerd[1431]: time="2025-05-09T00:14:28.761564994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cedb6ac12ec85b971ae9c0dfc04deef9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8df0104886c54089987ab86bb4f94a952efb1e86cdd5c3ddcd3cd761b7ee019b\"" May 9 00:14:28.764017 kubelet[2062]: E0509 00:14:28.763613 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:28.767035 containerd[1431]: time="2025-05-09T00:14:28.766892234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"16961c5c493b4e255dc1618c8127c13576dffe8b7367bac38abaf70a2c3dd1dc\"" May 9 00:14:28.769059 containerd[1431]: time="2025-05-09T00:14:28.767873114Z" level=info msg="CreateContainer within sandbox \"8df0104886c54089987ab86bb4f94a952efb1e86cdd5c3ddcd3cd761b7ee019b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:14:28.769123 kubelet[2062]: E0509 00:14:28.768465 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:28.769888 containerd[1431]: time="2025-05-09T00:14:28.769849034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a471814233cef3ac660aebbc71749468d67ec480da20c3c42657e1b8d27a1390\"" May 9 00:14:28.770856 containerd[1431]: time="2025-05-09T00:14:28.770819874Z" level=info msg="CreateContainer within sandbox \"16961c5c493b4e255dc1618c8127c13576dffe8b7367bac38abaf70a2c3dd1dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:14:28.771060 kubelet[2062]: E0509 00:14:28.771036 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:28.773398 containerd[1431]: time="2025-05-09T00:14:28.773295154Z" level=info msg="CreateContainer within sandbox \"a471814233cef3ac660aebbc71749468d67ec480da20c3c42657e1b8d27a1390\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:14:28.787600 containerd[1431]: time="2025-05-09T00:14:28.787510194Z" level=info msg="CreateContainer within sandbox 
\"8df0104886c54089987ab86bb4f94a952efb1e86cdd5c3ddcd3cd761b7ee019b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"afd8e3011667d8e7ac24bd1811c0d1f45abc3031362b5991bee41ecf6c086d6b\"" May 9 00:14:28.788381 containerd[1431]: time="2025-05-09T00:14:28.788193994Z" level=info msg="StartContainer for \"afd8e3011667d8e7ac24bd1811c0d1f45abc3031362b5991bee41ecf6c086d6b\"" May 9 00:14:28.788943 containerd[1431]: time="2025-05-09T00:14:28.788910274Z" level=info msg="CreateContainer within sandbox \"16961c5c493b4e255dc1618c8127c13576dffe8b7367bac38abaf70a2c3dd1dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1cd529006f1cf7abd8444b87d833e35c1b07f8c21bdf5b856ac9db62fa910a44\"" May 9 00:14:28.789328 containerd[1431]: time="2025-05-09T00:14:28.789298754Z" level=info msg="StartContainer for \"1cd529006f1cf7abd8444b87d833e35c1b07f8c21bdf5b856ac9db62fa910a44\"" May 9 00:14:28.792467 containerd[1431]: time="2025-05-09T00:14:28.792410874Z" level=info msg="CreateContainer within sandbox \"a471814233cef3ac660aebbc71749468d67ec480da20c3c42657e1b8d27a1390\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4a375c42bdcb8d9f70df7b832ffb8a41a8e133c2b2e0303bcbd12c272efd42c\"" May 9 00:14:28.793062 containerd[1431]: time="2025-05-09T00:14:28.793025474Z" level=info msg="StartContainer for \"c4a375c42bdcb8d9f70df7b832ffb8a41a8e133c2b2e0303bcbd12c272efd42c\"" May 9 00:14:28.822323 systemd[1]: Started cri-containerd-1cd529006f1cf7abd8444b87d833e35c1b07f8c21bdf5b856ac9db62fa910a44.scope - libcontainer container 1cd529006f1cf7abd8444b87d833e35c1b07f8c21bdf5b856ac9db62fa910a44. May 9 00:14:28.823551 systemd[1]: Started cri-containerd-afd8e3011667d8e7ac24bd1811c0d1f45abc3031362b5991bee41ecf6c086d6b.scope - libcontainer container afd8e3011667d8e7ac24bd1811c0d1f45abc3031362b5991bee41ecf6c086d6b. May 9 00:14:28.827149 systemd[1]: Started cri-containerd-c4a375c42bdcb8d9f70df7b832ffb8a41a8e133c2b2e0303bcbd12c272efd42c.scope - libcontainer container c4a375c42bdcb8d9f70df7b832ffb8a41a8e133c2b2e0303bcbd12c272efd42c. 
May 9 00:14:28.863531 containerd[1431]: time="2025-05-09T00:14:28.862631474Z" level=info msg="StartContainer for \"afd8e3011667d8e7ac24bd1811c0d1f45abc3031362b5991bee41ecf6c086d6b\" returns successfully" May 9 00:14:28.880578 containerd[1431]: time="2025-05-09T00:14:28.880431434Z" level=info msg="StartContainer for \"c4a375c42bdcb8d9f70df7b832ffb8a41a8e133c2b2e0303bcbd12c272efd42c\" returns successfully" May 9 00:14:28.880578 containerd[1431]: time="2025-05-09T00:14:28.880553994Z" level=info msg="StartContainer for \"1cd529006f1cf7abd8444b87d833e35c1b07f8c21bdf5b856ac9db62fa910a44\" returns successfully" May 9 00:14:28.896286 kubelet[2062]: W0509 00:14:28.895098 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 9 00:14:28.896286 kubelet[2062]: E0509 00:14:28.895183 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 9 00:14:28.985312 kubelet[2062]: W0509 00:14:28.985200 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 9 00:14:28.985312 kubelet[2062]: E0509 00:14:28.985254 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 9 00:14:29.012405 kubelet[2062]: E0509 00:14:29.011958 2062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="1.6s" May 9 00:14:29.175224 kubelet[2062]: I0509 00:14:29.174786 2062 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:14:29.632220 kubelet[2062]: E0509 00:14:29.631878 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:29.633160 kubelet[2062]: E0509 00:14:29.632996 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:29.637072 kubelet[2062]: E0509 00:14:29.636987 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:30.474237 kubelet[2062]: I0509 00:14:30.474181 2062 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 9 00:14:30.599844 kubelet[2062]: I0509 00:14:30.599797 2062 apiserver.go:52] "Watching apiserver" May 9 00:14:30.608912 kubelet[2062]: I0509 00:14:30.608869 2062 desired_state_of_world_populator.go:154] "Finished populating 
initial desired state of world" May 9 00:14:30.643379 kubelet[2062]: E0509 00:14:30.643282 2062 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 9 00:14:30.643564 kubelet[2062]: E0509 00:14:30.643476 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:31.645320 kubelet[2062]: E0509 00:14:31.645276 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:32.412234 systemd[1]: Reloading requested from client PID 2343 ('systemctl') (unit session-5.scope)... May 9 00:14:32.412251 systemd[1]: Reloading... May 9 00:14:32.489164 zram_generator::config[2382]: No configuration found. May 9 00:14:32.592346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:14:32.639473 kubelet[2062]: E0509 00:14:32.639433 2062 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:32.661348 systemd[1]: Reloading finished in 248 ms. May 9 00:14:32.697722 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:14:32.712477 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:14:32.712814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:14:32.726468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:14:32.818955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:14:32.823218 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:14:32.863894 kubelet[2424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:14:32.863894 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:14:32.863894 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 00:14:32.864330 kubelet[2424]: I0509 00:14:32.863938 2424 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:14:32.872155 kubelet[2424]: I0509 00:14:32.871916 2424 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 00:14:32.872155 kubelet[2424]: I0509 00:14:32.871949 2424 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:14:32.872309 kubelet[2424]: I0509 00:14:32.872189 2424 server.go:929] "Client rotation is on, will bootstrap in background" May 9 00:14:32.873772 kubelet[2424]: I0509 00:14:32.873745 2424 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 00:14:32.875821 kubelet[2424]: I0509 00:14:32.875784 2424 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:14:32.883128 kubelet[2424]: E0509 00:14:32.880056 2424 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:14:32.883128 kubelet[2424]: I0509 00:14:32.880097 2424 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:14:32.883128 kubelet[2424]: I0509 00:14:32.882864 2424 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 00:14:32.883128 kubelet[2424]: I0509 00:14:32.882986 2424 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 00:14:32.883128 kubelet[2424]: I0509 00:14:32.883084 2424 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:14:32.883423 kubelet[2424]: I0509 00:14:32.883126 2424 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:14:32.883506 kubelet[2424]: I0509 00:14:32.883425 2424 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:14:32.883506 kubelet[2424]: I0509 00:14:32.883435 2424 container_manager_linux.go:300] "Creating device plugin manager" May 9 00:14:32.883506 kubelet[2424]: I0509 00:14:32.883471 2424 state_mem.go:36] "Initialized new in-memory state store" May 9 00:14:32.883603 kubelet[2424]: I0509 00:14:32.883587 2424 kubelet.go:408] "Attempting to sync node with API server" May 9 00:14:32.883635 kubelet[2424]: I0509 00:14:32.883608 2424 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:14:32.883659 kubelet[2424]: I0509 00:14:32.883643 2424 kubelet.go:314] "Adding apiserver pod source" May 9 00:14:32.883659 kubelet[2424]: I0509 00:14:32.883653 2424 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:14:32.887685 kubelet[2424]: I0509 00:14:32.885089 2424 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:14:32.887685 kubelet[2424]: I0509 00:14:32.885924 2424 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:14:32.887685 kubelet[2424]: I0509 00:14:32.886662 2424 server.go:1269] "Started kubelet" May 9 00:14:32.887849 kubelet[2424]: I0509 00:14:32.887726 2424 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:14:32.892124 kubelet[2424]: I0509 00:14:32.889327 2424 server.go:460] "Adding debug handlers to kubelet server" May 9 00:14:32.896117 kubelet[2424]: I0509 00:14:32.889387 2424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:14:32.896117 kubelet[2424]: I0509 00:14:32.889501 2424 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:14:32.896117 kubelet[2424]: I0509 00:14:32.895441 2424 volume_manager.go:289] "Starting Kubelet 
Volume Manager" May 9 00:14:32.896117 kubelet[2424]: I0509 00:14:32.889533 2424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:14:32.896117 kubelet[2424]: I0509 00:14:32.895710 2424 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:14:32.896117 kubelet[2424]: I0509 00:14:32.896003 2424 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 00:14:32.896346 kubelet[2424]: I0509 00:14:32.896142 2424 reconciler.go:26] "Reconciler: start to sync state" May 9 00:14:32.900609 kubelet[2424]: E0509 00:14:32.900577 2424 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:14:32.900948 kubelet[2424]: E0509 00:14:32.900921 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:14:32.902364 kubelet[2424]: I0509 00:14:32.902335 2424 factory.go:221] Registration of the systemd container factory successfully May 9 00:14:32.902576 kubelet[2424]: I0509 00:14:32.902538 2424 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:14:32.903441 kubelet[2424]: I0509 00:14:32.903389 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:14:32.907757 kubelet[2424]: I0509 00:14:32.907711 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:14:32.907757 kubelet[2424]: I0509 00:14:32.907743 2424 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:14:32.907757 kubelet[2424]: I0509 00:14:32.907762 2424 kubelet.go:2321] "Starting kubelet main sync loop" May 9 00:14:32.907903 kubelet[2424]: E0509 00:14:32.907809 2424 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:14:32.914684 kubelet[2424]: I0509 00:14:32.914634 2424 factory.go:221] Registration of the containerd container factory successfully May 9 00:14:32.944157 kubelet[2424]: I0509 00:14:32.943905 2424 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:14:32.944157 kubelet[2424]: I0509 00:14:32.943928 2424 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:14:32.944157 kubelet[2424]: I0509 00:14:32.943950 2424 state_mem.go:36] "Initialized new in-memory state store" May 9 00:14:32.944157 kubelet[2424]: I0509 00:14:32.944123 2424 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:14:32.944157 kubelet[2424]: I0509 00:14:32.944136 2424 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:14:32.944157 kubelet[2424]: I0509 00:14:32.944154 2424 policy_none.go:49] "None policy: Start" May 9 00:14:32.944761 kubelet[2424]: I0509 00:14:32.944740 2424 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:14:32.944858 kubelet[2424]: I0509 00:14:32.944847 2424 state_mem.go:35] "Initializing new in-memory state store" May 9 00:14:32.945097 kubelet[2424]: I0509 00:14:32.945080 2424 state_mem.go:75] "Updated machine memory state" May 9 00:14:32.949240 kubelet[2424]: I0509 00:14:32.949151 2424 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 
9 00:14:32.949513 kubelet[2424]: I0509 00:14:32.949320 2424 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:14:32.949513 kubelet[2424]: I0509 00:14:32.949335 2424 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:14:32.949838 kubelet[2424]: I0509 00:14:32.949695 2424 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:14:33.014588 kubelet[2424]: E0509 00:14:33.014510 2424 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:14:33.052989 kubelet[2424]: I0509 00:14:33.052957 2424 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:14:33.058875 kubelet[2424]: I0509 00:14:33.058835 2424 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 9 00:14:33.059325 kubelet[2424]: I0509 00:14:33.059298 2424 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 9 00:14:33.096699 kubelet[2424]: I0509 00:14:33.096595 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cedb6ac12ec85b971ae9c0dfc04deef9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cedb6ac12ec85b971ae9c0dfc04deef9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:14:33.096699 kubelet[2424]: I0509 00:14:33.096634 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cedb6ac12ec85b971ae9c0dfc04deef9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cedb6ac12ec85b971ae9c0dfc04deef9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:14:33.096699 kubelet[2424]: I0509 00:14:33.096652 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cedb6ac12ec85b971ae9c0dfc04deef9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cedb6ac12ec85b971ae9c0dfc04deef9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:14:33.096699 kubelet[2424]: I0509 00:14:33.096674 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:14:33.096699 kubelet[2424]: I0509 00:14:33.096710 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:14:33.097007 kubelet[2424]: I0509 00:14:33.096727 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 9 00:14:33.097007 kubelet[2424]: I0509 00:14:33.096742 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:14:33.097007 kubelet[2424]: I0509 00:14:33.096757 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:14:33.097007 kubelet[2424]: I0509 00:14:33.096771 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:14:33.314646 kubelet[2424]: E0509 00:14:33.314532 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:33.314646 kubelet[2424]: E0509 00:14:33.314591 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:33.314773 kubelet[2424]: E0509 00:14:33.314720 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:33.884921 kubelet[2424]: I0509 00:14:33.884873 2424 apiserver.go:52] "Watching apiserver" May 9 00:14:33.897074 kubelet[2424]: I0509 00:14:33.897017 2424 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 00:14:33.929375 kubelet[2424]: E0509 00:14:33.925594 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:33.929375 kubelet[2424]: E0509 00:14:33.926152 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:34.013081 kubelet[2424]: E0509 00:14:34.012885 2424 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 9 00:14:34.013467 kubelet[2424]: I0509 00:14:34.013318 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.013306212 podStartE2EDuration="1.013306212s" podCreationTimestamp="2025-05-09 00:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:14:34.013069893 +0000 UTC m=+1.186546740" watchObservedRunningTime="2025-05-09 00:14:34.013306212 +0000 UTC m=+1.186783019" May 9 00:14:34.013683 kubelet[2424]: E0509 00:14:34.013650 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
May 9 00:14:34.078095 kubelet[2424]: I0509 00:14:34.078023 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.078004407 podStartE2EDuration="3.078004407s" podCreationTimestamp="2025-05-09 00:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:14:34.054894843 +0000 UTC m=+1.228371690" watchObservedRunningTime="2025-05-09 00:14:34.078004407 +0000 UTC m=+1.251481214" May 9 00:14:34.101526 kubelet[2424]: I0509 00:14:34.101457 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.10144105 podStartE2EDuration="1.10144105s" podCreationTimestamp="2025-05-09 00:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:14:34.078094007 +0000 UTC m=+1.251570894" watchObservedRunningTime="2025-05-09 00:14:34.10144105 +0000 UTC m=+1.274917897" May 9 00:14:34.277348 sudo[1574]: pam_unix(sudo:session): session closed for user root May 9 00:14:34.283242 sshd[1571]: pam_unix(sshd:session): session closed for user core May 9 00:14:34.288781 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. May 9 00:14:34.288935 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:50806.service: Deactivated successfully. May 9 00:14:34.292411 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:14:34.293231 systemd[1]: session-5.scope: Consumed 5.594s CPU time, 154.2M memory peak, 0B memory swap peak. May 9 00:14:34.296056 systemd-logind[1418]: Removed session 5. May 9 00:14:34.928315 kubelet[2424]: E0509 00:14:34.927959 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:34.928315 kubelet[2424]: E0509 00:14:34.928010 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:35.294964 kubelet[2424]: E0509 00:14:35.294861 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:39.228633 kubelet[2424]: E0509 00:14:39.228323 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:39.519468 kubelet[2424]: I0509 00:14:39.519367 2424 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:14:39.520056 kubelet[2424]: I0509 00:14:39.519961 2424 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:14:39.520098 containerd[1431]: time="2025-05-09T00:14:39.519773579Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 9 00:14:39.936469 kubelet[2424]: E0509 00:14:39.935702 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:40.540835 kubelet[2424]: I0509 00:14:40.540784 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/8cbbfab9-2043-4c80-bd4f-7ab1e848cf91-cni\") pod \"kube-flannel-ds-r966s\" (UID: \"8cbbfab9-2043-4c80-bd4f-7ab1e848cf91\") " pod="kube-flannel/kube-flannel-ds-r966s" May 9 00:14:40.540835 kubelet[2424]: I0509 00:14:40.540828 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cbbfab9-2043-4c80-bd4f-7ab1e848cf91-xtables-lock\") pod \"kube-flannel-ds-r966s\" (UID: \"8cbbfab9-2043-4c80-bd4f-7ab1e848cf91\") " pod="kube-flannel/kube-flannel-ds-r966s" May 9 00:14:40.541470 kubelet[2424]: I0509 00:14:40.540851 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/785a8f4c-2866-4a50-9432-0bfc322c657b-xtables-lock\") pod \"kube-proxy-bnb4j\" (UID: \"785a8f4c-2866-4a50-9432-0bfc322c657b\") " pod="kube-system/kube-proxy-bnb4j" May 9 00:14:40.541470 kubelet[2424]: I0509 00:14:40.540865 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8cbbfab9-2043-4c80-bd4f-7ab1e848cf91-run\") pod \"kube-flannel-ds-r966s\" (UID: \"8cbbfab9-2043-4c80-bd4f-7ab1e848cf91\") " pod="kube-flannel/kube-flannel-ds-r966s" May 9 00:14:40.541470 kubelet[2424]: I0509 00:14:40.540880 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/8cbbfab9-2043-4c80-bd4f-7ab1e848cf91-cni-plugin\") pod \"kube-flannel-ds-r966s\" (UID: \"8cbbfab9-2043-4c80-bd4f-7ab1e848cf91\") " pod="kube-flannel/kube-flannel-ds-r966s" May 9 00:14:40.541470 kubelet[2424]: I0509 00:14:40.540894 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/785a8f4c-2866-4a50-9432-0bfc322c657b-kube-proxy\") pod \"kube-proxy-bnb4j\" (UID: \"785a8f4c-2866-4a50-9432-0bfc322c657b\") " pod="kube-system/kube-proxy-bnb4j" May 9 00:14:40.541470 kubelet[2424]: I0509 00:14:40.540908 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/785a8f4c-2866-4a50-9432-0bfc322c657b-lib-modules\") pod \"kube-proxy-bnb4j\" (UID: \"785a8f4c-2866-4a50-9432-0bfc322c657b\") " pod="kube-system/kube-proxy-bnb4j" May 9 00:14:40.541596 kubelet[2424]: I0509 00:14:40.540922 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27llf\" (UniqueName: \"kubernetes.io/projected/785a8f4c-2866-4a50-9432-0bfc322c657b-kube-api-access-27llf\") pod \"kube-proxy-bnb4j\" (UID: \"785a8f4c-2866-4a50-9432-0bfc322c657b\") " pod="kube-system/kube-proxy-bnb4j" May 9 00:14:40.541596 kubelet[2424]: I0509 00:14:40.540946 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/8cbbfab9-2043-4c80-bd4f-7ab1e848cf91-flannel-cfg\") pod 
\"kube-flannel-ds-r966s\" (UID: \"8cbbfab9-2043-4c80-bd4f-7ab1e848cf91\") " pod="kube-flannel/kube-flannel-ds-r966s" May 9 00:14:40.541596 kubelet[2424]: I0509 00:14:40.540959 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxbfc\" (UniqueName: \"kubernetes.io/projected/8cbbfab9-2043-4c80-bd4f-7ab1e848cf91-kube-api-access-zxbfc\") pod \"kube-flannel-ds-r966s\" (UID: \"8cbbfab9-2043-4c80-bd4f-7ab1e848cf91\") " pod="kube-flannel/kube-flannel-ds-r966s" May 9 00:14:40.546197 systemd[1]: Created slice kubepods-besteffort-pod785a8f4c_2866_4a50_9432_0bfc322c657b.slice - libcontainer container kubepods-besteffort-pod785a8f4c_2866_4a50_9432_0bfc322c657b.slice. May 9 00:14:40.562408 systemd[1]: Created slice kubepods-burstable-pod8cbbfab9_2043_4c80_bd4f_7ab1e848cf91.slice - libcontainer container kubepods-burstable-pod8cbbfab9_2043_4c80_bd4f_7ab1e848cf91.slice. May 9 00:14:40.862391 kubelet[2424]: E0509 00:14:40.862240 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:40.862948 containerd[1431]: time="2025-05-09T00:14:40.862640491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bnb4j,Uid:785a8f4c-2866-4a50-9432-0bfc322c657b,Namespace:kube-system,Attempt:0,}" May 9 00:14:40.864827 kubelet[2424]: E0509 00:14:40.864628 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:40.866240 containerd[1431]: time="2025-05-09T00:14:40.865082723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-r966s,Uid:8cbbfab9-2043-4c80-bd4f-7ab1e848cf91,Namespace:kube-flannel,Attempt:0,}" May 9 00:14:40.893246 containerd[1431]: time="2025-05-09T00:14:40.893165307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:14:40.893246 containerd[1431]: time="2025-05-09T00:14:40.893213107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:14:40.893246 containerd[1431]: time="2025-05-09T00:14:40.893223787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:40.893575 containerd[1431]: time="2025-05-09T00:14:40.893304067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:40.895371 containerd[1431]: time="2025-05-09T00:14:40.895205060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:14:40.895371 containerd[1431]: time="2025-05-09T00:14:40.895258100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:14:40.895371 containerd[1431]: time="2025-05-09T00:14:40.895273380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:40.895371 containerd[1431]: time="2025-05-09T00:14:40.895344860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:40.908287 systemd[1]: Started cri-containerd-ed719a2aaacb778b86ee1b56de921c5fa3042be206e53543fb234571daaae380.scope - libcontainer container ed719a2aaacb778b86ee1b56de921c5fa3042be206e53543fb234571daaae380. May 9 00:14:40.916202 systemd[1]: Started cri-containerd-7f9e38aeb58a5cb04164a902305f88edaf53747adb2c00afd63c546b0b79d264.scope - libcontainer container 7f9e38aeb58a5cb04164a902305f88edaf53747adb2c00afd63c546b0b79d264. May 9 00:14:40.932900 containerd[1431]: time="2025-05-09T00:14:40.932859052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bnb4j,Uid:785a8f4c-2866-4a50-9432-0bfc322c657b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed719a2aaacb778b86ee1b56de921c5fa3042be206e53543fb234571daaae380\"" May 9 00:14:40.933652 kubelet[2424]: E0509 00:14:40.933628 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:40.937120 containerd[1431]: time="2025-05-09T00:14:40.936602079Z" level=info msg="CreateContainer within sandbox \"ed719a2aaacb778b86ee1b56de921c5fa3042be206e53543fb234571daaae380\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:14:40.951184 containerd[1431]: time="2025-05-09T00:14:40.951024790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-r966s,Uid:8cbbfab9-2043-4c80-bd4f-7ab1e848cf91,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"7f9e38aeb58a5cb04164a902305f88edaf53747adb2c00afd63c546b0b79d264\"" May 9 00:14:40.951969 kubelet[2424]: E0509 00:14:40.951944 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:40.953708 containerd[1431]: time="2025-05-09T00:14:40.953565541Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 9 00:14:40.954463 containerd[1431]: time="2025-05-09T00:14:40.954354459Z" level=info msg="CreateContainer within sandbox \"ed719a2aaacb778b86ee1b56de921c5fa3042be206e53543fb234571daaae380\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3736c7ccb32dccebbec49c8adb94bf8ee54377684792b838b66eb087fbc48bc2\"" May 9 00:14:40.956058 containerd[1431]: time="2025-05-09T00:14:40.954979936Z" level=info msg="StartContainer for \"3736c7ccb32dccebbec49c8adb94bf8ee54377684792b838b66eb087fbc48bc2\"" May 9 00:14:40.992341 systemd[1]: Started cri-containerd-3736c7ccb32dccebbec49c8adb94bf8ee54377684792b838b66eb087fbc48bc2.scope - libcontainer container 3736c7ccb32dccebbec49c8adb94bf8ee54377684792b838b66eb087fbc48bc2. 
May 9 00:14:41.013851 containerd[1431]: time="2025-05-09T00:14:41.013491900Z" level=info msg="StartContainer for \"3736c7ccb32dccebbec49c8adb94bf8ee54377684792b838b66eb087fbc48bc2\" returns successfully" May 9 00:14:41.945623 kubelet[2424]: E0509 00:14:41.945286 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:41.955613 kubelet[2424]: I0509 00:14:41.955554 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bnb4j" podStartSLOduration=1.955529048 podStartE2EDuration="1.955529048s" podCreationTimestamp="2025-05-09 00:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:14:41.955483609 +0000 UTC m=+9.128960536" watchObservedRunningTime="2025-05-09 00:14:41.955529048 +0000 UTC m=+9.129005895" May 9 00:14:42.169171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1589400757.mount: Deactivated successfully. May 9 00:14:42.195591 containerd[1431]: time="2025-05-09T00:14:42.195541880Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:42.196572 containerd[1431]: time="2025-05-09T00:14:42.196460157Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" May 9 00:14:42.197192 containerd[1431]: time="2025-05-09T00:14:42.197162435Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:42.199667 containerd[1431]: time="2025-05-09T00:14:42.199619868Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:42.200703 containerd[1431]: time="2025-05-09T00:14:42.200664785Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.247050964s" May 9 00:14:42.200752 containerd[1431]: time="2025-05-09T00:14:42.200702625Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 9 00:14:42.205257 containerd[1431]: time="2025-05-09T00:14:42.205203571Z" level=info msg="CreateContainer within sandbox \"7f9e38aeb58a5cb04164a902305f88edaf53747adb2c00afd63c546b0b79d264\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 9 00:14:42.220444 containerd[1431]: time="2025-05-09T00:14:42.220357526Z" level=info msg="CreateContainer within sandbox \"7f9e38aeb58a5cb04164a902305f88edaf53747adb2c00afd63c546b0b79d264\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"fd95d555028551b73e9bd2330b61f5125c4c08ebc9d80f5e0108bab32d846cd7\"" May 9 00:14:42.221126 containerd[1431]: time="2025-05-09T00:14:42.220984764Z" level=info msg="StartContainer for 
\"fd95d555028551b73e9bd2330b61f5125c4c08ebc9d80f5e0108bab32d846cd7\"" May 9 00:14:42.253339 systemd[1]: Started cri-containerd-fd95d555028551b73e9bd2330b61f5125c4c08ebc9d80f5e0108bab32d846cd7.scope - libcontainer container fd95d555028551b73e9bd2330b61f5125c4c08ebc9d80f5e0108bab32d846cd7. May 9 00:14:42.276469 containerd[1431]: time="2025-05-09T00:14:42.276325758Z" level=info msg="StartContainer for \"fd95d555028551b73e9bd2330b61f5125c4c08ebc9d80f5e0108bab32d846cd7\" returns successfully" May 9 00:14:42.286200 systemd[1]: cri-containerd-fd95d555028551b73e9bd2330b61f5125c4c08ebc9d80f5e0108bab32d846cd7.scope: Deactivated successfully. May 9 00:14:42.320958 containerd[1431]: time="2025-05-09T00:14:42.320869785Z" level=info msg="shim disconnected" id=fd95d555028551b73e9bd2330b61f5125c4c08ebc9d80f5e0108bab32d846cd7 namespace=k8s.io May 9 00:14:42.320958 containerd[1431]: time="2025-05-09T00:14:42.320956064Z" level=warning msg="cleaning up after shim disconnected" id=fd95d555028551b73e9bd2330b61f5125c4c08ebc9d80f5e0108bab32d846cd7 namespace=k8s.io May 9 00:14:42.320958 containerd[1431]: time="2025-05-09T00:14:42.320965984Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:14:42.330161 containerd[1431]: time="2025-05-09T00:14:42.330113757Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:14:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 00:14:42.577179 kubelet[2424]: E0509 00:14:42.577049 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:42.951054 kubelet[2424]: E0509 00:14:42.950594 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:42.951054 kubelet[2424]: E0509 00:14:42.950737 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:42.952274 kubelet[2424]: E0509 00:14:42.951224 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:42.955078 containerd[1431]: time="2025-05-09T00:14:42.954821565Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 9 00:14:43.951578 kubelet[2424]: E0509 00:14:43.951546 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:44.202771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2282131538.mount: Deactivated successfully. 
May 9 00:14:45.153097 containerd[1431]: time="2025-05-09T00:14:45.153047688Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:45.154140 containerd[1431]: time="2025-05-09T00:14:45.153832406Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260" May 9 00:14:45.154841 containerd[1431]: time="2025-05-09T00:14:45.154807284Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:45.159081 containerd[1431]: time="2025-05-09T00:14:45.159043073Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:14:45.160855 containerd[1431]: time="2025-05-09T00:14:45.160174711Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.205308226s" May 9 00:14:45.160855 containerd[1431]: time="2025-05-09T00:14:45.160214830Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 9 00:14:45.162455 containerd[1431]: time="2025-05-09T00:14:45.162313905Z" level=info msg="CreateContainer within sandbox \"7f9e38aeb58a5cb04164a902305f88edaf53747adb2c00afd63c546b0b79d264\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 00:14:45.173806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149515995.mount: Deactivated successfully. May 9 00:14:45.175530 containerd[1431]: time="2025-05-09T00:14:45.175474113Z" level=info msg="CreateContainer within sandbox \"7f9e38aeb58a5cb04164a902305f88edaf53747adb2c00afd63c546b0b79d264\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0c66bb5f416741a880eb821111a06d90cdaf449e81a1235c8fadbbfdbd1a19fd\"" May 9 00:14:45.175854 containerd[1431]: time="2025-05-09T00:14:45.175828912Z" level=info msg="StartContainer for \"0c66bb5f416741a880eb821111a06d90cdaf449e81a1235c8fadbbfdbd1a19fd\"" May 9 00:14:45.204304 systemd[1]: Started cri-containerd-0c66bb5f416741a880eb821111a06d90cdaf449e81a1235c8fadbbfdbd1a19fd.scope - libcontainer container 0c66bb5f416741a880eb821111a06d90cdaf449e81a1235c8fadbbfdbd1a19fd. May 9 00:14:45.231285 containerd[1431]: time="2025-05-09T00:14:45.230389577Z" level=info msg="StartContainer for \"0c66bb5f416741a880eb821111a06d90cdaf449e81a1235c8fadbbfdbd1a19fd\" returns successfully" May 9 00:14:45.231845 systemd[1]: cri-containerd-0c66bb5f416741a880eb821111a06d90cdaf449e81a1235c8fadbbfdbd1a19fd.scope: Deactivated successfully. May 9 00:14:45.250659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c66bb5f416741a880eb821111a06d90cdaf449e81a1235c8fadbbfdbd1a19fd-rootfs.mount: Deactivated successfully. 
May 9 00:14:45.274972 kubelet[2424]: I0509 00:14:45.274760 2424 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 9 00:14:45.308160 kubelet[2424]: E0509 00:14:45.307583 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:45.316594 systemd[1]: Created slice kubepods-burstable-podda8f1306_56ee_4c50_9fa5_cbe13b85b977.slice - libcontainer container kubepods-burstable-podda8f1306_56ee_4c50_9fa5_cbe13b85b977.slice. May 9 00:14:45.321788 systemd[1]: Created slice kubepods-burstable-pode5f66e89_984d_4350_89b1_c40e705a6303.slice - libcontainer container kubepods-burstable-pode5f66e89_984d_4350_89b1_c40e705a6303.slice. May 9 00:14:45.364373 containerd[1431]: time="2025-05-09T00:14:45.364179007Z" level=info msg="shim disconnected" id=0c66bb5f416741a880eb821111a06d90cdaf449e81a1235c8fadbbfdbd1a19fd namespace=k8s.io May 9 00:14:45.364373 containerd[1431]: time="2025-05-09T00:14:45.364252887Z" level=warning msg="cleaning up after shim disconnected" id=0c66bb5f416741a880eb821111a06d90cdaf449e81a1235c8fadbbfdbd1a19fd namespace=k8s.io May 9 00:14:45.364373 containerd[1431]: time="2025-05-09T00:14:45.364261487Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:14:45.475923 kubelet[2424]: I0509 00:14:45.475725 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxsxn\" (UniqueName: \"kubernetes.io/projected/da8f1306-56ee-4c50-9fa5-cbe13b85b977-kube-api-access-lxsxn\") pod \"coredns-6f6b679f8f-n6sxn\" (UID: \"da8f1306-56ee-4c50-9fa5-cbe13b85b977\") " pod="kube-system/coredns-6f6b679f8f-n6sxn" May 9 00:14:45.475923 kubelet[2424]: I0509 00:14:45.475774 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f66e89-984d-4350-89b1-c40e705a6303-config-volume\") pod \"coredns-6f6b679f8f-vwrz7\" (UID: \"e5f66e89-984d-4350-89b1-c40e705a6303\") " pod="kube-system/coredns-6f6b679f8f-vwrz7" May 9 00:14:45.475923 kubelet[2424]: I0509 00:14:45.475794 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da8f1306-56ee-4c50-9fa5-cbe13b85b977-config-volume\") pod \"coredns-6f6b679f8f-n6sxn\" (UID: \"da8f1306-56ee-4c50-9fa5-cbe13b85b977\") " pod="kube-system/coredns-6f6b679f8f-n6sxn" May 9 00:14:45.475923 kubelet[2424]: I0509 00:14:45.475811 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tf9c\" (UniqueName: \"kubernetes.io/projected/e5f66e89-984d-4350-89b1-c40e705a6303-kube-api-access-6tf9c\") pod \"coredns-6f6b679f8f-vwrz7\" (UID: \"e5f66e89-984d-4350-89b1-c40e705a6303\") " pod="kube-system/coredns-6f6b679f8f-vwrz7" May 9 00:14:45.620212 kubelet[2424]: E0509 00:14:45.619672 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:45.621090 containerd[1431]: time="2025-05-09T00:14:45.621048253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-n6sxn,Uid:da8f1306-56ee-4c50-9fa5-cbe13b85b977,Namespace:kube-system,Attempt:0,}" May 9 00:14:45.625611 kubelet[2424]: E0509 00:14:45.625338 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:45.625911 containerd[1431]: time="2025-05-09T00:14:45.625861881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vwrz7,Uid:e5f66e89-984d-4350-89b1-c40e705a6303,Namespace:kube-system,Attempt:0,}" May 9 00:14:45.960907 kubelet[2424]: E0509 00:14:45.960642 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:45.963176 containerd[1431]: time="2025-05-09T00:14:45.963044208Z" level=info msg="CreateContainer within sandbox \"7f9e38aeb58a5cb04164a902305f88edaf53747adb2c00afd63c546b0b79d264\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 9 00:14:45.988013 containerd[1431]: time="2025-05-09T00:14:45.987960107Z" level=info msg="CreateContainer within sandbox \"7f9e38aeb58a5cb04164a902305f88edaf53747adb2c00afd63c546b0b79d264\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"c0c2a47ef7131e85776d74feaaa404ce90fe5b5908577c6ef758a46b8bf58ed4\"" May 9 00:14:45.989075 containerd[1431]: time="2025-05-09T00:14:45.989028464Z" level=info msg="StartContainer for \"c0c2a47ef7131e85776d74feaaa404ce90fe5b5908577c6ef758a46b8bf58ed4\"" May 9 00:14:45.996783 containerd[1431]: time="2025-05-09T00:14:45.996717605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-n6sxn,Uid:da8f1306-56ee-4c50-9fa5-cbe13b85b977,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f737cb347c383a75290aaf0322ce56dd7b764be02f5b7a28377b96e30f8c0e84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:14:45.997305 kubelet[2424]: E0509 00:14:45.997265 2424 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f737cb347c383a75290aaf0322ce56dd7b764be02f5b7a28377b96e30f8c0e84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:14:45.997436 kubelet[2424]: E0509 00:14:45.997333 2424 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f737cb347c383a75290aaf0322ce56dd7b764be02f5b7a28377b96e30f8c0e84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-n6sxn" May 9 00:14:45.997436 kubelet[2424]: E0509 00:14:45.997352 2424 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f737cb347c383a75290aaf0322ce56dd7b764be02f5b7a28377b96e30f8c0e84\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-n6sxn" May 9 00:14:45.997436 kubelet[2424]: E0509 00:14:45.997393 2424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-n6sxn_kube-system(da8f1306-56ee-4c50-9fa5-cbe13b85b977)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-n6sxn_kube-system(da8f1306-56ee-4c50-9fa5-cbe13b85b977)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"f737cb347c383a75290aaf0322ce56dd7b764be02f5b7a28377b96e30f8c0e84\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-n6sxn" podUID="da8f1306-56ee-4c50-9fa5-cbe13b85b977" May 9 00:14:45.997850 containerd[1431]: time="2025-05-09T00:14:45.997339364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vwrz7,Uid:e5f66e89-984d-4350-89b1-c40e705a6303,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72fe7a9da874c4ef8659f426636c963ae6feb9ffb9b7ed6f5e5deeb17a481459\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:14:45.997988 kubelet[2424]: E0509 00:14:45.997522 2424 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72fe7a9da874c4ef8659f426636c963ae6feb9ffb9b7ed6f5e5deeb17a481459\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:14:45.997988 kubelet[2424]: E0509 00:14:45.997569 2424 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72fe7a9da874c4ef8659f426636c963ae6feb9ffb9b7ed6f5e5deeb17a481459\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-vwrz7" May 9 00:14:45.997988 kubelet[2424]: E0509 00:14:45.997585 2424 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72fe7a9da874c4ef8659f426636c963ae6feb9ffb9b7ed6f5e5deeb17a481459\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-vwrz7" May 9 00:14:45.997988 kubelet[2424]: E0509 00:14:45.997898 2424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vwrz7_kube-system(e5f66e89-984d-4350-89b1-c40e705a6303)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vwrz7_kube-system(e5f66e89-984d-4350-89b1-c40e705a6303)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72fe7a9da874c4ef8659f426636c963ae6feb9ffb9b7ed6f5e5deeb17a481459\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-vwrz7" podUID="e5f66e89-984d-4350-89b1-c40e705a6303" May 9 00:14:46.022291 systemd[1]: Started cri-containerd-c0c2a47ef7131e85776d74feaaa404ce90fe5b5908577c6ef758a46b8bf58ed4.scope - libcontainer container c0c2a47ef7131e85776d74feaaa404ce90fe5b5908577c6ef758a46b8bf58ed4. May 9 00:14:46.044889 containerd[1431]: time="2025-05-09T00:14:46.044847173Z" level=info msg="StartContainer for \"c0c2a47ef7131e85776d74feaaa404ce90fe5b5908577c6ef758a46b8bf58ed4\" returns successfully" May 9 00:14:46.469161 update_engine[1419]: I20250509 00:14:46.468645 1419 update_attempter.cc:509] Updating boot flags... 
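
Both RunPodSandbox failures above come down to the same cause: the flannel CNI plugin could not read /run/flannel/subnet.env, which the kube-flannel container (started right afterwards) is responsible for writing; the coredns sandboxes are retried successfully later in this log. Below is a minimal Go sketch of reading that file, assuming the usual KEY=value layout; the keys and sample values in the comment are illustrative defaults, not taken from this host.

    package main

    import (
    	"bufio"
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    // loadSubnetEnv reads the KEY=value pairs flanneld writes for its CNI plugin.
    // Illustrative contents (values assumed, not copied from this node):
    //   FLANNEL_NETWORK=192.168.0.0/17
    //   FLANNEL_SUBNET=192.168.0.1/24
    //   FLANNEL_MTU=1450
    //   FLANNEL_IPMASQ=true
    func loadSubnetEnv(path string) (map[string]string, error) {
    	f, err := os.Open(path) // fails with ENOENT until kube-flannel has written the file
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close()

    	env := make(map[string]string)
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		if k, v, ok := strings.Cut(line, "="); ok {
    			env[k] = v
    		}
    	}
    	return env, sc.Err()
    }

    func main() {
    	env, err := loadSubnetEnv("/run/flannel/subnet.env")
    	if err != nil {
    		log.Fatal(err) // same failure mode the sandbox errors above report
    	}
    	fmt.Println("node subnet:", env["FLANNEL_SUBNET"], "mtu:", env["FLANNEL_MTU"])
    }
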
May 9 00:14:46.504759 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2994) May 9 00:14:46.526822 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2996) May 9 00:14:46.557140 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2996) May 9 00:14:46.963393 kubelet[2424]: E0509 00:14:46.963281 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:47.119183 systemd-networkd[1378]: flannel.1: Link UP May 9 00:14:47.119191 systemd-networkd[1378]: flannel.1: Gained carrier May 9 00:14:47.964687 kubelet[2424]: E0509 00:14:47.964657 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:49.027267 systemd-networkd[1378]: flannel.1: Gained IPv6LL May 9 00:14:58.908520 kubelet[2424]: E0509 00:14:58.908477 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:58.909415 containerd[1431]: time="2025-05-09T00:14:58.909133501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vwrz7,Uid:e5f66e89-984d-4350-89b1-c40e705a6303,Namespace:kube-system,Attempt:0,}" May 9 00:14:58.929996 systemd-networkd[1378]: cni0: Link UP May 9 00:14:58.930003 systemd-networkd[1378]: cni0: Gained carrier May 9 00:14:58.930731 systemd-networkd[1378]: cni0: Lost carrier May 9 00:14:58.936745 systemd-networkd[1378]: vethbdea888d: Link UP May 9 00:14:58.939762 kernel: cni0: port 1(vethbdea888d) entered blocking state May 9 00:14:58.939846 kernel: cni0: port 1(vethbdea888d) entered disabled state May 9 00:14:58.942440 kernel: vethbdea888d: entered allmulticast mode May 9 00:14:58.943135 kernel: vethbdea888d: entered promiscuous mode May 9 00:14:58.943197 kernel: cni0: port 1(vethbdea888d) entered blocking state May 9 00:14:58.945341 kernel: cni0: port 1(vethbdea888d) entered forwarding state May 9 00:14:58.945401 kernel: cni0: port 1(vethbdea888d) entered disabled state May 9 00:14:58.956132 kernel: cni0: port 1(vethbdea888d) entered blocking state May 9 00:14:58.956225 kernel: cni0: port 1(vethbdea888d) entered forwarding state May 9 00:14:58.956567 systemd-networkd[1378]: vethbdea888d: Gained carrier May 9 00:14:58.958080 systemd-networkd[1378]: cni0: Gained carrier May 9 00:14:58.960188 containerd[1431]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} May 9 00:14:58.960188 containerd[1431]: delegateAdd: netconf sent to delegate plugin: May 9 00:14:58.983600 containerd[1431]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-09T00:14:58.983496941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:14:58.983600 containerd[1431]: time="2025-05-09T00:14:58.983567901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:14:58.983600 containerd[1431]: time="2025-05-09T00:14:58.983583221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:58.983831 containerd[1431]: time="2025-05-09T00:14:58.983777341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:59.006337 systemd[1]: Started cri-containerd-2fceb580d3ec794dae4b894d8317fd5247b274c9704b62ffb3767ba53638e983.scope - libcontainer container 2fceb580d3ec794dae4b894d8317fd5247b274c9704b62ffb3767ba53638e983. May 9 00:14:59.018961 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:14:59.041652 containerd[1431]: time="2025-05-09T00:14:59.041602762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vwrz7,Uid:e5f66e89-984d-4350-89b1-c40e705a6303,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fceb580d3ec794dae4b894d8317fd5247b274c9704b62ffb3767ba53638e983\"" May 9 00:14:59.042993 kubelet[2424]: E0509 00:14:59.042518 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:59.044876 containerd[1431]: time="2025-05-09T00:14:59.044824279Z" level=info msg="CreateContainer within sandbox \"2fceb580d3ec794dae4b894d8317fd5247b274c9704b62ffb3767ba53638e983\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:14:59.061141 containerd[1431]: time="2025-05-09T00:14:59.061071463Z" level=info msg="CreateContainer within sandbox \"2fceb580d3ec794dae4b894d8317fd5247b274c9704b62ffb3767ba53638e983\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6608d059407ab937060ceb138461ce3261cde4a236115eca34f8d0919fa2624\"" May 9 00:14:59.061650 containerd[1431]: time="2025-05-09T00:14:59.061620222Z" level=info msg="StartContainer for \"b6608d059407ab937060ceb138461ce3261cde4a236115eca34f8d0919fa2624\"" May 9 00:14:59.096325 systemd[1]: Started cri-containerd-b6608d059407ab937060ceb138461ce3261cde4a236115eca34f8d0919fa2624.scope - libcontainer container b6608d059407ab937060ceb138461ce3261cde4a236115eca34f8d0919fa2624. 
May 9 00:14:59.119762 containerd[1431]: time="2025-05-09T00:14:59.119708564Z" level=info msg="StartContainer for \"b6608d059407ab937060ceb138461ce3261cde4a236115eca34f8d0919fa2624\" returns successfully" May 9 00:14:59.908961 kubelet[2424]: E0509 00:14:59.908774 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:59.909883 containerd[1431]: time="2025-05-09T00:14:59.909169094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-n6sxn,Uid:da8f1306-56ee-4c50-9fa5-cbe13b85b977,Namespace:kube-system,Attempt:0,}" May 9 00:14:59.917642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076146187.mount: Deactivated successfully. May 9 00:14:59.927655 systemd-networkd[1378]: vethe6d8e86c: Link UP May 9 00:14:59.929705 kernel: cni0: port 2(vethe6d8e86c) entered blocking state May 9 00:14:59.929887 kernel: cni0: port 2(vethe6d8e86c) entered disabled state May 9 00:14:59.929927 kernel: vethe6d8e86c: entered allmulticast mode May 9 00:14:59.929947 kernel: vethe6d8e86c: entered promiscuous mode May 9 00:14:59.930410 kernel: cni0: port 2(vethe6d8e86c) entered blocking state May 9 00:14:59.933007 kernel: cni0: port 2(vethe6d8e86c) entered forwarding state May 9 00:14:59.933065 kernel: cni0: port 2(vethe6d8e86c) entered disabled state May 9 00:14:59.939052 systemd-networkd[1378]: vethe6d8e86c: Gained carrier May 9 00:14:59.939321 kernel: cni0: port 2(vethe6d8e86c) entered blocking state May 9 00:14:59.939358 kernel: cni0: port 2(vethe6d8e86c) entered forwarding state May 9 00:14:59.941295 containerd[1431]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} May 9 00:14:59.941295 containerd[1431]: delegateAdd: netconf sent to delegate plugin: May 9 00:14:59.956452 containerd[1431]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-09T00:14:59.956223527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:14:59.956452 containerd[1431]: time="2025-05-09T00:14:59.956288487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:14:59.956452 containerd[1431]: time="2025-05-09T00:14:59.956299687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:59.956452 containerd[1431]: time="2025-05-09T00:14:59.956383327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:14:59.979274 systemd[1]: Started cri-containerd-8be4af36e5d90bad7427cebea25b63366ddf684a8853cf4ff3d1019fdc245754.scope - libcontainer container 8be4af36e5d90bad7427cebea25b63366ddf684a8853cf4ff3d1019fdc245754. May 9 00:14:59.984705 kubelet[2424]: E0509 00:14:59.984670 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:14:59.992236 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:14:59.994620 kubelet[2424]: I0509 00:14:59.994555 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-r966s" podStartSLOduration=15.786192445 podStartE2EDuration="19.994540289s" podCreationTimestamp="2025-05-09 00:14:40 +0000 UTC" firstStartedPulling="2025-05-09 00:14:40.952555665 +0000 UTC m=+8.126032512" lastFinishedPulling="2025-05-09 00:14:45.160903509 +0000 UTC m=+12.334380356" observedRunningTime="2025-05-09 00:14:46.976710376 +0000 UTC m=+14.150187223" watchObservedRunningTime="2025-05-09 00:14:59.994540289 +0000 UTC m=+27.168017136" May 9 00:15:00.010086 kubelet[2424]: I0509 00:15:00.009885 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vwrz7" podStartSLOduration=20.009848834 podStartE2EDuration="20.009848834s" podCreationTimestamp="2025-05-09 00:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:14:59.995050728 +0000 UTC m=+27.168527575" watchObservedRunningTime="2025-05-09 00:15:00.009848834 +0000 UTC m=+27.183325681" May 9 00:15:00.019972 containerd[1431]: time="2025-05-09T00:15:00.019892785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-n6sxn,Uid:da8f1306-56ee-4c50-9fa5-cbe13b85b977,Namespace:kube-system,Attempt:0,} returns sandbox id \"8be4af36e5d90bad7427cebea25b63366ddf684a8853cf4ff3d1019fdc245754\"" May 9 00:15:00.020902 kubelet[2424]: E0509 00:15:00.020874 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:15:00.023606 containerd[1431]: time="2025-05-09T00:15:00.023570821Z" level=info msg="CreateContainer within sandbox \"8be4af36e5d90bad7427cebea25b63366ddf684a8853cf4ff3d1019fdc245754\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:15:00.042880 containerd[1431]: time="2025-05-09T00:15:00.042837003Z" level=info msg="CreateContainer within sandbox \"8be4af36e5d90bad7427cebea25b63366ddf684a8853cf4ff3d1019fdc245754\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"237fed4a93446079804cdd8e6850cbd93614b05b2c434534ab05682c2209ef7a\"" May 9 00:15:00.049082 containerd[1431]: time="2025-05-09T00:15:00.049004677Z" level=info msg="StartContainer for \"237fed4a93446079804cdd8e6850cbd93614b05b2c434534ab05682c2209ef7a\"" May 9 00:15:00.080323 systemd[1]: Started cri-containerd-237fed4a93446079804cdd8e6850cbd93614b05b2c434534ab05682c2209ef7a.scope - libcontainer container 237fed4a93446079804cdd8e6850cbd93614b05b2c434534ab05682c2209ef7a. 
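
The pod_startup_latency_tracker entries are internally consistent: for kube-flannel-ds-r966s, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (19.994540289s), and podStartSLOduration is that figure minus the image-pull window lastFinishedPulling minus firstStartedPulling (about 4.208347844s), giving 15.786192445s. The sketch below reproduces the arithmetic from the timestamps quoted in the log; the exclusion of pull time is inferred from these numbers rather than quoted from kubelet source.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}

    	// Timestamps copied from the kube-flannel-ds-r966s entry above.
    	created := parse("2025-05-09 00:14:40 +0000 UTC")
    	firstPull := parse("2025-05-09 00:14:40.952555665 +0000 UTC")
    	lastPull := parse("2025-05-09 00:14:45.160903509 +0000 UTC")
    	watched := parse("2025-05-09 00:14:59.994540289 +0000 UTC")

    	e2e := watched.Sub(created)          // podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration, excluding image pulls

    	fmt.Println("E2E:", e2e) // 19.994540289s
    	fmt.Println("SLO:", slo) // 15.786192445s
    }
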
May 9 00:15:00.103509 containerd[1431]: time="2025-05-09T00:15:00.103320986Z" level=info msg="StartContainer for \"237fed4a93446079804cdd8e6850cbd93614b05b2c434534ab05682c2209ef7a\" returns successfully" May 9 00:15:00.163307 systemd-networkd[1378]: vethbdea888d: Gained IPv6LL May 9 00:15:00.803370 systemd-networkd[1378]: cni0: Gained IPv6LL May 9 00:15:00.987859 kubelet[2424]: E0509 00:15:00.987360 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:15:00.987859 kubelet[2424]: E0509 00:15:00.987437 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:15:00.998939 kubelet[2424]: I0509 00:15:00.998812 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-n6sxn" podStartSLOduration=20.998789787 podStartE2EDuration="20.998789787s" podCreationTimestamp="2025-05-09 00:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:15:00.997941747 +0000 UTC m=+28.171418594" watchObservedRunningTime="2025-05-09 00:15:00.998789787 +0000 UTC m=+28.172266634" May 9 00:15:01.045078 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:47208.service - OpenSSH per-connection server daemon (10.0.0.1:47208). May 9 00:15:01.085182 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 47208 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:01.087032 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:01.091183 systemd-logind[1418]: New session 6 of user core. May 9 00:15:01.100298 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:15:01.220642 sshd[3362]: pam_unix(sshd:session): session closed for user core May 9 00:15:01.224336 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:47208.service: Deactivated successfully. May 9 00:15:01.226025 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:15:01.226813 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. May 9 00:15:01.227926 systemd-logind[1418]: Removed session 6. May 9 00:15:01.571276 systemd-networkd[1378]: vethe6d8e86c: Gained IPv6LL May 9 00:15:01.988948 kubelet[2424]: E0509 00:15:01.988831 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:15:01.989525 kubelet[2424]: E0509 00:15:01.989460 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:15:02.990278 kubelet[2424]: E0509 00:15:02.990237 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:15:06.231754 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:53036.service - OpenSSH per-connection server daemon (10.0.0.1:53036). 
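
The recurring dns.go "Nameserver limits exceeded" events mean the host resolv.conf lists more nameservers than the kubelet will pass through to pods; only the first few are kept, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". Below is a rough sketch of that truncation, simplified and not the kubelet's own code; the three-server limit matches the applied line in this log, and the fourth server in the example input is hypothetical.

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    const maxNameservers = 3 // matches the three servers kept in the log above

    // applyNameserverLimit keeps only the first maxNameservers "nameserver" lines,
    // mirroring (in simplified form) the truncation the kubelet warns about.
    func applyNameserverLimit(resolvConf string) (kept, dropped []string) {
    	sc := bufio.NewScanner(strings.NewReader(resolvConf))
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) < 2 || fields[0] != "nameserver" {
    			continue
    		}
    		if len(kept) < maxNameservers {
    			kept = append(kept, fields[1])
    		} else {
    			dropped = append(dropped, fields[1])
    		}
    	}
    	return kept, dropped
    }

    func main() {
    	// Illustrative host resolv.conf with one nameserver too many (8.8.4.4 is made up).
    	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    	kept, dropped := applyNameserverLimit(conf)
    	fmt.Println("applied:", strings.Join(kept, " ")) // applied: 1.1.1.1 1.0.0.1 8.8.8.8
    	fmt.Println("omitted:", strings.Join(dropped, " "))
    }
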
May 9 00:15:06.269160 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 53036 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:06.270634 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:06.274991 systemd-logind[1418]: New session 7 of user core. May 9 00:15:06.280266 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:15:06.385396 sshd[3406]: pam_unix(sshd:session): session closed for user core May 9 00:15:06.388614 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:53036.service: Deactivated successfully. May 9 00:15:06.390278 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:15:06.390867 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. May 9 00:15:06.391761 systemd-logind[1418]: Removed session 7. May 9 00:15:11.401990 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:53046.service - OpenSSH per-connection server daemon (10.0.0.1:53046). May 9 00:15:11.448067 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 53046 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:11.450036 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:11.454695 systemd-logind[1418]: New session 8 of user core. May 9 00:15:11.462273 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 00:15:11.590501 sshd[3444]: pam_unix(sshd:session): session closed for user core May 9 00:15:11.605970 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:53046.service: Deactivated successfully. May 9 00:15:11.607713 systemd[1]: session-8.scope: Deactivated successfully. May 9 00:15:11.609193 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. May 9 00:15:11.624783 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:53050.service - OpenSSH per-connection server daemon (10.0.0.1:53050). May 9 00:15:11.626240 systemd-logind[1418]: Removed session 8. May 9 00:15:11.663053 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 53050 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:11.664767 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:11.669059 systemd-logind[1418]: New session 9 of user core. May 9 00:15:11.678251 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 00:15:11.850795 sshd[3459]: pam_unix(sshd:session): session closed for user core May 9 00:15:11.861678 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:53050.service: Deactivated successfully. May 9 00:15:11.865174 systemd[1]: session-9.scope: Deactivated successfully. May 9 00:15:11.866145 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit. May 9 00:15:11.874447 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:53058.service - OpenSSH per-connection server daemon (10.0.0.1:53058). May 9 00:15:11.876725 systemd-logind[1418]: Removed session 9. May 9 00:15:11.909939 sshd[3471]: Accepted publickey for core from 10.0.0.1 port 53058 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:11.911272 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:11.915224 systemd-logind[1418]: New session 10 of user core. May 9 00:15:11.921256 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 9 00:15:12.031908 sshd[3471]: pam_unix(sshd:session): session closed for user core May 9 00:15:12.035871 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:53058.service: Deactivated successfully. May 9 00:15:12.039078 systemd[1]: session-10.scope: Deactivated successfully. May 9 00:15:12.039968 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit. May 9 00:15:12.040859 systemd-logind[1418]: Removed session 10. May 9 00:15:17.063452 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:39268.service - OpenSSH per-connection server daemon (10.0.0.1:39268). May 9 00:15:17.098173 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 39268 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:17.098854 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:17.103077 systemd-logind[1418]: New session 11 of user core. May 9 00:15:17.114342 systemd[1]: Started session-11.scope - Session 11 of User core. May 9 00:15:17.234687 sshd[3508]: pam_unix(sshd:session): session closed for user core May 9 00:15:17.242231 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:39268.service: Deactivated successfully. May 9 00:15:17.244185 systemd[1]: session-11.scope: Deactivated successfully. May 9 00:15:17.247495 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit. May 9 00:15:17.263456 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:39272.service - OpenSSH per-connection server daemon (10.0.0.1:39272). May 9 00:15:17.264599 systemd-logind[1418]: Removed session 11. May 9 00:15:17.296845 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 39272 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:17.298863 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:17.302704 systemd-logind[1418]: New session 12 of user core. May 9 00:15:17.318281 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 00:15:17.493872 sshd[3528]: pam_unix(sshd:session): session closed for user core May 9 00:15:17.503621 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:39272.service: Deactivated successfully. May 9 00:15:17.505406 systemd[1]: session-12.scope: Deactivated successfully. May 9 00:15:17.507181 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit. May 9 00:15:17.523421 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:39276.service - OpenSSH per-connection server daemon (10.0.0.1:39276). May 9 00:15:17.525156 systemd-logind[1418]: Removed session 12. May 9 00:15:17.557456 sshd[3555]: Accepted publickey for core from 10.0.0.1 port 39276 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:17.558663 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:17.562261 systemd-logind[1418]: New session 13 of user core. May 9 00:15:17.573249 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 00:15:18.747252 sshd[3555]: pam_unix(sshd:session): session closed for user core May 9 00:15:18.757843 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:39276.service: Deactivated successfully. May 9 00:15:18.762777 systemd[1]: session-13.scope: Deactivated successfully. May 9 00:15:18.764739 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit. May 9 00:15:18.769372 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:39290.service - OpenSSH per-connection server daemon (10.0.0.1:39290). May 9 00:15:18.773732 systemd-logind[1418]: Removed session 13. 
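
The sshd@5-10.0.0.91:22-10.0.0.1:47208.service style names above are systemd's per-connection sshd instances: a connection counter, the local endpoint, and the remote endpoint, which lines up with the matching "Accepted publickey ... from 10.0.0.1 port 47208" entry. A tiny sketch that splits such a unit name apart; the parsing is illustrative, not how systemd itself handles instance names.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitInstance pulls the connection counter and both endpoints out of a
    // per-connection unit name such as "sshd@5-10.0.0.91:22-10.0.0.1:47208.service",
    // following the counter-local-remote pattern visible in the journal above.
    func splitInstance(unit string) (counter, local, remote string, err error) {
    	instance := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
    	parts := strings.Split(instance, "-")
    	if len(parts) != 3 {
    		return "", "", "", fmt.Errorf("unexpected instance %q", instance)
    	}
    	return parts[0], parts[1], parts[2], nil
    }

    func main() {
    	counter, local, remote, err := splitInstance("sshd@5-10.0.0.91:22-10.0.0.1:47208.service")
    	if err != nil {
    		panic(err)
    	}
    	// counter 5, local 10.0.0.91:22, remote 10.0.0.1:47208; the remote endpoint
    	// is the same one the corresponding "Accepted publickey" line reports.
    	fmt.Println(counter, local, remote)
    }
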
May 9 00:15:18.824098 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 39290 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:18.825359 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:18.829443 systemd-logind[1418]: New session 14 of user core. May 9 00:15:18.841277 systemd[1]: Started session-14.scope - Session 14 of User core. May 9 00:15:19.056768 sshd[3575]: pam_unix(sshd:session): session closed for user core May 9 00:15:19.066247 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:39290.service: Deactivated successfully. May 9 00:15:19.067717 systemd[1]: session-14.scope: Deactivated successfully. May 9 00:15:19.069061 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit. May 9 00:15:19.074371 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:39298.service - OpenSSH per-connection server daemon (10.0.0.1:39298). May 9 00:15:19.075292 systemd-logind[1418]: Removed session 14. May 9 00:15:19.108112 sshd[3587]: Accepted publickey for core from 10.0.0.1 port 39298 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:19.109430 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:19.113602 systemd-logind[1418]: New session 15 of user core. May 9 00:15:19.126291 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 00:15:19.232557 sshd[3587]: pam_unix(sshd:session): session closed for user core May 9 00:15:19.235682 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:39298.service: Deactivated successfully. May 9 00:15:19.237393 systemd[1]: session-15.scope: Deactivated successfully. May 9 00:15:19.239262 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit. May 9 00:15:19.240170 systemd-logind[1418]: Removed session 15. May 9 00:15:24.242652 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:60484.service - OpenSSH per-connection server daemon (10.0.0.1:60484). May 9 00:15:24.279725 sshd[3625]: Accepted publickey for core from 10.0.0.1 port 60484 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:24.280945 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:24.284536 systemd-logind[1418]: New session 16 of user core. May 9 00:15:24.294252 systemd[1]: Started session-16.scope - Session 16 of User core. May 9 00:15:24.400172 sshd[3625]: pam_unix(sshd:session): session closed for user core May 9 00:15:24.403747 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:60484.service: Deactivated successfully. May 9 00:15:24.406957 systemd[1]: session-16.scope: Deactivated successfully. May 9 00:15:24.407649 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit. May 9 00:15:24.408556 systemd-logind[1418]: Removed session 16. May 9 00:15:29.412977 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:60492.service - OpenSSH per-connection server daemon (10.0.0.1:60492). May 9 00:15:29.450186 sshd[3661]: Accepted publickey for core from 10.0.0.1 port 60492 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:29.452635 sshd[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:29.456405 systemd-logind[1418]: New session 17 of user core. May 9 00:15:29.464378 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 9 00:15:29.582320 sshd[3661]: pam_unix(sshd:session): session closed for user core May 9 00:15:29.585953 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:60492.service: Deactivated successfully. May 9 00:15:29.587703 systemd[1]: session-17.scope: Deactivated successfully. May 9 00:15:29.589262 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit. May 9 00:15:29.590180 systemd-logind[1418]: Removed session 17. May 9 00:15:34.593720 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:52882.service - OpenSSH per-connection server daemon (10.0.0.1:52882). May 9 00:15:34.630834 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 52882 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:15:34.632068 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:15:34.635664 systemd-logind[1418]: New session 18 of user core. May 9 00:15:34.646275 systemd[1]: Started session-18.scope - Session 18 of User core. May 9 00:15:34.751018 sshd[3698]: pam_unix(sshd:session): session closed for user core May 9 00:15:34.754425 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:52882.service: Deactivated successfully. May 9 00:15:34.756085 systemd[1]: session-18.scope: Deactivated successfully. May 9 00:15:34.757626 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit. May 9 00:15:34.760538 systemd-logind[1418]: Removed session 18.