May 12 23:41:44.012807 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 12 23:41:44.012828 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon May 12 22:21:23 -00 2025
May 12 23:41:44.012838 kernel: KASLR enabled
May 12 23:41:44.012844 kernel: efi: EFI v2.7 by EDK II
May 12 23:41:44.012849 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 12 23:41:44.012855 kernel: random: crng init done
May 12 23:41:44.012862 kernel: secureboot: Secure boot disabled
May 12 23:41:44.012868 kernel: ACPI: Early table checksum verification disabled
May 12 23:41:44.012874 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 12 23:41:44.012881 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 12 23:41:44.012887 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:41:44.012893 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:41:44.012899 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:41:44.012905 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:41:44.012912 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:41:44.012919 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:41:44.012926 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:41:44.012932 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:41:44.012938 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 12 23:41:44.012944 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 12 23:41:44.012950 kernel: NUMA: Failed to initialise from firmware
May 12 23:41:44.012964 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 12 23:41:44.012971 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
May 12 23:41:44.012977 kernel: Zone ranges:
May 12 23:41:44.012983 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 12 23:41:44.012991 kernel: DMA32 empty
May 12 23:41:44.012997 kernel: Normal empty
May 12 23:41:44.013003 kernel: Movable zone start for each node
May 12 23:41:44.013010 kernel: Early memory node ranges
May 12 23:41:44.013016 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 12 23:41:44.013022 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 12 23:41:44.013028 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 12 23:41:44.013034 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 12 23:41:44.013040 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 12 23:41:44.013046 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 12 23:41:44.013052 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 12 23:41:44.013058 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 12 23:41:44.013066 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 12 23:41:44.013072 kernel: psci: probing for conduit method from ACPI.
May 12 23:41:44.013078 kernel: psci: PSCIv1.1 detected in firmware.
May 12 23:41:44.013087 kernel: psci: Using standard PSCI v0.2 function IDs
May 12 23:41:44.013093 kernel: psci: Trusted OS migration not required
May 12 23:41:44.013100 kernel: psci: SMC Calling Convention v1.1
May 12 23:41:44.013108 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 12 23:41:44.013114 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 12 23:41:44.013121 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 12 23:41:44.013128 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 12 23:41:44.013134 kernel: Detected PIPT I-cache on CPU0
May 12 23:41:44.013141 kernel: CPU features: detected: GIC system register CPU interface
May 12 23:41:44.013147 kernel: CPU features: detected: Hardware dirty bit management
May 12 23:41:44.013154 kernel: CPU features: detected: Spectre-v4
May 12 23:41:44.013160 kernel: CPU features: detected: Spectre-BHB
May 12 23:41:44.013167 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 12 23:41:44.013175 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 12 23:41:44.013182 kernel: CPU features: detected: ARM erratum 1418040
May 12 23:41:44.013188 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 12 23:41:44.013195 kernel: alternatives: applying boot alternatives
May 12 23:41:44.013202 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e3fb02dca379a9c7f05d94ae800dbbcafb80c81ea68c8486d0613b136c5c38d4
May 12 23:41:44.013209 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 12 23:41:44.013216 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 12 23:41:44.013222 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 12 23:41:44.013229 kernel: Fallback order for Node 0: 0
May 12 23:41:44.013236 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 12 23:41:44.013242 kernel: Policy zone: DMA
May 12 23:41:44.013262 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 12 23:41:44.013269 kernel: software IO TLB: area num 4.
May 12 23:41:44.013283 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 12 23:41:44.013291 kernel: Memory: 2386256K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186032K reserved, 0K cma-reserved)
May 12 23:41:44.013298 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 12 23:41:44.013304 kernel: rcu: Preemptible hierarchical RCU implementation.
May 12 23:41:44.013311 kernel: rcu: RCU event tracing is enabled.
May 12 23:41:44.013318 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 12 23:41:44.013325 kernel: Trampoline variant of Tasks RCU enabled.
May 12 23:41:44.013332 kernel: Tracing variant of Tasks RCU enabled.
May 12 23:41:44.013338 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 12 23:41:44.013345 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 12 23:41:44.013354 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 12 23:41:44.013360 kernel: GICv3: 256 SPIs implemented
May 12 23:41:44.013367 kernel: GICv3: 0 Extended SPIs implemented
May 12 23:41:44.013375 kernel: Root IRQ handler: gic_handle_irq
May 12 23:41:44.013381 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 12 23:41:44.013388 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 12 23:41:44.013394 kernel: ITS [mem 0x08080000-0x0809ffff]
May 12 23:41:44.013401 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 12 23:41:44.013408 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 12 23:41:44.013415 kernel: GICv3: using LPI property table @0x00000000400f0000
May 12 23:41:44.013421 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 12 23:41:44.013429 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 12 23:41:44.013436 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 23:41:44.013442 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 12 23:41:44.013449 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 12 23:41:44.013456 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 12 23:41:44.013462 kernel: arm-pv: using stolen time PV
May 12 23:41:44.013470 kernel: Console: colour dummy device 80x25
May 12 23:41:44.013476 kernel: ACPI: Core revision 20230628
May 12 23:41:44.013483 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 12 23:41:44.013490 kernel: pid_max: default: 32768 minimum: 301
May 12 23:41:44.013498 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 12 23:41:44.013505 kernel: landlock: Up and running.
May 12 23:41:44.013512 kernel: SELinux: Initializing.
May 12 23:41:44.013518 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 23:41:44.013525 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 23:41:44.013532 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 12 23:41:44.013539 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 23:41:44.013546 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 23:41:44.013553 kernel: rcu: Hierarchical SRCU implementation.
May 12 23:41:44.013561 kernel: rcu: Max phase no-delay instances is 400.
May 12 23:41:44.013568 kernel: Platform MSI: ITS@0x8080000 domain created
May 12 23:41:44.013575 kernel: PCI/MSI: ITS@0x8080000 domain created
May 12 23:41:44.013581 kernel: Remapping and enabling EFI services.
May 12 23:41:44.013588 kernel: smp: Bringing up secondary CPUs ...
May 12 23:41:44.013595 kernel: Detected PIPT I-cache on CPU1
May 12 23:41:44.013602 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 12 23:41:44.013609 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 12 23:41:44.013615 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 23:41:44.013622 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 12 23:41:44.013630 kernel: Detected PIPT I-cache on CPU2
May 12 23:41:44.013638 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 12 23:41:44.013650 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 12 23:41:44.013659 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 23:41:44.013666 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 12 23:41:44.013673 kernel: Detected PIPT I-cache on CPU3
May 12 23:41:44.013680 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 12 23:41:44.013687 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 12 23:41:44.013695 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 23:41:44.013702 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 12 23:41:44.013711 kernel: smp: Brought up 1 node, 4 CPUs
May 12 23:41:44.013718 kernel: SMP: Total of 4 processors activated.
May 12 23:41:44.013725 kernel: CPU features: detected: 32-bit EL0 Support
May 12 23:41:44.013733 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 12 23:41:44.013740 kernel: CPU features: detected: Common not Private translations
May 12 23:41:44.013747 kernel: CPU features: detected: CRC32 instructions
May 12 23:41:44.013754 kernel: CPU features: detected: Enhanced Virtualization Traps
May 12 23:41:44.013763 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 12 23:41:44.013771 kernel: CPU features: detected: LSE atomic instructions
May 12 23:41:44.013778 kernel: CPU features: detected: Privileged Access Never
May 12 23:41:44.013785 kernel: CPU features: detected: RAS Extension Support
May 12 23:41:44.013793 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 12 23:41:44.013800 kernel: CPU: All CPU(s) started at EL1
May 12 23:41:44.013807 kernel: alternatives: applying system-wide alternatives
May 12 23:41:44.013814 kernel: devtmpfs: initialized
May 12 23:41:44.013822 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 12 23:41:44.013831 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 12 23:41:44.013838 kernel: pinctrl core: initialized pinctrl subsystem
May 12 23:41:44.013845 kernel: SMBIOS 3.0.0 present.
May 12 23:41:44.013853 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 12 23:41:44.013860 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 12 23:41:44.013867 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 12 23:41:44.013875 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 12 23:41:44.013882 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 12 23:41:44.013890 kernel: audit: initializing netlink subsys (disabled)
May 12 23:41:44.013898 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 12 23:41:44.013906 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 12 23:41:44.013913 kernel: cpuidle: using governor menu
May 12 23:41:44.013920 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 12 23:41:44.013928 kernel: ASID allocator initialised with 32768 entries
May 12 23:41:44.013935 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 12 23:41:44.013943 kernel: Serial: AMBA PL011 UART driver
May 12 23:41:44.013952 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 12 23:41:44.013966 kernel: Modules: 0 pages in range for non-PLT usage
May 12 23:41:44.013976 kernel: Modules: 508944 pages in range for PLT usage
May 12 23:41:44.013984 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 12 23:41:44.013991 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 12 23:41:44.013999 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 12 23:41:44.014006 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 12 23:41:44.014014 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 12 23:41:44.014021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 12 23:41:44.014029 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 12 23:41:44.014036 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 12 23:41:44.014045 kernel: ACPI: Added _OSI(Module Device)
May 12 23:41:44.014056 kernel: ACPI: Added _OSI(Processor Device)
May 12 23:41:44.014063 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 12 23:41:44.014071 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 12 23:41:44.014078 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 12 23:41:44.014085 kernel: ACPI: Interpreter enabled
May 12 23:41:44.014092 kernel: ACPI: Using GIC for interrupt routing
May 12 23:41:44.014099 kernel: ACPI: MCFG table detected, 1 entries
May 12 23:41:44.014107 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 12 23:41:44.014116 kernel: printk: console [ttyAMA0] enabled
May 12 23:41:44.014124 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 12 23:41:44.014294 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 12 23:41:44.014383 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 12 23:41:44.014452 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 12 23:41:44.014525 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 12 23:41:44.014590 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 12 23:41:44.014602 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 12 23:41:44.014610 kernel: PCI host bridge to bus 0000:00
May 12 23:41:44.014711 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 12 23:41:44.014774 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 12 23:41:44.014833 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 12 23:41:44.014891 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 12 23:41:44.014980 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 12 23:41:44.015065 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 12 23:41:44.015135 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 12 23:41:44.015206 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 12 23:41:44.015273 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 12 23:41:44.015383 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 12 23:41:44.015449 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 12 23:41:44.015515 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 12 23:41:44.015579 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 12 23:41:44.015637 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 12 23:41:44.015696 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 12 23:41:44.015705 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 12 23:41:44.015712 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 12 23:41:44.015719 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 12 23:41:44.015727 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 12 23:41:44.015734 kernel: iommu: Default domain type: Translated
May 12 23:41:44.015743 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 12 23:41:44.015751 kernel: efivars: Registered efivars operations
May 12 23:41:44.015758 kernel: vgaarb: loaded
May 12 23:41:44.015765 kernel: clocksource: Switched to clocksource arch_sys_counter
May 12 23:41:44.015772 kernel: VFS: Disk quotas dquot_6.6.0
May 12 23:41:44.015780 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 12 23:41:44.015791 kernel: pnp: PnP ACPI init
May 12 23:41:44.015876 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 12 23:41:44.015888 kernel: pnp: PnP ACPI: found 1 devices
May 12 23:41:44.015896 kernel: NET: Registered PF_INET protocol family
May 12 23:41:44.015904 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 12 23:41:44.015912 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 12 23:41:44.015919 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 12 23:41:44.015927 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 12 23:41:44.015934 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 12 23:41:44.015942 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 12 23:41:44.015949 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 23:41:44.015964 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 23:41:44.015972 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 12 23:41:44.015979 kernel: PCI: CLS 0 bytes, default 64
May 12 23:41:44.015986 kernel: kvm [1]: HYP mode not available
May 12 23:41:44.015993 kernel: Initialise system trusted keyrings
May 12 23:41:44.016000 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 12 23:41:44.016007 kernel: Key type asymmetric registered
May 12 23:41:44.016014 kernel: Asymmetric key parser 'x509' registered
May 12 23:41:44.016021 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 12 23:41:44.016031 kernel: io scheduler mq-deadline registered
May 12 23:41:44.016038 kernel: io scheduler kyber registered
May 12 23:41:44.016045 kernel: io scheduler bfq registered
May 12 23:41:44.016052 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 12 23:41:44.016059 kernel: ACPI: button: Power Button [PWRB]
May 12 23:41:44.016067 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 12 23:41:44.016138 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 12 23:41:44.016147 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 12 23:41:44.016155 kernel: thunder_xcv, ver 1.0
May 12 23:41:44.016164 kernel: thunder_bgx, ver 1.0
May 12 23:41:44.016171 kernel: nicpf, ver 1.0
May 12 23:41:44.016178 kernel: nicvf, ver 1.0
May 12 23:41:44.016268 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 12 23:41:44.016344 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-12T23:41:43 UTC (1747093303)
May 12 23:41:44.016362 kernel: hid: raw HID events driver (C) Jiri Kosina
May 12 23:41:44.016370 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 12 23:41:44.016377 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 12 23:41:44.016387 kernel: watchdog: Hard watchdog permanently disabled
May 12 23:41:44.016395 kernel: NET: Registered PF_INET6 protocol family
May 12 23:41:44.016402 kernel: Segment Routing with IPv6
May 12 23:41:44.016409 kernel: In-situ OAM (IOAM) with IPv6
May 12 23:41:44.016416 kernel: NET: Registered PF_PACKET protocol family
May 12 23:41:44.016423 kernel: Key type dns_resolver registered
May 12 23:41:44.016431 kernel: registered taskstats version 1
May 12 23:41:44.016438 kernel: Loading compiled-in X.509 certificates
May 12 23:41:44.016445 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: f172f0fb4eac06c214e4b9ce0f39d6c4075ccc9a'
May 12 23:41:44.016453 kernel: Key type .fscrypt registered
May 12 23:41:44.016461 kernel: Key type fscrypt-provisioning registered
May 12 23:41:44.016468 kernel: ima: No TPM chip found, activating TPM-bypass!
May 12 23:41:44.016475 kernel: ima: Allocated hash algorithm: sha1
May 12 23:41:44.016482 kernel: ima: No architecture policies found
May 12 23:41:44.016489 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 12 23:41:44.016496 kernel: clk: Disabling unused clocks
May 12 23:41:44.016503 kernel: Freeing unused kernel memory: 39744K
May 12 23:41:44.016510 kernel: Run /init as init process
May 12 23:41:44.016519 kernel: with arguments:
May 12 23:41:44.016526 kernel: /init
May 12 23:41:44.016533 kernel: with environment:
May 12 23:41:44.016540 kernel: HOME=/
May 12 23:41:44.016547 kernel: TERM=linux
May 12 23:41:44.016554 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 12 23:41:44.016562 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 12 23:41:44.016572 systemd[1]: Detected virtualization kvm.
May 12 23:41:44.016581 systemd[1]: Detected architecture arm64.
May 12 23:41:44.016588 systemd[1]: Running in initrd.
May 12 23:41:44.016596 systemd[1]: No hostname configured, using default hostname.
May 12 23:41:44.016603 systemd[1]: Hostname set to .
May 12 23:41:44.016611 systemd[1]: Initializing machine ID from VM UUID.
May 12 23:41:44.016618 systemd[1]: Queued start job for default target initrd.target.
May 12 23:41:44.016626 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 23:41:44.016633 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 23:41:44.016643 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 12 23:41:44.016651 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 23:41:44.016659 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 12 23:41:44.016666 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 12 23:41:44.016675 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 12 23:41:44.016683 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 12 23:41:44.016692 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 23:41:44.016700 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 12 23:41:44.016708 systemd[1]: Reached target paths.target - Path Units.
May 12 23:41:44.016715 systemd[1]: Reached target slices.target - Slice Units.
May 12 23:41:44.016723 systemd[1]: Reached target swap.target - Swaps.
May 12 23:41:44.016731 systemd[1]: Reached target timers.target - Timer Units.
May 12 23:41:44.016738 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 12 23:41:44.016746 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 23:41:44.016754 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 12 23:41:44.016763 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 12 23:41:44.016771 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 12 23:41:44.016778 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 12 23:41:44.016786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 23:41:44.016794 systemd[1]: Reached target sockets.target - Socket Units.
May 12 23:41:44.016802 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 12 23:41:44.016809 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 12 23:41:44.016817 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 12 23:41:44.016825 systemd[1]: Starting systemd-fsck-usr.service...
May 12 23:41:44.016834 systemd[1]: Starting systemd-journald.service - Journal Service...
May 12 23:41:44.016842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 12 23:41:44.016850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 23:41:44.016857 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 12 23:41:44.016865 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 23:41:44.016873 systemd[1]: Finished systemd-fsck-usr.service.
May 12 23:41:44.016882 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 23:41:44.016906 systemd-journald[240]: Collecting audit messages is disabled.
May 12 23:41:44.016927 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 12 23:41:44.016936 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 12 23:41:44.016943 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 12 23:41:44.016953 systemd-journald[240]: Journal started
May 12 23:41:44.016977 systemd-journald[240]: Runtime Journal (/run/log/journal/9181780c82fb4c4791a556270d929576) is 5.9M, max 47.3M, 41.4M free.
May 12 23:41:43.999515 systemd-modules-load[241]: Inserted module 'overlay'
May 12 23:41:44.022535 systemd[1]: Started systemd-journald.service - Journal Service.
May 12 23:41:44.022575 kernel: Bridge firewalling registered
May 12 23:41:44.021778 systemd-modules-load[241]: Inserted module 'br_netfilter'
May 12 23:41:44.023352 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 12 23:41:44.025176 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 12 23:41:44.041521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 12 23:41:44.043409 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 12 23:41:44.045463 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 12 23:41:44.047481 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 23:41:44.052028 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 12 23:41:44.057884 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 12 23:41:44.063059 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 23:41:44.066095 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 23:41:44.070536 dracut-cmdline[271]: dracut-dracut-053
May 12 23:41:44.071951 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e3fb02dca379a9c7f05d94ae800dbbcafb80c81ea68c8486d0613b136c5c38d4
May 12 23:41:44.081507 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 12 23:41:44.108574 systemd-resolved[286]: Positive Trust Anchors:
May 12 23:41:44.108653 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 12 23:41:44.108684 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 12 23:41:44.115433 systemd-resolved[286]: Defaulting to hostname 'linux'.
May 12 23:41:44.116697 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 12 23:41:44.119117 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 12 23:41:44.155326 kernel: SCSI subsystem initialized
May 12 23:41:44.164310 kernel: Loading iSCSI transport class v2.0-870.
May 12 23:41:44.172350 kernel: iscsi: registered transport (tcp)
May 12 23:41:44.186512 kernel: iscsi: registered transport (qla4xxx)
May 12 23:41:44.186577 kernel: QLogic iSCSI HBA Driver
May 12 23:41:44.243293 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 12 23:41:44.259482 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 12 23:41:44.275305 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 12 23:41:44.275382 kernel: device-mapper: uevent: version 1.0.3
May 12 23:41:44.281293 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 12 23:41:44.333317 kernel: raid6: neonx8 gen() 15203 MB/s
May 12 23:41:44.352310 kernel: raid6: neonx4 gen() 14342 MB/s
May 12 23:41:44.370304 kernel: raid6: neonx2 gen() 8796 MB/s
May 12 23:41:44.387304 kernel: raid6: neonx1 gen() 10001 MB/s
May 12 23:41:44.404300 kernel: raid6: int64x8 gen() 6962 MB/s
May 12 23:41:44.421300 kernel: raid6: int64x4 gen() 7341 MB/s
May 12 23:41:44.438304 kernel: raid6: int64x2 gen() 6042 MB/s
May 12 23:41:44.455416 kernel: raid6: int64x1 gen() 5046 MB/s
May 12 23:41:44.455435 kernel: raid6: using algorithm neonx8 gen() 15203 MB/s
May 12 23:41:44.473390 kernel: raid6: .... xor() 11921 MB/s, rmw enabled
May 12 23:41:44.473407 kernel: raid6: using neon recovery algorithm
May 12 23:41:44.478304 kernel: xor: measuring software checksum speed
May 12 23:41:44.479523 kernel: 8regs : 17433 MB/sec
May 12 23:41:44.479535 kernel: 32regs : 19004 MB/sec
May 12 23:41:44.480704 kernel: arm64_neon : 26892 MB/sec
May 12 23:41:44.480722 kernel: xor: using function: arm64_neon (26892 MB/sec)
May 12 23:41:44.531314 kernel: Btrfs loaded, zoned=no, fsverity=no
May 12 23:41:44.542123 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 12 23:41:44.557469 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 23:41:44.569257 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 12 23:41:44.572403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 23:41:44.583505 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 12 23:41:44.597053 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
May 12 23:41:44.625088 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 23:41:44.637439 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 12 23:41:44.678635 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 23:41:44.690753 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 12 23:41:44.705212 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 12 23:41:44.707102 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 23:41:44.711009 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 23:41:44.713407 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 12 23:41:44.720476 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 12 23:41:44.731499 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 12 23:41:44.731677 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 12 23:41:44.741622 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 12 23:41:44.746516 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 12 23:41:44.746546 kernel: GPT:9289727 != 19775487
May 12 23:41:44.746556 kernel: GPT:Alternate GPT header not at the end of the disk.
May 12 23:41:44.746574 kernel: GPT:9289727 != 19775487
May 12 23:41:44.747424 kernel: GPT: Use GNU Parted to correct GPT errors.
May 12 23:41:44.747735 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 12 23:41:44.749152 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 23:41:44.747846 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 23:41:44.752834 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 12 23:41:44.754106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 12 23:41:44.761846 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (509)
May 12 23:41:44.754255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 12 23:41:44.767384 kernel: BTRFS: device fsid 8bc7e2dd-1c9f-4f38-9a4f-4a4a9806cb3a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (511)
May 12 23:41:44.760395 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 12 23:41:44.781614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 23:41:44.792602 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 12 23:41:44.794126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 23:41:44.800756 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 12 23:41:44.805388 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 12 23:41:44.814700 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 12 23:41:44.815920 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 12 23:41:44.826941 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 12 23:41:44.830473 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 12 23:41:44.836440 disk-uuid[549]: Primary Header is updated.
May 12 23:41:44.836440 disk-uuid[549]: Secondary Entries is updated.
May 12 23:41:44.836440 disk-uuid[549]: Secondary Header is updated.
May 12 23:41:44.845424 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 23:41:44.848440 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 23:41:45.865215 disk-uuid[550]: The operation has completed successfully.
May 12 23:41:45.866523 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 23:41:45.884708 systemd[1]: disk-uuid.service: Deactivated successfully.
May 12 23:41:45.884806 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 12 23:41:45.907438 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 12 23:41:45.910076 sh[569]: Success
May 12 23:41:45.930296 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 12 23:41:45.956947 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 12 23:41:45.970634 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 12 23:41:45.974496 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 12 23:41:45.984237 kernel: BTRFS info (device dm-0): first mount of filesystem 8bc7e2dd-1c9f-4f38-9a4f-4a4a9806cb3a
May 12 23:41:45.984283 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 12 23:41:45.984297 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 12 23:41:45.986357 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 12 23:41:45.986374 kernel: BTRFS info (device dm-0): using free space tree
May 12 23:41:45.991593 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 12 23:41:45.992752 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 12 23:41:46.008475 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 12 23:41:46.010156 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 12 23:41:46.019416 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 12 23:41:46.019464 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 23:41:46.019475 kernel: BTRFS info (device vda6): using free space tree
May 12 23:41:46.026325 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 23:41:46.035320 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 12 23:41:46.038399 kernel: BTRFS info (device vda6): last unmount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 12 23:41:46.046975 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 12 23:41:46.055461 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 12 23:41:46.114021 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 12 23:41:46.123473 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 12 23:41:46.148455 systemd-networkd[763]: lo: Link UP
May 12 23:41:46.148467 systemd-networkd[763]: lo: Gained carrier
May 12 23:41:46.149224 systemd-networkd[763]: Enumeration completed
May 12 23:41:46.149487 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 12 23:41:46.149721 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 23:41:46.149724 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 12 23:41:46.150783 systemd-networkd[763]: eth0: Link UP
May 12 23:41:46.150787 systemd-networkd[763]: eth0: Gained carrier
May 12 23:41:46.150793 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 23:41:46.151600 systemd[1]: Reached target network.target - Network.
May 12 23:41:46.163743 ignition[676]: Ignition 2.20.0
May 12 23:41:46.163753 ignition[676]: Stage: fetch-offline
May 12 23:41:46.163790 ignition[676]: no configs at "/usr/lib/ignition/base.d"
May 12 23:41:46.163798 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:41:46.163982 ignition[676]: parsed url from cmdline: ""
May 12 23:41:46.163986 ignition[676]: no config URL provided
May 12 23:41:46.163990 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
May 12 23:41:46.163998 ignition[676]: no config at "/usr/lib/ignition/user.ign"
May 12 23:41:46.164025 ignition[676]: op(1): [started] loading QEMU firmware config module
May 12 23:41:46.164029 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 12 23:41:46.171657 ignition[676]: op(1): [finished] loading QEMU firmware config module
May 12 23:41:46.173330 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 12 23:41:46.194627 ignition[676]: parsing config with SHA512: de0976def9f55c4dbc3009dbb931700a4f151bdaccdcdbc63fac33ce9a6f678fd256cc98a68bd24c14c3c60a2ffd8492887d3816d5004c3d6b972ff9639d76a3
May 12 23:41:46.201742 unknown[676]: fetched base config from "system"
May 12 23:41:46.201755 unknown[676]: fetched user config from "qemu"
May 12 23:41:46.202180 ignition[676]: fetch-offline: fetch-offline passed
May 12 23:41:46.204355 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 23:41:46.202265 ignition[676]: Ignition finished successfully
May 12 23:41:46.207876 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 12 23:41:46.211446 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 12 23:41:46.225085 ignition[775]: Ignition 2.20.0
May 12 23:41:46.225095 ignition[775]: Stage: kargs
May 12 23:41:46.225261 ignition[775]: no configs at "/usr/lib/ignition/base.d"
May 12 23:41:46.225271 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:41:46.228999 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 12 23:41:46.226186 ignition[775]: kargs: kargs passed
May 12 23:41:46.226236 ignition[775]: Ignition finished successfully
May 12 23:41:46.242464 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 12 23:41:46.251818 ignition[783]: Ignition 2.20.0
May 12 23:41:46.251829 ignition[783]: Stage: disks
May 12 23:41:46.252021 ignition[783]: no configs at "/usr/lib/ignition/base.d"
May 12 23:41:46.254787 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 12 23:41:46.252031 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:41:46.256975 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 12 23:41:46.253012 ignition[783]: disks: disks passed
May 12 23:41:46.258683 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 12 23:41:46.253058 ignition[783]: Ignition finished successfully
May 12 23:41:46.260818 systemd[1]: Reached target local-fs.target - Local File Systems.
May 12 23:41:46.262739 systemd[1]: Reached target sysinit.target - System Initialization.
May 12 23:41:46.264214 systemd[1]: Reached target basic.target - Basic System.
May 12 23:41:46.275446 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 12 23:41:46.294225 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 12 23:41:46.298376 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 12 23:41:46.310381 systemd[1]: Mounting sysroot.mount - /sysroot...
May 12 23:41:46.359291 kernel: EXT4-fs (vda9): mounted filesystem 267e1a87-2243-4e28-a518-ba9876b017ec r/w with ordered data mode. Quota mode: none.
May 12 23:41:46.359617 systemd[1]: Mounted sysroot.mount - /sysroot.
May 12 23:41:46.360873 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 12 23:41:46.374364 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 23:41:46.377403 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 12 23:41:46.378544 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 12 23:41:46.384376 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
May 12 23:41:46.384398 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 12 23:41:46.384416 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 23:41:46.378589 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 12 23:41:46.389092 kernel: BTRFS info (device vda6): using free space tree
May 12 23:41:46.389115 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 23:41:46.378613 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 23:41:46.384161 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 12 23:41:46.391566 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 23:41:46.394092 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 12 23:41:46.439108 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
May 12 23:41:46.443646 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
May 12 23:41:46.447959 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
May 12 23:41:46.452159 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
May 12 23:41:46.518112 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 12 23:41:46.529433 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 12 23:41:46.532224 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 12 23:41:46.538292 kernel: BTRFS info (device vda6): last unmount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 12 23:41:46.553252 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 12 23:41:46.555702 ignition[916]: INFO : Ignition 2.20.0
May 12 23:41:46.555702 ignition[916]: INFO : Stage: mount
May 12 23:41:46.557937 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 23:41:46.557937 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:41:46.557937 ignition[916]: INFO : mount: mount passed
May 12 23:41:46.557937 ignition[916]: INFO : Ignition finished successfully
May 12 23:41:46.558596 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 12 23:41:46.565361 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 12 23:41:46.982992 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 12 23:41:46.997482 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 23:41:47.004326 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
May 12 23:41:47.004368 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 12 23:41:47.004379 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 23:41:47.006289 kernel: BTRFS info (device vda6): using free space tree
May 12 23:41:47.008300 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 23:41:47.009441 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 23:41:47.040776 ignition[946]: INFO : Ignition 2.20.0
May 12 23:41:47.040776 ignition[946]: INFO : Stage: files
May 12 23:41:47.042405 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 23:41:47.042405 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 23:41:47.042405 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
May 12 23:41:47.045956 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 12 23:41:47.045956 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 12 23:41:47.049301 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 12 23:41:47.050655 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 12 23:41:47.050655 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 12 23:41:47.049889 unknown[946]: wrote ssh authorized keys file for user: core
May 12 23:41:47.054439 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 12 23:41:47.054439 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 12 23:41:47.710662 systemd-networkd[763]: eth0: Gained IPv6LL
May 12 23:41:48.002544 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 12 23:41:52.062902 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 12 23:41:52.062902 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 12 23:41:52.067231 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 12 23:41:52.435258 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 12 23:41:53.321668 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 12 23:41:53.321668 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 12 23:41:53.325823 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 23:41:53.325823 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 23:41:53.325823 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 12 23:41:53.325823 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 12 23:41:53.325823 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 23:41:53.325823 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 23:41:53.325823 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 12 23:41:53.325823 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 12 23:41:53.365857 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 12 23:41:53.370936 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 12 23:41:53.372600 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 12 23:41:53.372600 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 12 23:41:53.372600 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 12 23:41:53.372600 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 12 23:41:53.372600 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 12 23:41:53.372600 ignition[946]: INFO : files: files passed
May 12 23:41:53.372600 ignition[946]: INFO : Ignition finished successfully
May 12 23:41:53.374084 systemd[1]: Finished ignition-files.service - Ignition (files).
May 12 23:41:53.389510 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 12 23:41:53.392370 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 12 23:41:53.394098 systemd[1]: ignition-quench.service: Deactivated successfully.
May 12 23:41:53.394191 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 12 23:41:53.401761 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
May 12 23:41:53.405549 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 23:41:53.405549 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 12 23:41:53.409263 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 23:41:53.411125 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 23:41:53.414524 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 12 23:41:53.423446 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 12 23:41:53.460649 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 12 23:41:53.460802 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 12 23:41:53.463615 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 12 23:41:53.465346 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 12 23:41:53.467466 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 12 23:41:53.477477 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 12 23:41:53.498670 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 23:41:53.511495 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 12 23:41:53.520673 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 12 23:41:53.522019 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 23:41:53.524155 systemd[1]: Stopped target timers.target - Timer Units.
May 12 23:41:53.526037 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 12 23:41:53.526177 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 23:41:53.528942 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 12 23:41:53.531220 systemd[1]: Stopped target basic.target - Basic System.
May 12 23:41:53.533020 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 12 23:41:53.534821 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 23:41:53.536927 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 12 23:41:53.539010 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 12 23:41:53.541048 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 23:41:53.543259 systemd[1]: Stopped target sysinit.target - System Initialization.
May 12 23:41:53.545497 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 12 23:41:53.547503 systemd[1]: Stopped target swap.target - Swaps.
May 12 23:41:53.549176 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 12 23:41:53.549323 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 12 23:41:53.551956 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 12 23:41:53.554150 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 12 23:41:53.556419 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 12 23:41:53.557520 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 12 23:41:53.559736 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 12 23:41:53.559873 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 12 23:41:53.566861 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 12 23:41:53.567007 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 12 23:41:53.572992 systemd[1]: Stopped target paths.target - Path Units. May 12 23:41:53.574114 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 12 23:41:53.577584 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 12 23:41:53.579107 systemd[1]: Stopped target slices.target - Slice Units. May 12 23:41:53.581596 systemd[1]: Stopped target sockets.target - Socket Units. May 12 23:41:53.583257 systemd[1]: iscsid.socket: Deactivated successfully. May 12 23:41:53.583414 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 12 23:41:53.585071 systemd[1]: iscsiuio.socket: Deactivated successfully. May 12 23:41:53.585198 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 12 23:41:53.586823 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 12 23:41:53.586998 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 12 23:41:53.588869 systemd[1]: ignition-files.service: Deactivated successfully. May 12 23:41:53.589028 systemd[1]: Stopped ignition-files.service - Ignition (files). May 12 23:41:53.599502 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 12 23:41:53.601233 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 12 23:41:53.602129 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 12 23:41:53.602328 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 12 23:41:53.604513 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 12 23:41:53.604667 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 12 23:41:53.611954 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 12 23:41:53.612052 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 12 23:41:53.617752 ignition[1001]: INFO : Ignition 2.20.0 May 12 23:41:53.619711 ignition[1001]: INFO : Stage: umount May 12 23:41:53.619711 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" May 12 23:41:53.619711 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 12 23:41:53.618990 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 12 23:41:53.625343 ignition[1001]: INFO : umount: umount passed May 12 23:41:53.625343 ignition[1001]: INFO : Ignition finished successfully May 12 23:41:53.622619 systemd[1]: ignition-mount.service: Deactivated successfully. May 12 23:41:53.622708 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 12 23:41:53.624529 systemd[1]: sysroot-boot.service: Deactivated successfully. May 12 23:41:53.624600 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
May 12 23:41:53.627234 systemd[1]: Stopped target network.target - Network. May 12 23:41:53.628762 systemd[1]: ignition-disks.service: Deactivated successfully. May 12 23:41:53.628826 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 12 23:41:53.630502 systemd[1]: ignition-kargs.service: Deactivated successfully. May 12 23:41:53.630551 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 12 23:41:53.632639 systemd[1]: ignition-setup.service: Deactivated successfully. May 12 23:41:53.632683 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 12 23:41:53.636165 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 12 23:41:53.636222 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 12 23:41:53.638159 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 12 23:41:53.638203 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 12 23:41:53.639739 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 12 23:41:53.641569 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 12 23:41:53.648613 systemd[1]: systemd-resolved.service: Deactivated successfully. May 12 23:41:53.650343 systemd-networkd[763]: eth0: DHCPv6 lease lost May 12 23:41:53.650358 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 12 23:41:53.652554 systemd[1]: systemd-networkd.service: Deactivated successfully. May 12 23:41:53.652674 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 12 23:41:53.655740 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 12 23:41:53.655793 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 12 23:41:53.671413 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 12 23:41:53.672391 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 12 23:41:53.672461 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 12 23:41:53.674569 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 12 23:41:53.674618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 12 23:41:53.676711 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 12 23:41:53.676760 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 12 23:41:53.679346 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 12 23:41:53.679393 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 12 23:41:53.681711 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 23:41:53.691204 systemd[1]: network-cleanup.service: Deactivated successfully. May 12 23:41:53.691450 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 12 23:41:53.697370 systemd[1]: systemd-udevd.service: Deactivated successfully. May 12 23:41:53.697574 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 23:41:53.700081 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 12 23:41:53.700132 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 12 23:41:53.701907 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 12 23:41:53.701949 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
May 12 23:41:53.704042 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 12 23:41:53.704093 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 12 23:41:53.706924 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 12 23:41:53.706971 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 12 23:41:53.709764 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 12 23:41:53.709806 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 23:41:53.721475 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 12 23:41:53.722625 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 12 23:41:53.722693 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 23:41:53.725066 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 12 23:41:53.725119 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 12 23:41:53.727221 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 12 23:41:53.727266 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 12 23:41:53.729688 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 12 23:41:53.729742 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 12 23:41:53.732305 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 12 23:41:53.732398 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 12 23:41:53.735112 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 12 23:41:53.737179 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 12 23:41:53.749081 systemd[1]: Switching root. May 12 23:41:53.778284 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). May 12 23:41:53.778344 systemd-journald[240]: Journal stopped May 12 23:41:54.588524 kernel: SELinux: policy capability network_peer_controls=1 May 12 23:41:54.588574 kernel: SELinux: policy capability open_perms=1 May 12 23:41:54.588585 kernel: SELinux: policy capability extended_socket_class=1 May 12 23:41:54.588595 kernel: SELinux: policy capability always_check_network=0 May 12 23:41:54.588606 kernel: SELinux: policy capability cgroup_seclabel=1 May 12 23:41:54.588618 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 12 23:41:54.588628 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 12 23:41:54.588637 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 12 23:41:54.588647 kernel: audit: type=1403 audit(1747093313.997:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 12 23:41:54.588661 systemd[1]: Successfully loaded SELinux policy in 37.284ms. May 12 23:41:54.588680 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.884ms. May 12 23:41:54.588691 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 12 23:41:54.588702 systemd[1]: Detected virtualization kvm. May 12 23:41:54.588713 systemd[1]: Detected architecture arm64. 
May 12 23:41:54.588724 systemd[1]: Detected first boot. May 12 23:41:54.588734 systemd[1]: Initializing machine ID from VM UUID. May 12 23:41:54.588746 zram_generator::config[1045]: No configuration found. May 12 23:41:54.588760 systemd[1]: Populated /etc with preset unit settings. May 12 23:41:54.588771 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 12 23:41:54.588781 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 12 23:41:54.588791 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 12 23:41:54.588802 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 12 23:41:54.588814 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 12 23:41:54.588825 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 12 23:41:54.588835 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 12 23:41:54.588846 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 12 23:41:54.588857 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 12 23:41:54.588867 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 12 23:41:54.588877 systemd[1]: Created slice user.slice - User and Session Slice. May 12 23:41:54.588887 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 12 23:41:54.588898 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 12 23:41:54.588910 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 12 23:41:54.588931 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 12 23:41:54.588942 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 12 23:41:54.588953 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 12 23:41:54.588963 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 12 23:41:54.588974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 12 23:41:54.588984 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 12 23:41:54.588994 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 12 23:41:54.589005 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 12 23:41:54.589017 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 12 23:41:54.589027 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 12 23:41:54.589038 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 12 23:41:54.589048 systemd[1]: Reached target slices.target - Slice Units. May 12 23:41:54.589058 systemd[1]: Reached target swap.target - Swaps. May 12 23:41:54.589068 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 12 23:41:54.589078 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 12 23:41:54.589089 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 12 23:41:54.589102 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
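systemd notes above that it is initializing the machine ID from the VM UUID on this first boot. A rough sketch of that derivation, assuming the UUID is exposed through the usual DMI path on a QEMU/KVM guest; the path and normalization here are illustrative, not systemd's exact implementation.

# Sketch: derive a machine-id-style value from the SMBIOS product UUID, which is
# roughly what "Initializing machine ID from VM UUID" refers to on KVM guests.
# Reading product_uuid normally requires root.
from pathlib import Path

def machine_id_from_vm_uuid(dmi_path="/sys/class/dmi/id/product_uuid"):
    uuid = Path(dmi_path).read_text().strip()
    # /etc/machine-id holds 32 lowercase hex characters with no dashes.
    return uuid.replace("-", "").lower()

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())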
May 12 23:41:54.589112 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 12 23:41:54.589122 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 12 23:41:54.589136 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 12 23:41:54.589147 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 12 23:41:54.589157 systemd[1]: Mounting media.mount - External Media Directory... May 12 23:41:54.589167 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 12 23:41:54.589177 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 12 23:41:54.589188 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 12 23:41:54.589200 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 12 23:41:54.589210 systemd[1]: Reached target machines.target - Containers. May 12 23:41:54.589221 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 12 23:41:54.589231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 12 23:41:54.589242 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 12 23:41:54.589252 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 12 23:41:54.589262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 23:41:54.589272 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 12 23:41:54.589298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 23:41:54.589309 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 12 23:41:54.589319 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 12 23:41:54.589330 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 12 23:41:54.589340 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 12 23:41:54.589351 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 12 23:41:54.589362 kernel: fuse: init (API version 7.39) May 12 23:41:54.589372 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 12 23:41:54.589382 systemd[1]: Stopped systemd-fsck-usr.service. May 12 23:41:54.589393 kernel: loop: module loaded May 12 23:41:54.589403 systemd[1]: Starting systemd-journald.service - Journal Service... May 12 23:41:54.589414 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 12 23:41:54.589424 kernel: ACPI: bus type drm_connector registered May 12 23:41:54.589434 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 12 23:41:54.589444 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 12 23:41:54.589455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 12 23:41:54.589465 systemd[1]: verity-setup.service: Deactivated successfully. May 12 23:41:54.589475 systemd[1]: Stopped verity-setup.service. May 12 23:41:54.589487 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
May 12 23:41:54.589498 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 12 23:41:54.589508 systemd[1]: Mounted media.mount - External Media Directory. May 12 23:41:54.589535 systemd-journald[1116]: Collecting audit messages is disabled. May 12 23:41:54.589566 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 12 23:41:54.589577 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 12 23:41:54.589590 systemd-journald[1116]: Journal started May 12 23:41:54.589612 systemd-journald[1116]: Runtime Journal (/run/log/journal/9181780c82fb4c4791a556270d929576) is 5.9M, max 47.3M, 41.4M free. May 12 23:41:54.364726 systemd[1]: Queued start job for default target multi-user.target. May 12 23:41:54.383714 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 12 23:41:54.384093 systemd[1]: systemd-journald.service: Deactivated successfully. May 12 23:41:54.592519 systemd[1]: Started systemd-journald.service - Journal Service. May 12 23:41:54.593122 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 12 23:41:54.594437 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 12 23:41:54.595874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 12 23:41:54.597440 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 12 23:41:54.597573 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 12 23:41:54.599007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 23:41:54.599149 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 23:41:54.600576 systemd[1]: modprobe@drm.service: Deactivated successfully. May 12 23:41:54.600716 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 12 23:41:54.602032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 23:41:54.602162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 23:41:54.603667 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 12 23:41:54.603808 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 12 23:41:54.605419 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 23:41:54.605551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 23:41:54.606932 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 12 23:41:54.608358 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 12 23:41:54.610036 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 12 23:41:54.624044 systemd[1]: Reached target network-pre.target - Preparation for Network. May 12 23:41:54.639418 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 12 23:41:54.641810 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 12 23:41:54.643055 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 12 23:41:54.643099 systemd[1]: Reached target local-fs.target - Local File Systems. May 12 23:41:54.645393 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 12 23:41:54.647831 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
May 12 23:41:54.650057 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 12 23:41:54.651383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 12 23:41:54.653038 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 12 23:41:54.655139 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 12 23:41:54.656540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 12 23:41:54.660482 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 12 23:41:54.661755 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 12 23:41:54.664601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 12 23:41:54.665262 systemd-journald[1116]: Time spent on flushing to /var/log/journal/9181780c82fb4c4791a556270d929576 is 22.716ms for 856 entries. May 12 23:41:54.665262 systemd-journald[1116]: System Journal (/var/log/journal/9181780c82fb4c4791a556270d929576) is 8.0M, max 195.6M, 187.6M free. May 12 23:41:54.711134 systemd-journald[1116]: Received client request to flush runtime journal. May 12 23:41:54.711171 kernel: loop0: detected capacity change from 0 to 189592 May 12 23:41:54.668113 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 12 23:41:54.672408 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 12 23:41:54.677769 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 12 23:41:54.681638 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 12 23:41:54.683000 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 12 23:41:54.684528 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 12 23:41:54.686071 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 12 23:41:54.690203 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 12 23:41:54.704619 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 12 23:41:54.708451 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 12 23:41:54.717694 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 12 23:41:54.721345 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 12 23:41:54.720675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 12 23:41:54.724561 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. May 12 23:41:54.724581 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. May 12 23:41:54.728664 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 12 23:41:54.738741 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 12 23:41:54.740493 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 12 23:41:54.745845 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
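The journald lines above show the 5.9M runtime journal under /run/log/journal being flushed to persistent storage in /var/log/journal. A small sketch for checking journal disk usage after boot, calling journalctl (present on Flatcar) through Python's subprocess module.

# Sketch: report journal disk usage and confirm the persistent journal directory
# referenced in the log exists. journalctl --disk-usage is a standard flag.
import subprocess
from pathlib import Path

def journal_status():
    usage = subprocess.run(["journalctl", "--disk-usage"],
                           capture_output=True, text=True, check=True)
    return usage.stdout.strip(), Path("/var/log/journal").is_dir()

if __name__ == "__main__":
    usage, has_persistent = journal_status()
    print(usage)
    print("persistent journal directory present:", has_persistent)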
May 12 23:41:54.746601 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 12 23:41:54.762424 kernel: loop1: detected capacity change from 0 to 113536 May 12 23:41:54.772022 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 12 23:41:54.781504 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 12 23:41:54.796060 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. May 12 23:41:54.796077 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. May 12 23:41:54.800073 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 23:41:54.809311 kernel: loop2: detected capacity change from 0 to 116808 May 12 23:41:54.842310 kernel: loop3: detected capacity change from 0 to 189592 May 12 23:41:54.852313 kernel: loop4: detected capacity change from 0 to 113536 May 12 23:41:54.860388 kernel: loop5: detected capacity change from 0 to 116808 May 12 23:41:54.864909 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 12 23:41:54.865655 (sd-merge)[1183]: Merged extensions into '/usr'. May 12 23:41:54.869447 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... May 12 23:41:54.869465 systemd[1]: Reloading... May 12 23:41:54.924425 zram_generator::config[1208]: No configuration found. May 12 23:41:55.022908 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:41:55.039414 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 12 23:41:55.060722 systemd[1]: Reloading finished in 190 ms. May 12 23:41:55.089185 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 12 23:41:55.090839 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 12 23:41:55.103472 systemd[1]: Starting ensure-sysext.service... May 12 23:41:55.108552 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 12 23:41:55.115428 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... May 12 23:41:55.115449 systemd[1]: Reloading... May 12 23:41:55.131044 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 12 23:41:55.131585 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 12 23:41:55.132250 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 12 23:41:55.132478 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. May 12 23:41:55.132519 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. May 12 23:41:55.135073 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. May 12 23:41:55.135193 systemd-tmpfiles[1244]: Skipping /boot May 12 23:41:55.142122 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. May 12 23:41:55.142228 systemd-tmpfiles[1244]: Skipping /boot May 12 23:41:55.174811 zram_generator::config[1270]: No configuration found. 
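The (sd-merge) lines above record systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr. As a sketch of what such an extension looks like on disk, the snippet below stages a directory-based sysext; the extension name, payload and release fields are illustrative assumptions, not read from this system.

# Sketch: create a directory-based system extension that systemd-sysext can merge
# into /usr. The extension-release file must be named after the extension and
# declare compatibility; ID=_any keeps this example OS-agnostic.
import os
from pathlib import Path

def make_sysext(name="example", root=Path("/run/extensions")):
    ext = root / name
    rel_dir = ext / "usr/lib/extension-release.d"
    rel_dir.mkdir(parents=True, exist_ok=True)
    (rel_dir / f"extension-release.{name}").write_text("ID=_any\n")
    # Payload lives under usr/ and appears beneath /usr after a merge.
    tool = ext / "usr/bin/example-tool"
    tool.parent.mkdir(parents=True, exist_ok=True)
    tool.write_text("#!/bin/sh\necho hello from sysext\n")
    os.chmod(tool, 0o755)
    return ext

if __name__ == "__main__":
    print("extension staged at", make_sysext())

After staging, `systemd-sysext merge` (or `refresh`) applies the overlay, which is the operation the "Merged extensions into '/usr'" message reports.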
May 12 23:41:55.254492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:41:55.289654 systemd[1]: Reloading finished in 173 ms. May 12 23:41:55.301985 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 12 23:41:55.310724 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 12 23:41:55.317734 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 12 23:41:55.320243 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 12 23:41:55.325426 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 12 23:41:55.338106 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 12 23:41:55.343080 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 23:41:55.345427 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 12 23:41:55.353099 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 12 23:41:55.358973 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 12 23:41:55.362418 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 23:41:55.365532 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 23:41:55.368941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 12 23:41:55.370999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 12 23:41:55.373340 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 12 23:41:55.376189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 23:41:55.376390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 23:41:55.377323 systemd-udevd[1317]: Using default interface naming scheme 'v255'. May 12 23:41:55.394038 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 12 23:41:55.395984 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 23:41:55.396108 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 23:41:55.397846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 23:41:55.397985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 23:41:55.402244 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 12 23:41:55.404937 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 12 23:41:55.406524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 23:41:55.413157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 12 23:41:55.419682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 23:41:55.421602 augenrules[1363]: No rules May 12 23:41:55.426586 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 23:41:55.430070 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 12 23:41:55.433002 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 12 23:41:55.434892 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 12 23:41:55.444547 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 12 23:41:55.445751 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 12 23:41:55.446828 systemd[1]: audit-rules.service: Deactivated successfully. May 12 23:41:55.447000 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 12 23:41:55.448618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 23:41:55.448743 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 23:41:55.450836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 23:41:55.451002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 23:41:55.452843 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 23:41:55.452992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 23:41:55.469240 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 12 23:41:55.474919 systemd[1]: Finished ensure-sysext.service. May 12 23:41:55.481628 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 12 23:41:55.484391 systemd-resolved[1316]: Positive Trust Anchors: May 12 23:41:55.507740 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1353) May 12 23:41:55.484853 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 12 23:41:55.484888 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 12 23:41:55.505090 systemd-resolved[1316]: Defaulting to hostname 'linux'. May 12 23:41:55.507676 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 12 23:41:55.508883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 12 23:41:55.510077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 23:41:55.512623 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 12 23:41:55.514901 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 23:41:55.518572 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 12 23:41:55.522508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 12 23:41:55.526497 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
May 12 23:41:55.528189 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 12 23:41:55.528630 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 12 23:41:55.531050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 23:41:55.532035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 23:41:55.534636 systemd[1]: modprobe@drm.service: Deactivated successfully. May 12 23:41:55.534774 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 12 23:41:55.536159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 23:41:55.536410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 23:41:55.538765 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 23:41:55.538931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 23:41:55.545194 augenrules[1385]: /sbin/augenrules: No change May 12 23:41:55.555654 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 12 23:41:55.557194 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 12 23:41:55.565856 augenrules[1416]: No rules May 12 23:41:55.568465 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 12 23:41:55.570564 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 12 23:41:55.570646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 12 23:41:55.571003 systemd[1]: audit-rules.service: Deactivated successfully. May 12 23:41:55.571238 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 12 23:41:55.575474 systemd-networkd[1374]: lo: Link UP May 12 23:41:55.575481 systemd-networkd[1374]: lo: Gained carrier May 12 23:41:55.577009 systemd-networkd[1374]: Enumeration completed May 12 23:41:55.577119 systemd[1]: Started systemd-networkd.service - Network Configuration. May 12 23:41:55.578721 systemd[1]: Reached target network.target - Network. May 12 23:41:55.582404 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 12 23:41:55.582415 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 12 23:41:55.583161 systemd-networkd[1374]: eth0: Link UP May 12 23:41:55.583169 systemd-networkd[1374]: eth0: Gained carrier May 12 23:41:55.583182 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 12 23:41:55.583527 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 12 23:41:55.608339 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 12 23:41:55.612528 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 12 23:41:55.629477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 23:41:55.630111 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
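Above, eth0 is matched by the stock zz-default.network and acquires 10.0.0.28/16 plus a gateway over DHCP. A sketch of an equivalent explicit systemd-networkd unit, written from Python; the interface match and file name are assumptions for this QEMU guest rather than files present on it.

# Sketch: an explicit .network unit equivalent to the DHCP behaviour logged for
# eth0. Files in /etc/systemd/network/ take precedence over the packaged
# zz-default.network for matching interfaces.
from pathlib import Path

NETWORK_UNIT = """\
[Match]
Name=eth0

[Network]
DHCP=yes
"""

def install_network_unit(path="/etc/systemd/network/10-eth0.network"):
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(NETWORK_UNIT)
    return p

if __name__ == "__main__":
    print("wrote", install_network_unit())

Restarting systemd-networkd (or rebooting) would make the new unit take effect.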
May 12 23:41:55.630162 systemd-timesyncd[1399]: Initial clock synchronization to Mon 2025-05-12 23:41:55.391653 UTC. May 12 23:41:55.630947 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 12 23:41:55.632657 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 12 23:41:55.635198 systemd[1]: Reached target time-set.target - System Time Set. May 12 23:41:55.637672 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 12 23:41:55.653542 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 12 23:41:55.688376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 23:41:55.691331 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 12 23:41:55.693405 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 12 23:41:55.694656 systemd[1]: Reached target sysinit.target - System Initialization. May 12 23:41:55.695955 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 12 23:41:55.697377 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 12 23:41:55.701586 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 12 23:41:55.702874 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 12 23:41:55.704218 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 12 23:41:55.705553 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 12 23:41:55.705609 systemd[1]: Reached target paths.target - Path Units. May 12 23:41:55.706693 systemd[1]: Reached target timers.target - Timer Units. May 12 23:41:55.709051 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 12 23:41:55.711612 systemd[1]: Starting docker.socket - Docker Socket for the API... May 12 23:41:55.723355 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 12 23:41:55.725779 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 12 23:41:55.727454 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 12 23:41:55.728764 systemd[1]: Reached target sockets.target - Socket Units. May 12 23:41:55.729822 systemd[1]: Reached target basic.target - Basic System. May 12 23:41:55.730857 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 12 23:41:55.730893 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 12 23:41:55.731936 systemd[1]: Starting containerd.service - containerd container runtime... May 12 23:41:55.734024 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 12 23:41:55.734106 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 12 23:41:55.738460 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 12 23:41:55.740740 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 12 23:41:55.742115 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
May 12 23:41:55.745484 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 12 23:41:55.746536 jq[1442]: false May 12 23:41:55.764447 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 12 23:41:55.766635 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 12 23:41:55.770553 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 12 23:41:55.771196 extend-filesystems[1443]: Found loop3 May 12 23:41:55.773964 extend-filesystems[1443]: Found loop4 May 12 23:41:55.773964 extend-filesystems[1443]: Found loop5 May 12 23:41:55.773964 extend-filesystems[1443]: Found vda May 12 23:41:55.773964 extend-filesystems[1443]: Found vda1 May 12 23:41:55.773964 extend-filesystems[1443]: Found vda2 May 12 23:41:55.773964 extend-filesystems[1443]: Found vda3 May 12 23:41:55.773964 extend-filesystems[1443]: Found usr May 12 23:41:55.773964 extend-filesystems[1443]: Found vda4 May 12 23:41:55.773964 extend-filesystems[1443]: Found vda6 May 12 23:41:55.773964 extend-filesystems[1443]: Found vda7 May 12 23:41:55.773964 extend-filesystems[1443]: Found vda9 May 12 23:41:55.773964 extend-filesystems[1443]: Checking size of /dev/vda9 May 12 23:41:55.773534 dbus-daemon[1441]: [system] SELinux support is enabled May 12 23:41:55.792527 extend-filesystems[1443]: Resized partition /dev/vda9 May 12 23:41:55.774559 systemd[1]: Starting systemd-logind.service - User Login Management... May 12 23:41:55.795173 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) May 12 23:41:55.803496 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1353) May 12 23:41:55.803528 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 12 23:41:55.779110 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 12 23:41:55.779626 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 12 23:41:55.781482 systemd[1]: Starting update-engine.service - Update Engine... May 12 23:41:55.784933 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 12 23:41:55.786612 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 12 23:41:55.793501 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 12 23:41:55.797690 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 12 23:41:55.797839 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 12 23:41:55.798106 systemd[1]: motdgen.service: Deactivated successfully. May 12 23:41:55.798236 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 12 23:41:55.805528 jq[1462]: true May 12 23:41:55.815676 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 12 23:41:55.815882 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
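The kernel line above shows the root ext4 filesystem on /dev/vda9 being grown online from 553472 to 1864699 4k blocks, which is Flatcar's extend-filesystems step. A conceptual sketch of the same operation done by hand; the device node is taken from the log, and the call assumes the underlying partition has already been enlarged.

# Sketch: manual equivalent of the extend-filesystems step -- grow a mounted ext4
# filesystem to fill its (already enlarged) partition. resize2fs supports online
# growth of mounted ext4 filesystems; run as root.
import subprocess

def grow_ext4(device="/dev/vda9"):
    # With no explicit size argument, resize2fs expands to the partition size.
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    grow_ext4()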
May 12 23:41:55.826304 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 12 23:41:55.834919 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 12 23:41:55.844454 update_engine[1458]: I20250512 23:41:55.838153 1458 main.cc:92] Flatcar Update Engine starting May 12 23:41:55.840164 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 12 23:41:55.840192 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 12 23:41:55.841817 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 12 23:41:55.841833 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 12 23:41:55.844799 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 12 23:41:55.844799 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 May 12 23:41:55.844799 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 12 23:41:55.854419 extend-filesystems[1443]: Resized filesystem in /dev/vda9 May 12 23:41:55.856367 jq[1468]: true May 12 23:41:55.846250 systemd[1]: extend-filesystems.service: Deactivated successfully. May 12 23:41:55.860802 update_engine[1458]: I20250512 23:41:55.849495 1458 update_check_scheduler.cc:74] Next update check in 11m59s May 12 23:41:55.846850 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 12 23:41:55.862491 tar[1467]: linux-arm64/helm May 12 23:41:55.858319 systemd[1]: Started update-engine.service - Update Engine. May 12 23:41:55.862940 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 12 23:41:55.870920 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (Power Button) May 12 23:41:55.871464 systemd-logind[1455]: New seat seat0. May 12 23:41:55.872353 systemd[1]: Started systemd-logind.service - User Login Management. May 12 23:41:55.955733 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 12 23:41:55.960948 bash[1497]: Updated "/home/core/.ssh/authorized_keys" May 12 23:41:55.963332 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 12 23:41:55.965828 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 12 23:41:56.060850 containerd[1477]: time="2025-05-12T23:41:56.060719565Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 12 23:41:56.088237 containerd[1477]: time="2025-05-12T23:41:56.088140460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 12 23:41:56.089704 containerd[1477]: time="2025-05-12T23:41:56.089636863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 12 23:41:56.089704 containerd[1477]: time="2025-05-12T23:41:56.089694997Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 12 23:41:56.089814 containerd[1477]: time="2025-05-12T23:41:56.089714052Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 12 23:41:56.089984 containerd[1477]: time="2025-05-12T23:41:56.089949734Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 12 23:41:56.089984 containerd[1477]: time="2025-05-12T23:41:56.089976667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 12 23:41:56.090105 containerd[1477]: time="2025-05-12T23:41:56.090088085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 12 23:41:56.090132 containerd[1477]: time="2025-05-12T23:41:56.090106674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 12 23:41:56.090467 containerd[1477]: time="2025-05-12T23:41:56.090422921Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 12 23:41:56.090467 containerd[1477]: time="2025-05-12T23:41:56.090457926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 12 23:41:56.090515 containerd[1477]: time="2025-05-12T23:41:56.090475041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 12 23:41:56.090515 containerd[1477]: time="2025-05-12T23:41:56.090485247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 12 23:41:56.090650 containerd[1477]: time="2025-05-12T23:41:56.090574428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 12 23:41:56.091005 containerd[1477]: time="2025-05-12T23:41:56.090923818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 12 23:41:56.091126 containerd[1477]: time="2025-05-12T23:41:56.091102141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 12 23:41:56.091195 containerd[1477]: time="2025-05-12T23:41:56.091179175Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 12 23:41:56.091310 containerd[1477]: time="2025-05-12T23:41:56.091296725Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 12 23:41:56.091419 containerd[1477]: time="2025-05-12T23:41:56.091352182Z" level=info msg="metadata content store policy set" policy=shared May 12 23:41:56.096160 containerd[1477]: time="2025-05-12T23:41:56.096120227Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 12 23:41:56.096211 containerd[1477]: time="2025-05-12T23:41:56.096175994Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 12 23:41:56.096211 containerd[1477]: time="2025-05-12T23:41:56.096192216Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 12 23:41:56.096250 containerd[1477]: time="2025-05-12T23:41:56.096210339Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 12 23:41:56.096250 containerd[1477]: time="2025-05-12T23:41:56.096225591Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 12 23:41:56.096417 containerd[1477]: time="2025-05-12T23:41:56.096387537Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 12 23:41:56.096637 containerd[1477]: time="2025-05-12T23:41:56.096611538Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 12 23:41:56.096736 containerd[1477]: time="2025-05-12T23:41:56.096711740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 12 23:41:56.096764 containerd[1477]: time="2025-05-12T23:41:56.096738828Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 12 23:41:56.096764 containerd[1477]: time="2025-05-12T23:41:56.096753537Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 12 23:41:56.096797 containerd[1477]: time="2025-05-12T23:41:56.096766848Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 12 23:41:56.096797 containerd[1477]: time="2025-05-12T23:41:56.096778878Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 12 23:41:56.096797 containerd[1477]: time="2025-05-12T23:41:56.096789667Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 12 23:41:56.096844 containerd[1477]: time="2025-05-12T23:41:56.096802978Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 12 23:41:56.096844 containerd[1477]: time="2025-05-12T23:41:56.096816445Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 12 23:41:56.096844 containerd[1477]: time="2025-05-12T23:41:56.096828242Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 12 23:41:56.096844 containerd[1477]: time="2025-05-12T23:41:56.096839225Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 12 23:41:56.096910 containerd[1477]: time="2025-05-12T23:41:56.096851139Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 12 23:41:56.096910 containerd[1477]: time="2025-05-12T23:41:56.096871979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 12 23:41:56.096910 containerd[1477]: time="2025-05-12T23:41:56.096884514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 12 23:41:56.096910 containerd[1477]: time="2025-05-12T23:41:56.096895807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 12 23:41:56.096910 containerd[1477]: time="2025-05-12T23:41:56.096907838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097004 containerd[1477]: time="2025-05-12T23:41:56.096918665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097004 containerd[1477]: time="2025-05-12T23:41:56.096931627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097004 containerd[1477]: time="2025-05-12T23:41:56.096952040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097004 containerd[1477]: time="2025-05-12T23:41:56.096965856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097004 containerd[1477]: time="2025-05-12T23:41:56.096977809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097004 containerd[1477]: time="2025-05-12T23:41:56.096990577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097004 containerd[1477]: time="2025-05-12T23:41:56.097002413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097121 containerd[1477]: time="2025-05-12T23:41:56.097014599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097121 containerd[1477]: time="2025-05-12T23:41:56.097026552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097121 containerd[1477]: time="2025-05-12T23:41:56.097039203Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 12 23:41:56.097121 containerd[1477]: time="2025-05-12T23:41:56.097064312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097121 containerd[1477]: time="2025-05-12T23:41:56.097076886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097121 containerd[1477]: time="2025-05-12T23:41:56.097087015Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 12 23:41:56.097269 containerd[1477]: time="2025-05-12T23:41:56.097255132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 12 23:41:56.097305 containerd[1477]: time="2025-05-12T23:41:56.097292543Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 12 23:41:56.097333 containerd[1477]: time="2025-05-12T23:41:56.097306087Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 12 23:41:56.097333 containerd[1477]: time="2025-05-12T23:41:56.097318079Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 12 23:41:56.097333 containerd[1477]: time="2025-05-12T23:41:56.097328324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097382 containerd[1477]: time="2025-05-12T23:41:56.097340083Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 12 23:41:56.097382 containerd[1477]: time="2025-05-12T23:41:56.097349746Z" level=info msg="NRI interface is disabled by configuration." May 12 23:41:56.097382 containerd[1477]: time="2025-05-12T23:41:56.097361699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 12 23:41:56.097745 containerd[1477]: time="2025-05-12T23:41:56.097690714Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 12 23:41:56.097745 containerd[1477]: time="2025-05-12T23:41:56.097739069Z" level=info msg="Connect containerd service" May 12 23:41:56.097868 containerd[1477]: time="2025-05-12T23:41:56.097772289Z" level=info msg="using legacy CRI server" May 12 23:41:56.097868 containerd[1477]: time="2025-05-12T23:41:56.097779042Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 12 23:41:56.098033 containerd[1477]: time="2025-05-12T23:41:56.098005526Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 12 23:41:56.100563 containerd[1477]: time="2025-05-12T23:41:56.100527783Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 12 23:41:56.100907 containerd[1477]: time="2025-05-12T23:41:56.100751667Z" level=info msg="Start subscribing containerd event" May 12 23:41:56.100907 containerd[1477]: time="2025-05-12T23:41:56.100797305Z" level=info msg="Start recovering state" May 12 23:41:56.100907 containerd[1477]: time="2025-05-12T23:41:56.100878143Z" level=info msg="Start event monitor" May 12 23:41:56.101033 containerd[1477]: time="2025-05-12T23:41:56.101018512Z" level=info msg="Start snapshots syncer" May 12 23:41:56.101346 containerd[1477]: time="2025-05-12T23:41:56.101070320Z" level=info msg="Start cni network conf syncer for default" May 12 23:41:56.101346 containerd[1477]: time="2025-05-12T23:41:56.101082118Z" level=info msg="Start streaming server" May 12 23:41:56.101749 containerd[1477]: time="2025-05-12T23:41:56.101725091Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 12 23:41:56.101874 containerd[1477]: time="2025-05-12T23:41:56.101860066Z" level=info msg=serving... address=/run/containerd/containerd.sock May 12 23:41:56.104348 containerd[1477]: time="2025-05-12T23:41:56.104328496Z" level=info msg="containerd successfully booted in 0.044656s" May 12 23:41:56.104469 systemd[1]: Started containerd.service - containerd container runtime. May 12 23:41:56.198269 tar[1467]: linux-arm64/LICENSE May 12 23:41:56.198377 tar[1467]: linux-arm64/README.md May 12 23:41:56.214934 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 12 23:41:56.350620 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 12 23:41:56.538460 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 12 23:41:56.556806 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 12 23:41:56.567572 systemd[1]: Starting issuegen.service - Generate /run/issue... May 12 23:41:56.569619 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:48908.service - OpenSSH per-connection server daemon (10.0.0.1:48908). May 12 23:41:56.573481 systemd[1]: issuegen.service: Deactivated successfully. May 12 23:41:56.574380 systemd[1]: Finished issuegen.service - Generate /run/issue. May 12 23:41:56.578114 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 12 23:41:56.591356 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
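
The containerd entries above end with the daemon reporting "containerd successfully booted" and serving on /run/containerd/containerd.sock. As a rough illustration only (not part of the boot log), a minimal Go sketch that connects to that socket and queries the daemon version might look like the following; the "k8s.io" namespace is an assumption based on the CRI plugin being enabled, not something shown in the log.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket path taken from the "msg=serving... address=/run/containerd/containerd.sock" entry above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// "k8s.io" is the namespace the CRI plugin normally uses (assumption, not shown in the log).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	version, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("query version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", version.Version, version.Revision)
}
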
May 12 23:41:56.609790 systemd[1]: Started getty@tty1.service - Getty on tty1. May 12 23:41:56.612210 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 12 23:41:56.614353 systemd[1]: Reached target getty.target - Login Prompts. May 12 23:41:56.642580 sshd[1526]: Accepted publickey for core from 10.0.0.1 port 48908 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:41:56.644727 sshd-session[1526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:56.652533 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 12 23:41:56.670620 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 12 23:41:56.676863 systemd-logind[1455]: New session 1 of user core. May 12 23:41:56.682335 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 12 23:41:56.686905 systemd[1]: Starting user@500.service - User Manager for UID 500... May 12 23:41:56.700987 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 12 23:41:56.801550 systemd[1538]: Queued start job for default target default.target. May 12 23:41:56.811626 systemd[1538]: Created slice app.slice - User Application Slice. May 12 23:41:56.811656 systemd[1538]: Reached target paths.target - Paths. May 12 23:41:56.811668 systemd[1538]: Reached target timers.target - Timers. May 12 23:41:56.812992 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket... May 12 23:41:56.829082 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 12 23:41:56.829200 systemd[1538]: Reached target sockets.target - Sockets. May 12 23:41:56.829218 systemd[1538]: Reached target basic.target - Basic System. May 12 23:41:56.829265 systemd[1538]: Reached target default.target - Main User Target. May 12 23:41:56.829300 systemd[1538]: Startup finished in 122ms. May 12 23:41:56.829477 systemd[1]: Started user@500.service - User Manager for UID 500. May 12 23:41:56.832254 systemd[1]: Started session-1.scope - Session 1 of User core. May 12 23:41:56.862454 systemd-networkd[1374]: eth0: Gained IPv6LL May 12 23:41:56.865133 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 12 23:41:56.866982 systemd[1]: Reached target network-online.target - Network is Online. May 12 23:41:56.883561 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 12 23:41:56.887096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:41:56.890072 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 12 23:41:56.909402 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:48924.service - OpenSSH per-connection server daemon (10.0.0.1:48924). May 12 23:41:56.918537 systemd[1]: coreos-metadata.service: Deactivated successfully. May 12 23:41:56.920322 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 12 23:41:56.922920 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 12 23:41:56.928847 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 12 23:41:56.957470 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 48924 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:41:56.958842 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:56.963827 systemd-logind[1455]: New session 2 of user core. May 12 23:41:56.973490 systemd[1]: Started session-2.scope - Session 2 of User core. May 12 23:41:57.024601 sshd[1568]: Connection closed by 10.0.0.1 port 48924 May 12 23:41:57.025077 sshd-session[1558]: pam_unix(sshd:session): session closed for user core May 12 23:41:57.036837 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:48924.service: Deactivated successfully. May 12 23:41:57.038628 systemd[1]: session-2.scope: Deactivated successfully. May 12 23:41:57.040411 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. May 12 23:41:57.048762 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:48932.service - OpenSSH per-connection server daemon (10.0.0.1:48932). May 12 23:41:57.051689 systemd-logind[1455]: Removed session 2. May 12 23:41:57.089235 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 48932 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:41:57.090586 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:41:57.095480 systemd-logind[1455]: New session 3 of user core. May 12 23:41:57.114480 systemd[1]: Started session-3.scope - Session 3 of User core. May 12 23:41:57.165703 sshd[1575]: Connection closed by 10.0.0.1 port 48932 May 12 23:41:57.166316 sshd-session[1573]: pam_unix(sshd:session): session closed for user core May 12 23:41:57.168876 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:48932.service: Deactivated successfully. May 12 23:41:57.172425 systemd[1]: session-3.scope: Deactivated successfully. May 12 23:41:57.173535 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. May 12 23:41:57.174360 systemd-logind[1455]: Removed session 3. May 12 23:41:57.490695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:41:57.492231 systemd[1]: Reached target multi-user.target - Multi-User System. May 12 23:41:57.493936 systemd[1]: Startup finished in 643ms (kernel) + 10.236s (initrd) + 3.543s (userspace) = 14.422s. May 12 23:41:57.494913 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:41:57.957351 kubelet[1584]: E0512 23:41:57.957221 1584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:41:57.959760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:41:57.959903 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 23:42:07.023169 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:32828.service - OpenSSH per-connection server daemon (10.0.0.1:32828). 
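
The kubelet failure earlier in this stretch ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory", main process exit status 1) comes from a config file that has not been written yet; on a kubeadm-managed node that file is normally created during init/join. A stdlib-only Go sketch of the same existence check, shown purely as an illustration:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken verbatim from the kubelet error in the log above.
	const kubeletConfig = "/var/lib/kubelet/config.yaml"

	data, err := os.ReadFile(kubeletConfig)
	if err != nil {
		// On this node the file does not exist yet, matching the
		// "open /var/lib/kubelet/config.yaml: no such file or directory" error.
		fmt.Fprintf(os.Stderr, "kubelet config not readable: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("kubelet config present (%d bytes)\n", len(data))
}
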
May 12 23:42:07.066849 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 32828 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:42:07.068117 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:42:07.071943 systemd-logind[1455]: New session 4 of user core. May 12 23:42:07.092472 systemd[1]: Started session-4.scope - Session 4 of User core. May 12 23:42:07.144227 sshd[1599]: Connection closed by 10.0.0.1 port 32828 May 12 23:42:07.144594 sshd-session[1597]: pam_unix(sshd:session): session closed for user core May 12 23:42:07.157423 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:32828.service: Deactivated successfully. May 12 23:42:07.158797 systemd[1]: session-4.scope: Deactivated successfully. May 12 23:42:07.161383 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. May 12 23:42:07.172579 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:32836.service - OpenSSH per-connection server daemon (10.0.0.1:32836). May 12 23:42:07.173490 systemd-logind[1455]: Removed session 4. May 12 23:42:07.215244 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 32836 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:42:07.216532 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:42:07.220646 systemd-logind[1455]: New session 5 of user core. May 12 23:42:07.231468 systemd[1]: Started session-5.scope - Session 5 of User core. May 12 23:42:07.279666 sshd[1606]: Connection closed by 10.0.0.1 port 32836 May 12 23:42:07.280234 sshd-session[1604]: pam_unix(sshd:session): session closed for user core May 12 23:42:07.295774 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:32836.service: Deactivated successfully. May 12 23:42:07.297247 systemd[1]: session-5.scope: Deactivated successfully. May 12 23:42:07.299567 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. May 12 23:42:07.301105 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:32838.service - OpenSSH per-connection server daemon (10.0.0.1:32838). May 12 23:42:07.301849 systemd-logind[1455]: Removed session 5. May 12 23:42:07.347426 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 32838 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:42:07.348725 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:42:07.352353 systemd-logind[1455]: New session 6 of user core. May 12 23:42:07.365516 systemd[1]: Started session-6.scope - Session 6 of User core. May 12 23:42:07.416477 sshd[1613]: Connection closed by 10.0.0.1 port 32838 May 12 23:42:07.416837 sshd-session[1611]: pam_unix(sshd:session): session closed for user core May 12 23:42:07.432563 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:32838.service: Deactivated successfully. May 12 23:42:07.433860 systemd[1]: session-6.scope: Deactivated successfully. May 12 23:42:07.436363 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. May 12 23:42:07.437417 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:32846.service - OpenSSH per-connection server daemon (10.0.0.1:32846). May 12 23:42:07.438119 systemd-logind[1455]: Removed session 6. 
May 12 23:42:07.481555 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 32846 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:42:07.482803 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:42:07.486618 systemd-logind[1455]: New session 7 of user core. May 12 23:42:07.496520 systemd[1]: Started session-7.scope - Session 7 of User core. May 12 23:42:07.569719 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 12 23:42:07.570075 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 23:42:07.945544 systemd[1]: Starting docker.service - Docker Application Container Engine... May 12 23:42:07.946010 (dockerd)[1641]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 12 23:42:08.190479 dockerd[1641]: time="2025-05-12T23:42:08.190430180Z" level=info msg="Starting up" May 12 23:42:08.191653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 12 23:42:08.201264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:42:08.316346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:42:08.320396 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:42:08.359129 kubelet[1673]: E0512 23:42:08.359062 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:42:08.361423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:42:08.361558 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 23:42:08.366561 systemd[1]: var-lib-docker-metacopy\x2dcheck4101294247-merged.mount: Deactivated successfully. May 12 23:42:08.375649 dockerd[1641]: time="2025-05-12T23:42:08.375551932Z" level=info msg="Loading containers: start." May 12 23:42:08.515355 kernel: Initializing XFRM netlink socket May 12 23:42:08.580733 systemd-networkd[1374]: docker0: Link UP May 12 23:42:08.621494 dockerd[1641]: time="2025-05-12T23:42:08.621456647Z" level=info msg="Loading containers: done." May 12 23:42:08.636151 dockerd[1641]: time="2025-05-12T23:42:08.635806725Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 12 23:42:08.636151 dockerd[1641]: time="2025-05-12T23:42:08.635889823Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 12 23:42:08.636151 dockerd[1641]: time="2025-05-12T23:42:08.635977096Z" level=info msg="Daemon has completed initialization" May 12 23:42:08.663233 dockerd[1641]: time="2025-05-12T23:42:08.663179771Z" level=info msg="API listen on /run/docker.sock" May 12 23:42:08.663481 systemd[1]: Started docker.service - Docker Application Container Engine. May 12 23:42:09.354190 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck965848299-merged.mount: Deactivated successfully. 
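
Once dockerd reports "API listen on /run/docker.sock" (version 27.2.1, overlay2 storage driver per the daemon log above), the engine can be queried over that socket. A hedged Go sketch using the Docker Engine SDK follows; relying on client.FromEnv to resolve the default unix socket is an assumption about this host's environment, not something confirmed by the log.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to the default unix socket when DOCKER_HOST is unset.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatalf("create docker client: %v", err)
	}
	defer cli.Close()

	info, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatalf("query docker version: %v", err)
	}
	// Expected to report 27.2.1 on this host, per the daemon log.
	fmt.Printf("docker %s, API %s\n", info.Version, info.APIVersion)
}
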
May 12 23:42:09.420478 containerd[1477]: time="2025-05-12T23:42:09.420431052Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 12 23:42:10.174927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593715487.mount: Deactivated successfully. May 12 23:42:11.681253 containerd[1477]: time="2025-05-12T23:42:11.681139803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:11.682439 containerd[1477]: time="2025-05-12T23:42:11.682360754Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 12 23:42:11.683520 containerd[1477]: time="2025-05-12T23:42:11.683491629Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:11.687484 containerd[1477]: time="2025-05-12T23:42:11.687406028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:11.688233 containerd[1477]: time="2025-05-12T23:42:11.688185601Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.267703606s" May 12 23:42:11.688233 containerd[1477]: time="2025-05-12T23:42:11.688225758Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 12 23:42:11.688906 containerd[1477]: time="2025-05-12T23:42:11.688880236Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 12 23:42:13.083230 containerd[1477]: time="2025-05-12T23:42:13.083182722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:13.085125 containerd[1477]: time="2025-05-12T23:42:13.084947991Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 12 23:42:13.086289 containerd[1477]: time="2025-05-12T23:42:13.085905236Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:13.089117 containerd[1477]: time="2025-05-12T23:42:13.089063085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:13.091044 containerd[1477]: time="2025-05-12T23:42:13.090330452Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.401342121s" May 12 
23:42:13.091044 containerd[1477]: time="2025-05-12T23:42:13.090370249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 12 23:42:13.091044 containerd[1477]: time="2025-05-12T23:42:13.090917718Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 12 23:42:14.335444 containerd[1477]: time="2025-05-12T23:42:14.335392387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:14.336456 containerd[1477]: time="2025-05-12T23:42:14.336409001Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 12 23:42:14.337229 containerd[1477]: time="2025-05-12T23:42:14.337171940Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:14.340096 containerd[1477]: time="2025-05-12T23:42:14.340066523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:14.341483 containerd[1477]: time="2025-05-12T23:42:14.341221803Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.250273217s" May 12 23:42:14.341483 containerd[1477]: time="2025-05-12T23:42:14.341259262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 12 23:42:14.342092 containerd[1477]: time="2025-05-12T23:42:14.341968945Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 12 23:42:15.372882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758733262.mount: Deactivated successfully. 
May 12 23:42:15.679274 containerd[1477]: time="2025-05-12T23:42:15.679134726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:15.679948 containerd[1477]: time="2025-05-12T23:42:15.679826013Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 12 23:42:15.680774 containerd[1477]: time="2025-05-12T23:42:15.680721577Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:15.682486 containerd[1477]: time="2025-05-12T23:42:15.682433693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:15.683313 containerd[1477]: time="2025-05-12T23:42:15.683239589Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.341202584s" May 12 23:42:15.683313 containerd[1477]: time="2025-05-12T23:42:15.683289112Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 12 23:42:15.683786 containerd[1477]: time="2025-05-12T23:42:15.683751859Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 12 23:42:16.290113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1747164761.mount: Deactivated successfully. 
May 12 23:42:17.000558 containerd[1477]: time="2025-05-12T23:42:17.000499968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:17.001342 containerd[1477]: time="2025-05-12T23:42:17.001299756Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 12 23:42:17.001939 containerd[1477]: time="2025-05-12T23:42:17.001897835Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:17.006205 containerd[1477]: time="2025-05-12T23:42:17.006156657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:17.007319 containerd[1477]: time="2025-05-12T23:42:17.006936886Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.323142799s" May 12 23:42:17.007319 containerd[1477]: time="2025-05-12T23:42:17.006993983Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 12 23:42:17.007644 containerd[1477]: time="2025-05-12T23:42:17.007621648Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 12 23:42:17.453221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1190748711.mount: Deactivated successfully. 
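
The PullImage entries above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns) are issued through the CRI plugin inside containerd. For illustration only, the same kind of pull can be made directly against the containerd socket with the Go client; the coredns reference is taken from the log, while the "k8s.io" namespace is again an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Reference taken from the "PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" entries above.
	image, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	size, err := image.Size(ctx)
	if err != nil {
		log.Fatalf("size: %v", err)
	}
	fmt.Printf("pulled %s (%d bytes of content)\n", image.Name(), size)
}
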
May 12 23:42:17.459892 containerd[1477]: time="2025-05-12T23:42:17.459815187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:17.460786 containerd[1477]: time="2025-05-12T23:42:17.460734605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 12 23:42:17.462313 containerd[1477]: time="2025-05-12T23:42:17.462008023Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:17.464497 containerd[1477]: time="2025-05-12T23:42:17.464433319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:17.465325 containerd[1477]: time="2025-05-12T23:42:17.465264177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 457.613941ms" May 12 23:42:17.465371 containerd[1477]: time="2025-05-12T23:42:17.465324069Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 12 23:42:17.465962 containerd[1477]: time="2025-05-12T23:42:17.465928576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 12 23:42:18.002841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486055008.mount: Deactivated successfully. May 12 23:42:18.611829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 12 23:42:18.624517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:42:18.722127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:42:18.724671 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 23:42:18.764782 kubelet[2028]: E0512 23:42:18.764721 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 23:42:18.767650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 23:42:18.767946 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 12 23:42:21.012266 containerd[1477]: time="2025-05-12T23:42:21.012202545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:21.016065 containerd[1477]: time="2025-05-12T23:42:21.015998366Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 12 23:42:21.018594 containerd[1477]: time="2025-05-12T23:42:21.017345779Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:21.020449 containerd[1477]: time="2025-05-12T23:42:21.020400065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:21.022745 containerd[1477]: time="2025-05-12T23:42:21.022597858Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.556630186s" May 12 23:42:21.022745 containerd[1477]: time="2025-05-12T23:42:21.022641892Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 12 23:42:26.111407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:42:26.122782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:42:26.145856 systemd[1]: Reloading requested from client PID 2071 ('systemctl') (unit session-7.scope)... May 12 23:42:26.145872 systemd[1]: Reloading... May 12 23:42:26.219328 zram_generator::config[2110]: No configuration found. May 12 23:42:26.326603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:42:26.379419 systemd[1]: Reloading finished in 233 ms. May 12 23:42:26.418978 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:42:26.422613 systemd[1]: kubelet.service: Deactivated successfully. May 12 23:42:26.422847 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:42:26.424526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:42:26.517107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:42:26.521585 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 23:42:26.561426 kubelet[2157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 23:42:26.561426 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 12 23:42:26.561426 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 23:42:26.561778 kubelet[2157]: I0512 23:42:26.561689 2157 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 23:42:27.102698 kubelet[2157]: I0512 23:42:27.102650 2157 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 12 23:42:27.102698 kubelet[2157]: I0512 23:42:27.102687 2157 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 23:42:27.102972 kubelet[2157]: I0512 23:42:27.102947 2157 server.go:929] "Client rotation is on, will bootstrap in background" May 12 23:42:27.130477 kubelet[2157]: E0512 23:42:27.130436 2157 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" May 12 23:42:27.131628 kubelet[2157]: I0512 23:42:27.131589 2157 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 23:42:27.142640 kubelet[2157]: E0512 23:42:27.142587 2157 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 12 23:42:27.142640 kubelet[2157]: I0512 23:42:27.142631 2157 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 12 23:42:27.146063 kubelet[2157]: I0512 23:42:27.146036 2157 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 12 23:42:27.146737 kubelet[2157]: I0512 23:42:27.146707 2157 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 12 23:42:27.146887 kubelet[2157]: I0512 23:42:27.146851 2157 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 23:42:27.147078 kubelet[2157]: I0512 23:42:27.146884 2157 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 12 23:42:27.147216 kubelet[2157]: I0512 23:42:27.147205 2157 topology_manager.go:138] "Creating topology manager with none policy" May 12 23:42:27.147216 kubelet[2157]: I0512 23:42:27.147216 2157 container_manager_linux.go:300] "Creating device plugin manager" May 12 23:42:27.147426 kubelet[2157]: I0512 23:42:27.147404 2157 state_mem.go:36] "Initialized new in-memory state store" May 12 23:42:27.149155 kubelet[2157]: I0512 23:42:27.149125 2157 kubelet.go:408] "Attempting to sync node with API server" May 12 23:42:27.149183 kubelet[2157]: I0512 23:42:27.149156 2157 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 12 23:42:27.149262 kubelet[2157]: I0512 23:42:27.149246 2157 kubelet.go:314] "Adding apiserver pod source" May 12 23:42:27.149262 kubelet[2157]: I0512 23:42:27.149261 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 23:42:27.152155 kubelet[2157]: W0512 23:42:27.151107 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused May 12 23:42:27.152155 kubelet[2157]: E0512 23:42:27.151173 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" May 12 23:42:27.153037 kubelet[2157]: I0512 23:42:27.153005 2157 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 12 23:42:27.154406 kubelet[2157]: W0512 23:42:27.154338 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused May 12 23:42:27.154406 kubelet[2157]: E0512 23:42:27.154406 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" May 12 23:42:27.154866 kubelet[2157]: I0512 23:42:27.154835 2157 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 23:42:27.155586 kubelet[2157]: W0512 23:42:27.155536 2157 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 12 23:42:27.156238 kubelet[2157]: I0512 23:42:27.156215 2157 server.go:1269] "Started kubelet" May 12 23:42:27.157219 kubelet[2157]: I0512 23:42:27.157169 2157 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 23:42:27.157559 kubelet[2157]: I0512 23:42:27.157487 2157 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 23:42:27.157613 kubelet[2157]: I0512 23:42:27.157562 2157 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 12 23:42:27.158816 kubelet[2157]: I0512 23:42:27.158785 2157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 23:42:27.158816 kubelet[2157]: I0512 23:42:27.158795 2157 server.go:460] "Adding debug handlers to kubelet server" May 12 23:42:27.158899 kubelet[2157]: I0512 23:42:27.158839 2157 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 12 23:42:27.159961 kubelet[2157]: I0512 23:42:27.159942 2157 volume_manager.go:289] "Starting Kubelet Volume Manager" May 12 23:42:27.160031 kubelet[2157]: I0512 23:42:27.160019 2157 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 12 23:42:27.160074 kubelet[2157]: I0512 23:42:27.160068 2157 reconciler.go:26] "Reconciler: start to sync state" May 12 23:42:27.160550 kubelet[2157]: W0512 23:42:27.160407 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused May 12 23:42:27.160550 kubelet[2157]: E0512 23:42:27.160451 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" 
logger="UnhandledError" May 12 23:42:27.160654 kubelet[2157]: E0512 23:42:27.160580 2157 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 12 23:42:27.160756 kubelet[2157]: I0512 23:42:27.160734 2157 factory.go:221] Registration of the systemd container factory successfully May 12 23:42:27.160835 kubelet[2157]: I0512 23:42:27.160806 2157 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 23:42:27.162455 kubelet[2157]: I0512 23:42:27.162372 2157 factory.go:221] Registration of the containerd container factory successfully May 12 23:42:27.163213 kubelet[2157]: E0512 23:42:27.163166 2157 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 23:42:27.164191 kubelet[2157]: E0512 23:42:27.163149 2157 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eec248ab15ade default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-12 23:42:27.15618787 +0000 UTC m=+0.631282332,LastTimestamp:2025-05-12 23:42:27.15618787 +0000 UTC m=+0.631282332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 12 23:42:27.164290 kubelet[2157]: E0512 23:42:27.164251 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="200ms" May 12 23:42:27.176783 kubelet[2157]: I0512 23:42:27.176755 2157 cpu_manager.go:214] "Starting CPU manager" policy="none" May 12 23:42:27.176783 kubelet[2157]: I0512 23:42:27.176777 2157 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 12 23:42:27.176892 kubelet[2157]: I0512 23:42:27.176793 2157 state_mem.go:36] "Initialized new in-memory state store" May 12 23:42:27.178105 kubelet[2157]: I0512 23:42:27.178059 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 23:42:27.179343 kubelet[2157]: I0512 23:42:27.179208 2157 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 12 23:42:27.179343 kubelet[2157]: I0512 23:42:27.179235 2157 status_manager.go:217] "Starting to sync pod status with apiserver" May 12 23:42:27.179343 kubelet[2157]: I0512 23:42:27.179252 2157 kubelet.go:2321] "Starting kubelet main sync loop" May 12 23:42:27.179343 kubelet[2157]: E0512 23:42:27.179317 2157 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 23:42:27.179663 kubelet[2157]: I0512 23:42:27.179627 2157 policy_none.go:49] "None policy: Start" May 12 23:42:27.180678 kubelet[2157]: W0512 23:42:27.179962 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused May 12 23:42:27.180678 kubelet[2157]: E0512 23:42:27.180002 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" May 12 23:42:27.180764 kubelet[2157]: I0512 23:42:27.180732 2157 memory_manager.go:170] "Starting memorymanager" policy="None" May 12 23:42:27.180764 kubelet[2157]: I0512 23:42:27.180752 2157 state_mem.go:35] "Initializing new in-memory state store" May 12 23:42:27.187377 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 12 23:42:27.206082 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 12 23:42:27.209640 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 12 23:42:27.222327 kubelet[2157]: I0512 23:42:27.222267 2157 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 23:42:27.222783 kubelet[2157]: I0512 23:42:27.222751 2157 eviction_manager.go:189] "Eviction manager: starting control loop" May 12 23:42:27.222828 kubelet[2157]: I0512 23:42:27.222773 2157 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 23:42:27.223160 kubelet[2157]: I0512 23:42:27.223129 2157 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 23:42:27.224461 kubelet[2157]: E0512 23:42:27.224435 2157 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 12 23:42:27.287747 systemd[1]: Created slice kubepods-burstable-pod49f06b124d2ee413a7532fbe18d41b27.slice - libcontainer container kubepods-burstable-pod49f06b124d2ee413a7532fbe18d41b27.slice. May 12 23:42:27.320552 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. 
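
The repeated "dial tcp 10.0.0.28:6443: connect: connection refused" errors above are expected at this stage: the kubelet is only now creating the cgroup slices for the static control-plane pods, so nothing is listening on the API server port yet. A stdlib-only Go sketch of the same reachability probe, shown purely as an illustration:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Endpoint taken from the kubelet errors above.
	const apiServer = "10.0.0.28:6443"

	conn, err := net.DialTimeout("tcp", apiServer, 3*time.Second)
	if err != nil {
		// Expected while the kube-apiserver static pod has not come up yet.
		fmt.Fprintf(os.Stderr, "apiserver not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("apiserver TCP port is accepting connections")
}
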
May 12 23:42:27.324701 kubelet[2157]: I0512 23:42:27.324273 2157 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 12 23:42:27.324701 kubelet[2157]: E0512 23:42:27.324667 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" May 12 23:42:27.338875 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 12 23:42:27.364902 kubelet[2157]: E0512 23:42:27.364786 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" May 12 23:42:27.461174 kubelet[2157]: I0512 23:42:27.461136 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49f06b124d2ee413a7532fbe18d41b27-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49f06b124d2ee413a7532fbe18d41b27\") " pod="kube-system/kube-apiserver-localhost" May 12 23:42:27.461174 kubelet[2157]: I0512 23:42:27.461179 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49f06b124d2ee413a7532fbe18d41b27-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49f06b124d2ee413a7532fbe18d41b27\") " pod="kube-system/kube-apiserver-localhost" May 12 23:42:27.461315 kubelet[2157]: I0512 23:42:27.461197 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:42:27.461315 kubelet[2157]: I0512 23:42:27.461214 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:42:27.461315 kubelet[2157]: I0512 23:42:27.461235 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:42:27.461315 kubelet[2157]: I0512 23:42:27.461249 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 12 23:42:27.461315 kubelet[2157]: I0512 23:42:27.461266 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49f06b124d2ee413a7532fbe18d41b27-usr-share-ca-certificates\") 
pod \"kube-apiserver-localhost\" (UID: \"49f06b124d2ee413a7532fbe18d41b27\") " pod="kube-system/kube-apiserver-localhost" May 12 23:42:27.461422 kubelet[2157]: I0512 23:42:27.461301 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:42:27.461422 kubelet[2157]: I0512 23:42:27.461318 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:42:27.526325 kubelet[2157]: I0512 23:42:27.526296 2157 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 12 23:42:27.526684 kubelet[2157]: E0512 23:42:27.526638 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" May 12 23:42:27.619200 kubelet[2157]: E0512 23:42:27.619095 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:27.619908 containerd[1477]: time="2025-05-12T23:42:27.619853744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49f06b124d2ee413a7532fbe18d41b27,Namespace:kube-system,Attempt:0,}" May 12 23:42:27.637127 kubelet[2157]: E0512 23:42:27.637090 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:27.637631 containerd[1477]: time="2025-05-12T23:42:27.637592080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 12 23:42:27.641373 kubelet[2157]: E0512 23:42:27.641270 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:27.641730 containerd[1477]: time="2025-05-12T23:42:27.641697850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 12 23:42:27.765823 kubelet[2157]: E0512 23:42:27.765769 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" May 12 23:42:27.928572 kubelet[2157]: I0512 23:42:27.928455 2157 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 12 23:42:27.928839 kubelet[2157]: E0512 23:42:27.928791 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" May 12 23:42:28.020861 kubelet[2157]: W0512 23:42:28.020751 2157 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused May 12 23:42:28.020861 kubelet[2157]: E0512 23:42:28.020821 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" May 12 23:42:28.118614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1830868491.mount: Deactivated successfully. May 12 23:42:28.124266 containerd[1477]: time="2025-05-12T23:42:28.124221204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:42:28.125580 containerd[1477]: time="2025-05-12T23:42:28.125446788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 12 23:42:28.130580 containerd[1477]: time="2025-05-12T23:42:28.130515380Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:42:28.131984 containerd[1477]: time="2025-05-12T23:42:28.131944742Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:42:28.132709 containerd[1477]: time="2025-05-12T23:42:28.132666043Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:42:28.133129 containerd[1477]: time="2025-05-12T23:42:28.133016913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 12 23:42:28.134007 containerd[1477]: time="2025-05-12T23:42:28.133976875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 12 23:42:28.136867 containerd[1477]: time="2025-05-12T23:42:28.136223226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 23:42:28.138011 containerd[1477]: time="2025-05-12T23:42:28.137526137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.582029ms" May 12 23:42:28.139119 containerd[1477]: time="2025-05-12T23:42:28.139054588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.199207ms" May 12 
23:42:28.143766 containerd[1477]: time="2025-05-12T23:42:28.143409239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 505.735352ms" May 12 23:42:28.151962 kubelet[2157]: W0512 23:42:28.151884 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused May 12 23:42:28.151962 kubelet[2157]: E0512 23:42:28.151957 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" May 12 23:42:28.312975 containerd[1477]: time="2025-05-12T23:42:28.312680658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:42:28.312975 containerd[1477]: time="2025-05-12T23:42:28.312751504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:42:28.312975 containerd[1477]: time="2025-05-12T23:42:28.312768906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:28.313199 containerd[1477]: time="2025-05-12T23:42:28.312847392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:28.317095 containerd[1477]: time="2025-05-12T23:42:28.316980784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:42:28.317095 containerd[1477]: time="2025-05-12T23:42:28.317048670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:42:28.317095 containerd[1477]: time="2025-05-12T23:42:28.317062151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:28.317290 containerd[1477]: time="2025-05-12T23:42:28.317137878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:28.323510 containerd[1477]: time="2025-05-12T23:42:28.323169392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:42:28.323510 containerd[1477]: time="2025-05-12T23:42:28.323236157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:42:28.323510 containerd[1477]: time="2025-05-12T23:42:28.323248398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:28.323510 containerd[1477]: time="2025-05-12T23:42:28.323336846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:28.341491 systemd[1]: Started cri-containerd-8b1ae88a6771bce62c5bc87f54aab6a88218c02d30124c6f8c8e3d95eaf4322e.scope - libcontainer container 8b1ae88a6771bce62c5bc87f54aab6a88218c02d30124c6f8c8e3d95eaf4322e. May 12 23:42:28.345759 systemd[1]: Started cri-containerd-4f5cab59099a526d015607d21d05de370b11fb3fb1d8da0b573672316c7f9f85.scope - libcontainer container 4f5cab59099a526d015607d21d05de370b11fb3fb1d8da0b573672316c7f9f85. May 12 23:42:28.346946 systemd[1]: Started cri-containerd-be4c60908ff624249ed80c541d328ee8f33c30f5faee460d64f82e5d6644452a.scope - libcontainer container be4c60908ff624249ed80c541d328ee8f33c30f5faee460d64f82e5d6644452a. May 12 23:42:28.386702 containerd[1477]: time="2025-05-12T23:42:28.386661840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49f06b124d2ee413a7532fbe18d41b27,Namespace:kube-system,Attempt:0,} returns sandbox id \"be4c60908ff624249ed80c541d328ee8f33c30f5faee460d64f82e5d6644452a\"" May 12 23:42:28.389653 containerd[1477]: time="2025-05-12T23:42:28.388355185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b1ae88a6771bce62c5bc87f54aab6a88218c02d30124c6f8c8e3d95eaf4322e\"" May 12 23:42:28.391086 kubelet[2157]: E0512 23:42:28.391059 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:28.391287 containerd[1477]: time="2025-05-12T23:42:28.391169264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f5cab59099a526d015607d21d05de370b11fb3fb1d8da0b573672316c7f9f85\"" May 12 23:42:28.391860 kubelet[2157]: E0512 23:42:28.391727 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:28.393061 kubelet[2157]: E0512 23:42:28.392954 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:28.395314 containerd[1477]: time="2025-05-12T23:42:28.395133482Z" level=info msg="CreateContainer within sandbox \"8b1ae88a6771bce62c5bc87f54aab6a88218c02d30124c6f8c8e3d95eaf4322e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 12 23:42:28.395373 containerd[1477]: time="2025-05-12T23:42:28.395320258Z" level=info msg="CreateContainer within sandbox \"be4c60908ff624249ed80c541d328ee8f33c30f5faee460d64f82e5d6644452a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 12 23:42:28.396683 containerd[1477]: time="2025-05-12T23:42:28.396587166Z" level=info msg="CreateContainer within sandbox \"4f5cab59099a526d015607d21d05de370b11fb3fb1d8da0b573672316c7f9f85\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 12 23:42:28.417749 containerd[1477]: time="2025-05-12T23:42:28.417531190Z" level=info msg="CreateContainer within sandbox 
\"4f5cab59099a526d015607d21d05de370b11fb3fb1d8da0b573672316c7f9f85\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"453bfcc93e3ebe29879744136bd1c4de76fe8613ec8272b36e81b5ed7b2e699a\"" May 12 23:42:28.418464 containerd[1477]: time="2025-05-12T23:42:28.418264012Z" level=info msg="StartContainer for \"453bfcc93e3ebe29879744136bd1c4de76fe8613ec8272b36e81b5ed7b2e699a\"" May 12 23:42:28.423968 containerd[1477]: time="2025-05-12T23:42:28.423912094Z" level=info msg="CreateContainer within sandbox \"8b1ae88a6771bce62c5bc87f54aab6a88218c02d30124c6f8c8e3d95eaf4322e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e37e81502a776f61f363b3bc0173561a29780c1d97c05e3fa2839157a8d78546\"" May 12 23:42:28.425045 containerd[1477]: time="2025-05-12T23:42:28.424506304Z" level=info msg="StartContainer for \"e37e81502a776f61f363b3bc0173561a29780c1d97c05e3fa2839157a8d78546\"" May 12 23:42:28.426580 containerd[1477]: time="2025-05-12T23:42:28.426463831Z" level=info msg="CreateContainer within sandbox \"be4c60908ff624249ed80c541d328ee8f33c30f5faee460d64f82e5d6644452a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"581962efb816bd5fe3945f15d9ed86f6bfd6260ae1754ccc26f6ea42a2b7ee32\"" May 12 23:42:28.427062 containerd[1477]: time="2025-05-12T23:42:28.427028799Z" level=info msg="StartContainer for \"581962efb816bd5fe3945f15d9ed86f6bfd6260ae1754ccc26f6ea42a2b7ee32\"" May 12 23:42:28.447533 systemd[1]: Started cri-containerd-453bfcc93e3ebe29879744136bd1c4de76fe8613ec8272b36e81b5ed7b2e699a.scope - libcontainer container 453bfcc93e3ebe29879744136bd1c4de76fe8613ec8272b36e81b5ed7b2e699a. May 12 23:42:28.451062 systemd[1]: Started cri-containerd-581962efb816bd5fe3945f15d9ed86f6bfd6260ae1754ccc26f6ea42a2b7ee32.scope - libcontainer container 581962efb816bd5fe3945f15d9ed86f6bfd6260ae1754ccc26f6ea42a2b7ee32. May 12 23:42:28.456758 systemd[1]: Started cri-containerd-e37e81502a776f61f363b3bc0173561a29780c1d97c05e3fa2839157a8d78546.scope - libcontainer container e37e81502a776f61f363b3bc0173561a29780c1d97c05e3fa2839157a8d78546. 
May 12 23:42:28.498912 containerd[1477]: time="2025-05-12T23:42:28.498867439Z" level=info msg="StartContainer for \"453bfcc93e3ebe29879744136bd1c4de76fe8613ec8272b36e81b5ed7b2e699a\" returns successfully" May 12 23:42:28.540274 containerd[1477]: time="2025-05-12T23:42:28.540218121Z" level=info msg="StartContainer for \"e37e81502a776f61f363b3bc0173561a29780c1d97c05e3fa2839157a8d78546\" returns successfully" May 12 23:42:28.540420 containerd[1477]: time="2025-05-12T23:42:28.540337051Z" level=info msg="StartContainer for \"581962efb816bd5fe3945f15d9ed86f6bfd6260ae1754ccc26f6ea42a2b7ee32\" returns successfully" May 12 23:42:28.567047 kubelet[2157]: E0512 23:42:28.566857 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="1.6s" May 12 23:42:28.687930 kubelet[2157]: W0512 23:42:28.687855 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused May 12 23:42:28.688271 kubelet[2157]: E0512 23:42:28.687936 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" May 12 23:42:28.716706 kubelet[2157]: W0512 23:42:28.716632 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused May 12 23:42:28.716847 kubelet[2157]: E0512 23:42:28.716714 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" May 12 23:42:28.730134 kubelet[2157]: I0512 23:42:28.730099 2157 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 12 23:42:28.730491 kubelet[2157]: E0512 23:42:28.730445 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" May 12 23:42:29.188533 kubelet[2157]: E0512 23:42:29.188486 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:29.190511 kubelet[2157]: E0512 23:42:29.190486 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:29.193307 kubelet[2157]: E0512 23:42:29.193261 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:30.194628 kubelet[2157]: E0512 23:42:30.194594 2157 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:30.195169 kubelet[2157]: E0512 23:42:30.194626 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:30.332727 kubelet[2157]: I0512 23:42:30.332367 2157 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 12 23:42:30.705184 kubelet[2157]: E0512 23:42:30.705120 2157 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 12 23:42:30.842764 kubelet[2157]: I0512 23:42:30.842552 2157 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 12 23:42:30.842764 kubelet[2157]: E0512 23:42:30.842597 2157 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 12 23:42:31.152629 kubelet[2157]: I0512 23:42:31.151270 2157 apiserver.go:52] "Watching apiserver" May 12 23:42:31.160288 kubelet[2157]: I0512 23:42:31.160254 2157 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 12 23:42:31.743816 kubelet[2157]: E0512 23:42:31.743738 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:32.197477 kubelet[2157]: E0512 23:42:32.197323 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:33.056776 systemd[1]: Reloading requested from client PID 2434 ('systemctl') (unit session-7.scope)... May 12 23:42:33.056795 systemd[1]: Reloading... May 12 23:42:33.139435 zram_generator::config[2473]: No configuration found. May 12 23:42:33.334313 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 23:42:33.399831 systemd[1]: Reloading finished in 342 ms. May 12 23:42:33.434836 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:42:33.442667 systemd[1]: kubelet.service: Deactivated successfully. May 12 23:42:33.442899 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:42:33.442962 systemd[1]: kubelet.service: Consumed 1.006s CPU time, 119.8M memory peak, 0B memory swap peak. May 12 23:42:33.452576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 23:42:33.548010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 23:42:33.553048 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 23:42:33.594015 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 23:42:33.594015 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 12 23:42:33.594015 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 23:42:33.594387 kubelet[2515]: I0512 23:42:33.594000 2515 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 23:42:33.601901 kubelet[2515]: I0512 23:42:33.600137 2515 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 12 23:42:33.601901 kubelet[2515]: I0512 23:42:33.600174 2515 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 23:42:33.601901 kubelet[2515]: I0512 23:42:33.600450 2515 server.go:929] "Client rotation is on, will bootstrap in background" May 12 23:42:33.602096 kubelet[2515]: I0512 23:42:33.601933 2515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 12 23:42:33.603984 kubelet[2515]: I0512 23:42:33.603952 2515 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 23:42:33.609733 kubelet[2515]: E0512 23:42:33.609666 2515 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 12 23:42:33.609733 kubelet[2515]: I0512 23:42:33.609715 2515 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 12 23:42:33.612472 kubelet[2515]: I0512 23:42:33.612405 2515 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 12 23:42:33.612680 kubelet[2515]: I0512 23:42:33.612536 2515 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 12 23:42:33.612680 kubelet[2515]: I0512 23:42:33.612647 2515 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 23:42:33.612857 kubelet[2515]: I0512 23:42:33.612672 2515 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 12 23:42:33.612935 kubelet[2515]: I0512 23:42:33.612866 2515 topology_manager.go:138] "Creating topology manager with none policy" May 12 23:42:33.612935 kubelet[2515]: I0512 23:42:33.612875 2515 container_manager_linux.go:300] "Creating device plugin manager" May 12 23:42:33.612935 kubelet[2515]: I0512 23:42:33.612905 2515 state_mem.go:36] "Initialized new in-memory state store" May 12 23:42:33.613128 kubelet[2515]: I0512 23:42:33.613025 2515 kubelet.go:408] "Attempting to sync node with API server" May 12 23:42:33.613128 kubelet[2515]: I0512 23:42:33.613125 2515 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 12 23:42:33.613204 kubelet[2515]: I0512 23:42:33.613155 2515 kubelet.go:314] "Adding apiserver pod source" May 12 23:42:33.613204 kubelet[2515]: I0512 23:42:33.613181 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 23:42:33.623457 kubelet[2515]: I0512 23:42:33.615260 2515 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 12 23:42:33.623457 kubelet[2515]: I0512 23:42:33.615791 2515 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 23:42:33.623457 kubelet[2515]: I0512 23:42:33.616206 2515 server.go:1269] "Started kubelet" May 12 23:42:33.623457 kubelet[2515]: I0512 23:42:33.620480 2515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 
23:42:33.623457 kubelet[2515]: I0512 23:42:33.620828 2515 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 23:42:33.623457 kubelet[2515]: I0512 23:42:33.621897 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 23:42:33.629518 kubelet[2515]: I0512 23:42:33.629472 2515 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 12 23:42:33.629767 kubelet[2515]: I0512 23:42:33.629710 2515 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 12 23:42:33.631029 kubelet[2515]: I0512 23:42:33.631003 2515 server.go:460] "Adding debug handlers to kubelet server" May 12 23:42:33.632664 kubelet[2515]: E0512 23:42:33.632176 2515 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 23:42:33.633370 kubelet[2515]: E0512 23:42:33.631073 2515 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 12 23:42:33.637269 kubelet[2515]: I0512 23:42:33.636867 2515 factory.go:221] Registration of the systemd container factory successfully May 12 23:42:33.637269 kubelet[2515]: I0512 23:42:33.637015 2515 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 23:42:33.637737 kubelet[2515]: I0512 23:42:33.637701 2515 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 12 23:42:33.642531 kubelet[2515]: I0512 23:42:33.642398 2515 volume_manager.go:289] "Starting Kubelet Volume Manager" May 12 23:42:33.646360 kubelet[2515]: I0512 23:42:33.645711 2515 reconciler.go:26] "Reconciler: start to sync state" May 12 23:42:33.647490 kubelet[2515]: I0512 23:42:33.647462 2515 factory.go:221] Registration of the containerd container factory successfully May 12 23:42:33.652197 kubelet[2515]: I0512 23:42:33.652140 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 23:42:33.654114 kubelet[2515]: I0512 23:42:33.654048 2515 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 12 23:42:33.654232 kubelet[2515]: I0512 23:42:33.654145 2515 status_manager.go:217] "Starting to sync pod status with apiserver" May 12 23:42:33.654232 kubelet[2515]: I0512 23:42:33.654175 2515 kubelet.go:2321] "Starting kubelet main sync loop" May 12 23:42:33.654309 kubelet[2515]: E0512 23:42:33.654225 2515 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 23:42:33.692172 kubelet[2515]: I0512 23:42:33.692145 2515 cpu_manager.go:214] "Starting CPU manager" policy="none" May 12 23:42:33.692668 kubelet[2515]: I0512 23:42:33.692358 2515 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 12 23:42:33.692668 kubelet[2515]: I0512 23:42:33.692387 2515 state_mem.go:36] "Initialized new in-memory state store" May 12 23:42:33.692668 kubelet[2515]: I0512 23:42:33.692543 2515 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 12 23:42:33.692668 kubelet[2515]: I0512 23:42:33.692555 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 12 23:42:33.692668 kubelet[2515]: I0512 23:42:33.692573 2515 policy_none.go:49] "None policy: Start" May 12 23:42:33.693627 kubelet[2515]: I0512 23:42:33.693608 2515 memory_manager.go:170] "Starting memorymanager" policy="None" May 12 23:42:33.693738 kubelet[2515]: I0512 23:42:33.693728 2515 state_mem.go:35] "Initializing new in-memory state store" May 12 23:42:33.694092 kubelet[2515]: I0512 23:42:33.694074 2515 state_mem.go:75] "Updated machine memory state" May 12 23:42:33.699119 kubelet[2515]: I0512 23:42:33.699089 2515 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 23:42:33.699793 kubelet[2515]: I0512 23:42:33.699623 2515 eviction_manager.go:189] "Eviction manager: starting control loop" May 12 23:42:33.699793 kubelet[2515]: I0512 23:42:33.699642 2515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 23:42:33.699915 kubelet[2515]: I0512 23:42:33.699847 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 23:42:33.762492 kubelet[2515]: E0512 23:42:33.762400 2515 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 12 23:42:33.805019 kubelet[2515]: I0512 23:42:33.804988 2515 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 12 23:42:33.813115 kubelet[2515]: I0512 23:42:33.813070 2515 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 12 23:42:33.813312 kubelet[2515]: I0512 23:42:33.813292 2515 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 12 23:42:33.846980 kubelet[2515]: I0512 23:42:33.846864 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 12 23:42:33.846980 kubelet[2515]: I0512 23:42:33.846910 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49f06b124d2ee413a7532fbe18d41b27-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49f06b124d2ee413a7532fbe18d41b27\") " 
pod="kube-system/kube-apiserver-localhost" May 12 23:42:33.846980 kubelet[2515]: I0512 23:42:33.846943 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49f06b124d2ee413a7532fbe18d41b27-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49f06b124d2ee413a7532fbe18d41b27\") " pod="kube-system/kube-apiserver-localhost" May 12 23:42:33.846980 kubelet[2515]: I0512 23:42:33.846964 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:42:33.847155 kubelet[2515]: I0512 23:42:33.846992 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:42:33.847155 kubelet[2515]: I0512 23:42:33.847009 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49f06b124d2ee413a7532fbe18d41b27-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49f06b124d2ee413a7532fbe18d41b27\") " pod="kube-system/kube-apiserver-localhost" May 12 23:42:33.847155 kubelet[2515]: I0512 23:42:33.847050 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:42:33.847155 kubelet[2515]: I0512 23:42:33.847066 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:42:33.847155 kubelet[2515]: I0512 23:42:33.847083 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 12 23:42:34.061777 kubelet[2515]: E0512 23:42:34.061702 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:34.061777 kubelet[2515]: E0512 23:42:34.061703 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:34.062954 kubelet[2515]: E0512 23:42:34.062888 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:34.615577 kubelet[2515]: I0512 23:42:34.615223 2515 apiserver.go:52] "Watching apiserver" May 12 23:42:34.630539 kubelet[2515]: I0512 23:42:34.630472 2515 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 12 23:42:34.676572 kubelet[2515]: E0512 23:42:34.675996 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:34.677020 kubelet[2515]: E0512 23:42:34.677000 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:34.684114 kubelet[2515]: E0512 23:42:34.684089 2515 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 12 23:42:34.684564 kubelet[2515]: E0512 23:42:34.684544 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:34.706968 kubelet[2515]: I0512 23:42:34.706915 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.7068983859999998 podStartE2EDuration="3.706898386s" podCreationTimestamp="2025-05-12 23:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:42:34.700252621 +0000 UTC m=+1.143711830" watchObservedRunningTime="2025-05-12 23:42:34.706898386 +0000 UTC m=+1.150357595" May 12 23:42:34.715254 kubelet[2515]: I0512 23:42:34.714987 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.714974998 podStartE2EDuration="1.714974998s" podCreationTimestamp="2025-05-12 23:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:42:34.714902473 +0000 UTC m=+1.158361642" watchObservedRunningTime="2025-05-12 23:42:34.714974998 +0000 UTC m=+1.158434207" May 12 23:42:34.715254 kubelet[2515]: I0512 23:42:34.715166 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.715158129 podStartE2EDuration="1.715158129s" podCreationTimestamp="2025-05-12 23:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:42:34.707770959 +0000 UTC m=+1.151230168" watchObservedRunningTime="2025-05-12 23:42:34.715158129 +0000 UTC m=+1.158617378" May 12 23:42:35.173349 sudo[1621]: pam_unix(sudo:session): session closed for user root May 12 23:42:35.175668 sshd[1620]: Connection closed by 10.0.0.1 port 32846 May 12 23:42:35.175135 sshd-session[1618]: pam_unix(sshd:session): session closed for user core May 12 23:42:35.178559 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:32846.service: Deactivated successfully. May 12 23:42:35.180534 systemd[1]: session-7.scope: Deactivated successfully. May 12 23:42:35.180967 systemd[1]: session-7.scope: Consumed 6.164s CPU time, 157.3M memory peak, 0B memory swap peak. May 12 23:42:35.182472 systemd-logind[1455]: Session 7 logged out. 
Waiting for processes to exit. May 12 23:42:35.183295 systemd-logind[1455]: Removed session 7. May 12 23:42:35.677039 kubelet[2515]: E0512 23:42:35.676934 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:35.677039 kubelet[2515]: E0512 23:42:35.676972 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:35.677039 kubelet[2515]: E0512 23:42:35.677024 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:39.444433 kubelet[2515]: I0512 23:42:39.444400 2515 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 12 23:42:39.444826 containerd[1477]: time="2025-05-12T23:42:39.444726106Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 12 23:42:39.445742 kubelet[2515]: I0512 23:42:39.445167 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 12 23:42:40.060871 systemd[1]: Created slice kubepods-besteffort-pod447843bf_bd15_42a1_a0a5_eeccdd3878bc.slice - libcontainer container kubepods-besteffort-pod447843bf_bd15_42a1_a0a5_eeccdd3878bc.slice. May 12 23:42:40.084593 systemd[1]: Created slice kubepods-burstable-poda97d7643_76b4_4238_9319_b576717ece1b.slice - libcontainer container kubepods-burstable-poda97d7643_76b4_4238_9319_b576717ece1b.slice. May 12 23:42:40.088998 kubelet[2515]: I0512 23:42:40.088941 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a97d7643-76b4-4238-9319-b576717ece1b-run\") pod \"kube-flannel-ds-txvh6\" (UID: \"a97d7643-76b4-4238-9319-b576717ece1b\") " pod="kube-flannel/kube-flannel-ds-txvh6" May 12 23:42:40.088998 kubelet[2515]: I0512 23:42:40.088986 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a97d7643-76b4-4238-9319-b576717ece1b-cni-plugin\") pod \"kube-flannel-ds-txvh6\" (UID: \"a97d7643-76b4-4238-9319-b576717ece1b\") " pod="kube-flannel/kube-flannel-ds-txvh6" May 12 23:42:40.088998 kubelet[2515]: I0512 23:42:40.089007 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a97d7643-76b4-4238-9319-b576717ece1b-flannel-cfg\") pod \"kube-flannel-ds-txvh6\" (UID: \"a97d7643-76b4-4238-9319-b576717ece1b\") " pod="kube-flannel/kube-flannel-ds-txvh6" May 12 23:42:40.089154 kubelet[2515]: I0512 23:42:40.089024 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fqv8\" (UniqueName: \"kubernetes.io/projected/a97d7643-76b4-4238-9319-b576717ece1b-kube-api-access-5fqv8\") pod \"kube-flannel-ds-txvh6\" (UID: \"a97d7643-76b4-4238-9319-b576717ece1b\") " pod="kube-flannel/kube-flannel-ds-txvh6" May 12 23:42:40.089154 kubelet[2515]: I0512 23:42:40.089041 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/447843bf-bd15-42a1-a0a5-eeccdd3878bc-xtables-lock\") pod 
\"kube-proxy-x6t9d\" (UID: \"447843bf-bd15-42a1-a0a5-eeccdd3878bc\") " pod="kube-system/kube-proxy-x6t9d" May 12 23:42:40.089154 kubelet[2515]: I0512 23:42:40.089060 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a97d7643-76b4-4238-9319-b576717ece1b-xtables-lock\") pod \"kube-flannel-ds-txvh6\" (UID: \"a97d7643-76b4-4238-9319-b576717ece1b\") " pod="kube-flannel/kube-flannel-ds-txvh6" May 12 23:42:40.089154 kubelet[2515]: I0512 23:42:40.089075 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/447843bf-bd15-42a1-a0a5-eeccdd3878bc-kube-proxy\") pod \"kube-proxy-x6t9d\" (UID: \"447843bf-bd15-42a1-a0a5-eeccdd3878bc\") " pod="kube-system/kube-proxy-x6t9d" May 12 23:42:40.089154 kubelet[2515]: I0512 23:42:40.089090 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8xvw\" (UniqueName: \"kubernetes.io/projected/447843bf-bd15-42a1-a0a5-eeccdd3878bc-kube-api-access-v8xvw\") pod \"kube-proxy-x6t9d\" (UID: \"447843bf-bd15-42a1-a0a5-eeccdd3878bc\") " pod="kube-system/kube-proxy-x6t9d" May 12 23:42:40.089358 kubelet[2515]: I0512 23:42:40.089106 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a97d7643-76b4-4238-9319-b576717ece1b-cni\") pod \"kube-flannel-ds-txvh6\" (UID: \"a97d7643-76b4-4238-9319-b576717ece1b\") " pod="kube-flannel/kube-flannel-ds-txvh6" May 12 23:42:40.089358 kubelet[2515]: I0512 23:42:40.089120 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/447843bf-bd15-42a1-a0a5-eeccdd3878bc-lib-modules\") pod \"kube-proxy-x6t9d\" (UID: \"447843bf-bd15-42a1-a0a5-eeccdd3878bc\") " pod="kube-system/kube-proxy-x6t9d" May 12 23:42:40.197576 kubelet[2515]: E0512 23:42:40.197523 2515 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 12 23:42:40.197576 kubelet[2515]: E0512 23:42:40.197566 2515 projected.go:194] Error preparing data for projected volume kube-api-access-5fqv8 for pod kube-flannel/kube-flannel-ds-txvh6: configmap "kube-root-ca.crt" not found May 12 23:42:40.197865 kubelet[2515]: E0512 23:42:40.197625 2515 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a97d7643-76b4-4238-9319-b576717ece1b-kube-api-access-5fqv8 podName:a97d7643-76b4-4238-9319-b576717ece1b nodeName:}" failed. No retries permitted until 2025-05-12 23:42:40.697604172 +0000 UTC m=+7.141063381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5fqv8" (UniqueName: "kubernetes.io/projected/a97d7643-76b4-4238-9319-b576717ece1b-kube-api-access-5fqv8") pod "kube-flannel-ds-txvh6" (UID: "a97d7643-76b4-4238-9319-b576717ece1b") : configmap "kube-root-ca.crt" not found May 12 23:42:40.197865 kubelet[2515]: E0512 23:42:40.197538 2515 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 12 23:42:40.197865 kubelet[2515]: E0512 23:42:40.197679 2515 projected.go:194] Error preparing data for projected volume kube-api-access-v8xvw for pod kube-system/kube-proxy-x6t9d: configmap "kube-root-ca.crt" not found May 12 23:42:40.198055 kubelet[2515]: E0512 23:42:40.198033 2515 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/447843bf-bd15-42a1-a0a5-eeccdd3878bc-kube-api-access-v8xvw podName:447843bf-bd15-42a1-a0a5-eeccdd3878bc nodeName:}" failed. No retries permitted until 2025-05-12 23:42:40.697711897 +0000 UTC m=+7.141171106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v8xvw" (UniqueName: "kubernetes.io/projected/447843bf-bd15-42a1-a0a5-eeccdd3878bc-kube-api-access-v8xvw") pod "kube-proxy-x6t9d" (UID: "447843bf-bd15-42a1-a0a5-eeccdd3878bc") : configmap "kube-root-ca.crt" not found May 12 23:42:40.981917 kubelet[2515]: E0512 23:42:40.981877 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:40.983237 containerd[1477]: time="2025-05-12T23:42:40.983203709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x6t9d,Uid:447843bf-bd15-42a1-a0a5-eeccdd3878bc,Namespace:kube-system,Attempt:0,}" May 12 23:42:40.990425 kubelet[2515]: E0512 23:42:40.990389 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:40.990949 containerd[1477]: time="2025-05-12T23:42:40.990906611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-txvh6,Uid:a97d7643-76b4-4238-9319-b576717ece1b,Namespace:kube-flannel,Attempt:0,}" May 12 23:42:41.033188 containerd[1477]: time="2025-05-12T23:42:41.033016728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:42:41.033394 containerd[1477]: time="2025-05-12T23:42:41.033228897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:42:41.033394 containerd[1477]: time="2025-05-12T23:42:41.033266019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:41.033444 containerd[1477]: time="2025-05-12T23:42:41.033415305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:41.041606 containerd[1477]: time="2025-05-12T23:42:41.041447364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:42:41.041606 containerd[1477]: time="2025-05-12T23:42:41.041536488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:42:41.041935 containerd[1477]: time="2025-05-12T23:42:41.041769378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:41.042486 containerd[1477]: time="2025-05-12T23:42:41.042420925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:41.061516 systemd[1]: Started cri-containerd-8f059400e9028cba51fc1ad17d2069e055adec9712ecec1ad7f02f833db6ab07.scope - libcontainer container 8f059400e9028cba51fc1ad17d2069e055adec9712ecec1ad7f02f833db6ab07. May 12 23:42:41.064939 systemd[1]: Started cri-containerd-905cd7cb0a2c4b15ddfb89ece1f450ed5e0bcfd60fdbd68607941c746f60a688.scope - libcontainer container 905cd7cb0a2c4b15ddfb89ece1f450ed5e0bcfd60fdbd68607941c746f60a688. May 12 23:42:41.074323 update_engine[1458]: I20250512 23:42:41.071853 1458 update_attempter.cc:509] Updating boot flags... May 12 23:42:41.102174 containerd[1477]: time="2025-05-12T23:42:41.102094123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x6t9d,Uid:447843bf-bd15-42a1-a0a5-eeccdd3878bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f059400e9028cba51fc1ad17d2069e055adec9712ecec1ad7f02f833db6ab07\"" May 12 23:42:41.105424 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2669) May 12 23:42:41.105530 kubelet[2515]: E0512 23:42:41.105483 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:41.110349 containerd[1477]: time="2025-05-12T23:42:41.109621880Z" level=info msg="CreateContainer within sandbox \"8f059400e9028cba51fc1ad17d2069e055adec9712ecec1ad7f02f833db6ab07\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 12 23:42:41.118694 containerd[1477]: time="2025-05-12T23:42:41.118030075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-txvh6,Uid:a97d7643-76b4-4238-9319-b576717ece1b,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"905cd7cb0a2c4b15ddfb89ece1f450ed5e0bcfd60fdbd68607941c746f60a688\"" May 12 23:42:41.120513 kubelet[2515]: E0512 23:42:41.120481 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:41.122740 containerd[1477]: time="2025-05-12T23:42:41.122539105Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 12 23:42:41.145390 containerd[1477]: time="2025-05-12T23:42:41.145344867Z" level=info msg="CreateContainer within sandbox \"8f059400e9028cba51fc1ad17d2069e055adec9712ecec1ad7f02f833db6ab07\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"819497d8359d8d69b86071d15d30310e115e7e31e345d398816d383a3d8e95ca\"" May 12 23:42:41.147389 containerd[1477]: time="2025-05-12T23:42:41.147215266Z" level=info msg="StartContainer for \"819497d8359d8d69b86071d15d30310e115e7e31e345d398816d383a3d8e95ca\"" May 12 23:42:41.155387 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2661) May 12 23:42:41.196504 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2661) May 12 23:42:41.195072 systemd[1]: Started 
cri-containerd-819497d8359d8d69b86071d15d30310e115e7e31e345d398816d383a3d8e95ca.scope - libcontainer container 819497d8359d8d69b86071d15d30310e115e7e31e345d398816d383a3d8e95ca. May 12 23:42:41.245107 containerd[1477]: time="2025-05-12T23:42:41.244980751Z" level=info msg="StartContainer for \"819497d8359d8d69b86071d15d30310e115e7e31e345d398816d383a3d8e95ca\" returns successfully" May 12 23:42:41.272292 kubelet[2515]: E0512 23:42:41.269427 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:41.687927 kubelet[2515]: E0512 23:42:41.687885 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:41.688638 kubelet[2515]: E0512 23:42:41.688619 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:41.720531 kubelet[2515]: I0512 23:42:41.720300 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x6t9d" podStartSLOduration=1.7202597229999999 podStartE2EDuration="1.720259723s" podCreationTimestamp="2025-05-12 23:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:42:41.711297584 +0000 UTC m=+8.154756793" watchObservedRunningTime="2025-05-12 23:42:41.720259723 +0000 UTC m=+8.163718932" May 12 23:42:42.689428 kubelet[2515]: E0512 23:42:42.689398 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:42.809232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372856513.mount: Deactivated successfully. 
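The earlier MountVolume.SetUp failures for the kube-api-access-* projected volumes ("configmap \"kube-root-ca.crt\" not found") are another startup race: the root CA ConfigMap is only published into each namespace once kube-controller-manager is running, so the kubelet retries after 500ms and the kube-proxy and flannel pods above then start normally. A hedged client-go sketch of polling for that ConfigMap is below; the kubeconfig path is an assumption.

```go
// Poll kube-system for the kube-root-ca.crt ConfigMap that the projected
// token volumes above were waiting for. Illustrative sketch only.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed admin kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		cm, err := client.CoreV1().ConfigMaps("kube-system").
			Get(context.TODO(), "kube-root-ca.crt", metav1.GetOptions{})
		if err == nil {
			log.Printf("found kube-root-ca.crt (%d bytes of ca.crt)", len(cm.Data["ca.crt"]))
			return
		}
		log.Printf("not yet available: %v", err)
		time.Sleep(500 * time.Millisecond) // same retry interval the kubelet used above
	}
}
```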
May 12 23:42:42.840615 containerd[1477]: time="2025-05-12T23:42:42.840561111Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:42.841148 containerd[1477]: time="2025-05-12T23:42:42.841093333Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" May 12 23:42:42.842166 containerd[1477]: time="2025-05-12T23:42:42.842128014Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:42.844333 containerd[1477]: time="2025-05-12T23:42:42.844259180Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:42.846075 containerd[1477]: time="2025-05-12T23:42:42.845605914Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.723030167s" May 12 23:42:42.846075 containerd[1477]: time="2025-05-12T23:42:42.845642315Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 12 23:42:42.847996 containerd[1477]: time="2025-05-12T23:42:42.847961968Z" level=info msg="CreateContainer within sandbox \"905cd7cb0a2c4b15ddfb89ece1f450ed5e0bcfd60fdbd68607941c746f60a688\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 12 23:42:42.859514 containerd[1477]: time="2025-05-12T23:42:42.859371306Z" level=info msg="CreateContainer within sandbox \"905cd7cb0a2c4b15ddfb89ece1f450ed5e0bcfd60fdbd68607941c746f60a688\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"373156ee4ef18be312490dd462df50c66ddba6539686ca4ea535c53b1923d851\"" May 12 23:42:42.859983 containerd[1477]: time="2025-05-12T23:42:42.859955250Z" level=info msg="StartContainer for \"373156ee4ef18be312490dd462df50c66ddba6539686ca4ea535c53b1923d851\"" May 12 23:42:42.885464 systemd[1]: Started cri-containerd-373156ee4ef18be312490dd462df50c66ddba6539686ca4ea535c53b1923d851.scope - libcontainer container 373156ee4ef18be312490dd462df50c66ddba6539686ca4ea535c53b1923d851. May 12 23:42:42.910878 containerd[1477]: time="2025-05-12T23:42:42.910725368Z" level=info msg="StartContainer for \"373156ee4ef18be312490dd462df50c66ddba6539686ca4ea535c53b1923d851\" returns successfully" May 12 23:42:42.914996 systemd[1]: cri-containerd-373156ee4ef18be312490dd462df50c66ddba6539686ca4ea535c53b1923d851.scope: Deactivated successfully. 
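The PullImage/ImageCreate entries around this point go through the CRI image service rather than the runtime service used for sandboxes and containers. A hedged Go sketch of the equivalent PullImage and ImageStatus calls follows, again assuming the default containerd socket; the image reference is taken from the log.

```go
// Sketch of the CRI image-service calls behind the "PullImage" /
// "ImageCreate" entries above. Assumes the default containerd socket.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	img := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	spec := &runtimeapi.ImageSpec{Image: "docker.io/flannel/flannel-cni-plugin:v1.1.2"}
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec})
	if err != nil {
		log.Fatal(err)
	}
	status, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		log.Fatal(err)
	}
	if status.Image != nil {
		log.Printf("pulled %s: ref=%s id=%s", spec.Image, resp.ImageRef, status.Image.Id)
	}
}
```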
May 12 23:42:42.955476 containerd[1477]: time="2025-05-12T23:42:42.955309558Z" level=info msg="shim disconnected" id=373156ee4ef18be312490dd462df50c66ddba6539686ca4ea535c53b1923d851 namespace=k8s.io May 12 23:42:42.955476 containerd[1477]: time="2025-05-12T23:42:42.955367400Z" level=warning msg="cleaning up after shim disconnected" id=373156ee4ef18be312490dd462df50c66ddba6539686ca4ea535c53b1923d851 namespace=k8s.io May 12 23:42:42.955476 containerd[1477]: time="2025-05-12T23:42:42.955375800Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:42:43.692481 kubelet[2515]: E0512 23:42:43.692127 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:43.693658 containerd[1477]: time="2025-05-12T23:42:43.693580307Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 12 23:42:43.809239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-373156ee4ef18be312490dd462df50c66ddba6539686ca4ea535c53b1923d851-rootfs.mount: Deactivated successfully. May 12 23:42:44.192933 kubelet[2515]: E0512 23:42:44.192866 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:45.017376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284535610.mount: Deactivated successfully. May 12 23:42:45.060238 kubelet[2515]: E0512 23:42:45.059910 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:45.583165 containerd[1477]: time="2025-05-12T23:42:45.583103474Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:45.583647 containerd[1477]: time="2025-05-12T23:42:45.583593971Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" May 12 23:42:45.584654 containerd[1477]: time="2025-05-12T23:42:45.584601326Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:45.589031 containerd[1477]: time="2025-05-12T23:42:45.587664513Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 23:42:45.589031 containerd[1477]: time="2025-05-12T23:42:45.588903516Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.895280648s" May 12 23:42:45.589031 containerd[1477]: time="2025-05-12T23:42:45.588933317Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 12 23:42:45.591416 containerd[1477]: time="2025-05-12T23:42:45.591381642Z" level=info msg="CreateContainer within sandbox \"905cd7cb0a2c4b15ddfb89ece1f450ed5e0bcfd60fdbd68607941c746f60a688\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" May 12 23:42:45.602477 containerd[1477]: time="2025-05-12T23:42:45.602426586Z" level=info msg="CreateContainer within sandbox \"905cd7cb0a2c4b15ddfb89ece1f450ed5e0bcfd60fdbd68607941c746f60a688\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5c304ea405d5d215b767b6bd9599d1c1f8e9577a57eb3cc4a3427c3ef74e18d5\"" May 12 23:42:45.603063 containerd[1477]: time="2025-05-12T23:42:45.603029487Z" level=info msg="StartContainer for \"5c304ea405d5d215b767b6bd9599d1c1f8e9577a57eb3cc4a3427c3ef74e18d5\"" May 12 23:42:45.637504 systemd[1]: Started cri-containerd-5c304ea405d5d215b767b6bd9599d1c1f8e9577a57eb3cc4a3427c3ef74e18d5.scope - libcontainer container 5c304ea405d5d215b767b6bd9599d1c1f8e9577a57eb3cc4a3427c3ef74e18d5. May 12 23:42:45.665492 containerd[1477]: time="2025-05-12T23:42:45.665448655Z" level=info msg="StartContainer for \"5c304ea405d5d215b767b6bd9599d1c1f8e9577a57eb3cc4a3427c3ef74e18d5\" returns successfully" May 12 23:42:45.668523 systemd[1]: cri-containerd-5c304ea405d5d215b767b6bd9599d1c1f8e9577a57eb3cc4a3427c3ef74e18d5.scope: Deactivated successfully. May 12 23:42:45.697812 kubelet[2515]: E0512 23:42:45.697375 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:45.727156 kubelet[2515]: I0512 23:42:45.727096 2515 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 12 23:42:45.768568 systemd[1]: Created slice kubepods-burstable-pod599e8a3c_c681_4bd2_9c22_45b3d5eb952b.slice - libcontainer container kubepods-burstable-pod599e8a3c_c681_4bd2_9c22_45b3d5eb952b.slice. May 12 23:42:45.788107 systemd[1]: Created slice kubepods-burstable-pod81635fea_4a31_4c97_b72a_750a5def910f.slice - libcontainer container kubepods-burstable-pod81635fea_4a31_4c97_b72a_750a5def910f.slice. 
May 12 23:42:45.808061 containerd[1477]: time="2025-05-12T23:42:45.808000008Z" level=info msg="shim disconnected" id=5c304ea405d5d215b767b6bd9599d1c1f8e9577a57eb3cc4a3427c3ef74e18d5 namespace=k8s.io May 12 23:42:45.808061 containerd[1477]: time="2025-05-12T23:42:45.808056610Z" level=warning msg="cleaning up after shim disconnected" id=5c304ea405d5d215b767b6bd9599d1c1f8e9577a57eb3cc4a3427c3ef74e18d5 namespace=k8s.io May 12 23:42:45.808061 containerd[1477]: time="2025-05-12T23:42:45.808065570Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 12 23:42:45.828570 kubelet[2515]: I0512 23:42:45.828520 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/599e8a3c-c681-4bd2-9c22-45b3d5eb952b-config-volume\") pod \"coredns-6f6b679f8f-6dfk8\" (UID: \"599e8a3c-c681-4bd2-9c22-45b3d5eb952b\") " pod="kube-system/coredns-6f6b679f8f-6dfk8" May 12 23:42:45.828699 kubelet[2515]: I0512 23:42:45.828569 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81635fea-4a31-4c97-b72a-750a5def910f-config-volume\") pod \"coredns-6f6b679f8f-jdznp\" (UID: \"81635fea-4a31-4c97-b72a-750a5def910f\") " pod="kube-system/coredns-6f6b679f8f-jdznp" May 12 23:42:45.828699 kubelet[2515]: I0512 23:42:45.828630 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz6w4\" (UniqueName: \"kubernetes.io/projected/599e8a3c-c681-4bd2-9c22-45b3d5eb952b-kube-api-access-mz6w4\") pod \"coredns-6f6b679f8f-6dfk8\" (UID: \"599e8a3c-c681-4bd2-9c22-45b3d5eb952b\") " pod="kube-system/coredns-6f6b679f8f-6dfk8" May 12 23:42:45.828699 kubelet[2515]: I0512 23:42:45.828682 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv9vg\" (UniqueName: \"kubernetes.io/projected/81635fea-4a31-4c97-b72a-750a5def910f-kube-api-access-hv9vg\") pod \"coredns-6f6b679f8f-jdznp\" (UID: \"81635fea-4a31-4c97-b72a-750a5def910f\") " pod="kube-system/coredns-6f6b679f8f-jdznp" May 12 23:42:45.922182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c304ea405d5d215b767b6bd9599d1c1f8e9577a57eb3cc4a3427c3ef74e18d5-rootfs.mount: Deactivated successfully. 
May 12 23:42:46.071441 kubelet[2515]: E0512 23:42:46.071406 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:46.072412 containerd[1477]: time="2025-05-12T23:42:46.072037109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6dfk8,Uid:599e8a3c-c681-4bd2-9c22-45b3d5eb952b,Namespace:kube-system,Attempt:0,}" May 12 23:42:46.092234 kubelet[2515]: E0512 23:42:46.092180 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:46.092892 containerd[1477]: time="2025-05-12T23:42:46.092692834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jdznp,Uid:81635fea-4a31-4c97-b72a-750a5def910f,Namespace:kube-system,Attempt:0,}" May 12 23:42:46.182232 containerd[1477]: time="2025-05-12T23:42:46.181450138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jdznp,Uid:81635fea-4a31-4c97-b72a-750a5def910f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c92d9d557a4e23ac29c08bdfb51d1fb66460279c19bcd9d6204d728e95aeff76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 12 23:42:46.183066 kubelet[2515]: E0512 23:42:46.181696 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c92d9d557a4e23ac29c08bdfb51d1fb66460279c19bcd9d6204d728e95aeff76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 12 23:42:46.183066 kubelet[2515]: E0512 23:42:46.181760 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c92d9d557a4e23ac29c08bdfb51d1fb66460279c19bcd9d6204d728e95aeff76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-jdznp" May 12 23:42:46.186701 kubelet[2515]: E0512 23:42:46.185235 2515 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c92d9d557a4e23ac29c08bdfb51d1fb66460279c19bcd9d6204d728e95aeff76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-jdznp" May 12 23:42:46.186701 kubelet[2515]: E0512 23:42:46.185333 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-jdznp_kube-system(81635fea-4a31-4c97-b72a-750a5def910f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-jdznp_kube-system(81635fea-4a31-4c97-b72a-750a5def910f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c92d9d557a4e23ac29c08bdfb51d1fb66460279c19bcd9d6204d728e95aeff76\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-jdznp" podUID="81635fea-4a31-4c97-b72a-750a5def910f" May 12 23:42:46.200332 containerd[1477]: time="2025-05-12T23:42:46.200249201Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6dfk8,Uid:599e8a3c-c681-4bd2-9c22-45b3d5eb952b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cb534015016f312503b5c436f834e7d38ab63eec267dd54925fdaffeb8fcd50\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 12 23:42:46.201009 kubelet[2515]: E0512 23:42:46.200643 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb534015016f312503b5c436f834e7d38ab63eec267dd54925fdaffeb8fcd50\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 12 23:42:46.201009 kubelet[2515]: E0512 23:42:46.200710 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb534015016f312503b5c436f834e7d38ab63eec267dd54925fdaffeb8fcd50\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-6dfk8" May 12 23:42:46.201009 kubelet[2515]: E0512 23:42:46.200736 2515 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb534015016f312503b5c436f834e7d38ab63eec267dd54925fdaffeb8fcd50\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-6dfk8" May 12 23:42:46.201009 kubelet[2515]: E0512 23:42:46.200783 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6dfk8_kube-system(599e8a3c-c681-4bd2-9c22-45b3d5eb952b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6dfk8_kube-system(599e8a3c-c681-4bd2-9c22-45b3d5eb952b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cb534015016f312503b5c436f834e7d38ab63eec267dd54925fdaffeb8fcd50\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-6dfk8" podUID="599e8a3c-c681-4bd2-9c22-45b3d5eb952b" May 12 23:42:46.700269 kubelet[2515]: E0512 23:42:46.700230 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:46.705073 containerd[1477]: time="2025-05-12T23:42:46.704930659Z" level=info msg="CreateContainer within sandbox \"905cd7cb0a2c4b15ddfb89ece1f450ed5e0bcfd60fdbd68607941c746f60a688\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 12 23:42:46.719191 containerd[1477]: time="2025-05-12T23:42:46.719130650Z" level=info msg="CreateContainer within sandbox \"905cd7cb0a2c4b15ddfb89ece1f450ed5e0bcfd60fdbd68607941c746f60a688\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"43049511329459fa7a790eec8fd3a92e9e8b0490f6d7be238b47753073b072f5\"" May 12 23:42:46.720594 containerd[1477]: time="2025-05-12T23:42:46.720557897Z" level=info msg="StartContainer for \"43049511329459fa7a790eec8fd3a92e9e8b0490f6d7be238b47753073b072f5\"" May 12 23:42:46.751363 systemd[1]: Started 
cri-containerd-43049511329459fa7a790eec8fd3a92e9e8b0490f6d7be238b47753073b072f5.scope - libcontainer container 43049511329459fa7a790eec8fd3a92e9e8b0490f6d7be238b47753073b072f5. May 12 23:42:46.789949 containerd[1477]: time="2025-05-12T23:42:46.789820754Z" level=info msg="StartContainer for \"43049511329459fa7a790eec8fd3a92e9e8b0490f6d7be238b47753073b072f5\" returns successfully" May 12 23:42:46.923004 systemd[1]: run-netns-cni\x2d87bf5f9c\x2d0243\x2d7595\x2da566\x2de4a8756c3477.mount: Deactivated successfully. May 12 23:42:46.923091 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cb534015016f312503b5c436f834e7d38ab63eec267dd54925fdaffeb8fcd50-shm.mount: Deactivated successfully. May 12 23:42:47.704003 kubelet[2515]: E0512 23:42:47.703958 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:47.869061 systemd-networkd[1374]: flannel.1: Link UP May 12 23:42:47.869071 systemd-networkd[1374]: flannel.1: Gained carrier May 12 23:42:48.705341 kubelet[2515]: E0512 23:42:48.705308 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:48.958468 systemd-networkd[1374]: flannel.1: Gained IPv6LL May 12 23:42:57.655380 kubelet[2515]: E0512 23:42:57.655270 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:57.656427 containerd[1477]: time="2025-05-12T23:42:57.656024810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jdznp,Uid:81635fea-4a31-4c97-b72a-750a5def910f,Namespace:kube-system,Attempt:0,}" May 12 23:42:57.682879 systemd-networkd[1374]: cni0: Link UP May 12 23:42:57.682884 systemd-networkd[1374]: cni0: Gained carrier May 12 23:42:57.684520 systemd-networkd[1374]: cni0: Lost carrier May 12 23:42:57.687659 systemd-networkd[1374]: veth5b5aa9b1: Link UP May 12 23:42:57.689674 kernel: cni0: port 1(veth5b5aa9b1) entered blocking state May 12 23:42:57.689741 kernel: cni0: port 1(veth5b5aa9b1) entered disabled state May 12 23:42:57.689762 kernel: veth5b5aa9b1: entered allmulticast mode May 12 23:42:57.691339 kernel: veth5b5aa9b1: entered promiscuous mode May 12 23:42:57.691404 kernel: cni0: port 1(veth5b5aa9b1) entered blocking state May 12 23:42:57.692750 kernel: cni0: port 1(veth5b5aa9b1) entered forwarding state May 12 23:42:57.693404 kernel: cni0: port 1(veth5b5aa9b1) entered disabled state May 12 23:42:57.700135 systemd-networkd[1374]: veth5b5aa9b1: Gained carrier May 12 23:42:57.700454 kernel: cni0: port 1(veth5b5aa9b1) entered blocking state May 12 23:42:57.700495 kernel: cni0: port 1(veth5b5aa9b1) entered forwarding state May 12 23:42:57.700689 systemd-networkd[1374]: cni0: Gained carrier May 12 23:42:57.703154 containerd[1477]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} May 12 23:42:57.703154 
containerd[1477]: delegateAdd: netconf sent to delegate plugin: May 12 23:42:57.719732 containerd[1477]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-12T23:42:57.719652234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:42:57.719992 containerd[1477]: time="2025-05-12T23:42:57.719716075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:42:57.719992 containerd[1477]: time="2025-05-12T23:42:57.719727636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:57.719992 containerd[1477]: time="2025-05-12T23:42:57.719802877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:42:57.739436 systemd[1]: Started cri-containerd-887a535bea50311583e9b39d8755de4a23c47a8fed083e3887402a320108fe9d.scope - libcontainer container 887a535bea50311583e9b39d8755de4a23c47a8fed083e3887402a320108fe9d. May 12 23:42:57.750510 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 23:42:57.767417 containerd[1477]: time="2025-05-12T23:42:57.767370842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jdznp,Uid:81635fea-4a31-4c97-b72a-750a5def910f,Namespace:kube-system,Attempt:0,} returns sandbox id \"887a535bea50311583e9b39d8755de4a23c47a8fed083e3887402a320108fe9d\"" May 12 23:42:57.768228 kubelet[2515]: E0512 23:42:57.768190 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:57.772113 containerd[1477]: time="2025-05-12T23:42:57.771977819Z" level=info msg="CreateContainer within sandbox \"887a535bea50311583e9b39d8755de4a23c47a8fed083e3887402a320108fe9d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 23:42:57.787858 containerd[1477]: time="2025-05-12T23:42:57.787814273Z" level=info msg="CreateContainer within sandbox \"887a535bea50311583e9b39d8755de4a23c47a8fed083e3887402a320108fe9d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55544f9d0678e056a1e1db17bb388316b8404c036b00036cddec5419ed9de8e8\"" May 12 23:42:57.788576 containerd[1477]: time="2025-05-12T23:42:57.788544289Z" level=info msg="StartContainer for \"55544f9d0678e056a1e1db17bb388316b8404c036b00036cddec5419ed9de8e8\"" May 12 23:42:57.824479 systemd[1]: Started cri-containerd-55544f9d0678e056a1e1db17bb388316b8404c036b00036cddec5419ed9de8e8.scope - libcontainer container 55544f9d0678e056a1e1db17bb388316b8404c036b00036cddec5419ed9de8e8. 
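The RunPodSandbox failures for both CoreDNS pods above all trace back to the flannel CNI plugin failing to open /run/flannel/subnet.env: that file is written by the kube-flannel container, which only comes up a moment after those attempts (StartContainer returns at 23:42:46.789), so every sandbox add tried before then fails at loadFlannelSubnetEnv. A minimal sketch of what the file contains once flannel has written it, with values inferred from the delegate netconf logged above (a 192.168.0.0/17 cluster route, a /24 node subnet, MTU 1450) rather than read from the node itself:

    # /run/flannel/subnet.env -- illustrative values only, inferred from the netconf above
    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true   # masquerade flag; not recoverable from this log

Once the file exists, the retried RunPodSandbox calls at 23:42:57 and 23:43:01 in this log succeed.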
May 12 23:42:57.851346 containerd[1477]: time="2025-05-12T23:42:57.851123650Z" level=info msg="StartContainer for \"55544f9d0678e056a1e1db17bb388316b8404c036b00036cddec5419ed9de8e8\" returns successfully" May 12 23:42:58.752503 kubelet[2515]: E0512 23:42:58.751642 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:42:58.768519 kubelet[2515]: I0512 23:42:58.767482 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-txvh6" podStartSLOduration=14.29872074 podStartE2EDuration="18.767463684s" podCreationTimestamp="2025-05-12 23:42:40 +0000 UTC" firstStartedPulling="2025-05-12 23:42:41.121124965 +0000 UTC m=+7.564584174" lastFinishedPulling="2025-05-12 23:42:45.589867909 +0000 UTC m=+12.033327118" observedRunningTime="2025-05-12 23:42:47.74277974 +0000 UTC m=+14.186238909" watchObservedRunningTime="2025-05-12 23:42:58.767463684 +0000 UTC m=+25.210922853" May 12 23:42:58.768519 kubelet[2515]: I0512 23:42:58.767725 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jdznp" podStartSLOduration=18.767718969 podStartE2EDuration="18.767718969s" podCreationTimestamp="2025-05-12 23:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:42:58.765344681 +0000 UTC m=+25.208803930" watchObservedRunningTime="2025-05-12 23:42:58.767718969 +0000 UTC m=+25.211178178" May 12 23:42:58.789545 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:38854.service - OpenSSH per-connection server daemon (10.0.0.1:38854). May 12 23:42:58.857902 sshd[3338]: Accepted publickey for core from 10.0.0.1 port 38854 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:42:58.859517 sshd-session[3338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:42:58.863351 systemd-logind[1455]: New session 8 of user core. May 12 23:42:58.873463 systemd[1]: Started session-8.scope - Session 8 of User core. May 12 23:42:59.012002 sshd[3343]: Connection closed by 10.0.0.1 port 38854 May 12 23:42:59.013599 sshd-session[3338]: pam_unix(sshd:session): session closed for user core May 12 23:42:59.017950 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:38854.service: Deactivated successfully. May 12 23:42:59.019768 systemd[1]: session-8.scope: Deactivated successfully. May 12 23:42:59.020309 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. May 12 23:42:59.021119 systemd-logind[1455]: Removed session 8. 
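The pod_startup_latency_tracker record for kube-flannel/kube-flannel-ds-txvh6 above reports an end-to-end startup duration, an SLO duration, and the image-pull window bounded by firstStartedPulling and lastFinishedPulling; for this record the SLO figure is exactly the end-to-end time with the pull window excluded. Checking the arithmetic against the timestamps it contains:

    pull window  = 23:42:45.589867909 - 23:42:41.121124965  =  4.468742944 s
    end to end   = 23:42:58.767463684 - 23:42:40 (creation)  = 18.767463684 s
    SLO duration = 18.767463684 - 4.468742944                = 14.298720740 s

which matches the reported podStartSLOduration=14.29872074.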
May 12 23:42:59.390450 systemd-networkd[1374]: veth5b5aa9b1: Gained IPv6LL May 12 23:42:59.582647 systemd-networkd[1374]: cni0: Gained IPv6LL May 12 23:42:59.754973 kubelet[2515]: E0512 23:42:59.753525 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:43:00.754803 kubelet[2515]: E0512 23:43:00.754740 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:43:01.655551 kubelet[2515]: E0512 23:43:01.655368 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:43:01.655899 containerd[1477]: time="2025-05-12T23:43:01.655749032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6dfk8,Uid:599e8a3c-c681-4bd2-9c22-45b3d5eb952b,Namespace:kube-system,Attempt:0,}" May 12 23:43:01.712416 kernel: cni0: port 2(vethe8a1f438) entered blocking state May 12 23:43:01.712579 kernel: cni0: port 2(vethe8a1f438) entered disabled state May 12 23:43:01.712600 kernel: vethe8a1f438: entered allmulticast mode May 12 23:43:01.712619 kernel: vethe8a1f438: entered promiscuous mode May 12 23:43:01.711171 systemd-networkd[1374]: vethe8a1f438: Link UP May 12 23:43:01.716423 kernel: cni0: port 2(vethe8a1f438) entered blocking state May 12 23:43:01.716471 kernel: cni0: port 2(vethe8a1f438) entered forwarding state May 12 23:43:01.716308 systemd-networkd[1374]: vethe8a1f438: Gained carrier May 12 23:43:01.718660 containerd[1477]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a938), "name":"cbr0", "type":"bridge"} May 12 23:43:01.718660 containerd[1477]: delegateAdd: netconf sent to delegate plugin: May 12 23:43:01.748117 containerd[1477]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-12T23:43:01.747959335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 12 23:43:01.748117 containerd[1477]: time="2025-05-12T23:43:01.748036576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 12 23:43:01.748117 containerd[1477]: time="2025-05-12T23:43:01.748048336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:43:01.748314 containerd[1477]: time="2025-05-12T23:43:01.748199219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 12 23:43:01.768447 systemd[1]: Started cri-containerd-57b117fdef7eb8dd808be993c01ab3722191e46699ba84396a7655e98bae6e91.scope - libcontainer container 57b117fdef7eb8dd808be993c01ab3722191e46699ba84396a7655e98bae6e91. May 12 23:43:01.780483 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 23:43:01.798119 containerd[1477]: time="2025-05-12T23:43:01.798082140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6dfk8,Uid:599e8a3c-c681-4bd2-9c22-45b3d5eb952b,Namespace:kube-system,Attempt:0,} returns sandbox id \"57b117fdef7eb8dd808be993c01ab3722191e46699ba84396a7655e98bae6e91\"" May 12 23:43:01.798861 kubelet[2515]: E0512 23:43:01.798833 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:43:01.801893 containerd[1477]: time="2025-05-12T23:43:01.801766368Z" level=info msg="CreateContainer within sandbox \"57b117fdef7eb8dd808be993c01ab3722191e46699ba84396a7655e98bae6e91\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 23:43:01.833189 containerd[1477]: time="2025-05-12T23:43:01.833136468Z" level=info msg="CreateContainer within sandbox \"57b117fdef7eb8dd808be993c01ab3722191e46699ba84396a7655e98bae6e91\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77ee9270eb14bebe11d34d7fad8d402ff05a35a9658545876fab67060d2d9b37\"" May 12 23:43:01.834554 containerd[1477]: time="2025-05-12T23:43:01.833663357Z" level=info msg="StartContainer for \"77ee9270eb14bebe11d34d7fad8d402ff05a35a9658545876fab67060d2d9b37\"" May 12 23:43:01.873766 systemd[1]: Started cri-containerd-77ee9270eb14bebe11d34d7fad8d402ff05a35a9658545876fab67060d2d9b37.scope - libcontainer container 77ee9270eb14bebe11d34d7fad8d402ff05a35a9658545876fab67060d2d9b37. May 12 23:43:01.904663 containerd[1477]: time="2025-05-12T23:43:01.904613428Z" level=info msg="StartContainer for \"77ee9270eb14bebe11d34d7fad8d402ff05a35a9658545876fab67060d2d9b37\" returns successfully" May 12 23:43:02.669645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4198027512.mount: Deactivated successfully. 
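At this point the log has shown the whole pod-network data path on this node: the flannel.1 VXLAN device (reported up by systemd-networkd), the cni0 bridge managed by the CNI bridge plugin, and one veth port per CoreDNS sandbox (veth5b5aa9b1 and vethe8a1f438). A short sketch of how that state could be inspected on the node with standard iproute2 and coreutils commands; the device names are the ones from this log, and the last path assumes host-local's default data directory:

    ip -d link show flannel.1          # VXLAN backend device created by flannel
    ip link show master cni0           # veth ports enslaved to the CNI bridge
    bridge link show dev veth5b5aa9b1  # bridge port state for one sandbox
    cat /run/flannel/subnet.env        # node subnet handed to the flannel CNI plugin
    ls /var/lib/cni/networks/cbr0/     # pod IPs allocated by the host-local IPAM plugin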
May 12 23:43:02.759207 kubelet[2515]: E0512 23:43:02.759167 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:43:02.773179 kubelet[2515]: I0512 23:43:02.772468 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6dfk8" podStartSLOduration=22.772450422 podStartE2EDuration="22.772450422s" podCreationTimestamp="2025-05-12 23:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 23:43:02.771818331 +0000 UTC m=+29.215277540" watchObservedRunningTime="2025-05-12 23:43:02.772450422 +0000 UTC m=+29.215909631" May 12 23:43:03.486731 systemd-networkd[1374]: vethe8a1f438: Gained IPv6LL May 12 23:43:03.763500 kubelet[2515]: E0512 23:43:03.760880 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:43:04.026137 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:33200.service - OpenSSH per-connection server daemon (10.0.0.1:33200). May 12 23:43:04.074392 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 33200 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:04.075548 sshd-session[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:04.079629 systemd-logind[1455]: New session 9 of user core. May 12 23:43:04.087535 systemd[1]: Started session-9.scope - Session 9 of User core. May 12 23:43:04.221130 sshd[3502]: Connection closed by 10.0.0.1 port 33200 May 12 23:43:04.221367 sshd-session[3500]: pam_unix(sshd:session): session closed for user core May 12 23:43:04.226452 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:33200.service: Deactivated successfully. May 12 23:43:04.228766 systemd[1]: session-9.scope: Deactivated successfully. May 12 23:43:04.230376 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. May 12 23:43:04.233758 systemd-logind[1455]: Removed session 9. May 12 23:43:04.762635 kubelet[2515]: E0512 23:43:04.762595 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 23:43:09.234669 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:33204.service - OpenSSH per-connection server daemon (10.0.0.1:33204). May 12 23:43:09.284311 sshd[3538]: Accepted publickey for core from 10.0.0.1 port 33204 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:09.285447 sshd-session[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:09.289799 systemd-logind[1455]: New session 10 of user core. May 12 23:43:09.306496 systemd[1]: Started session-10.scope - Session 10 of User core. May 12 23:43:09.433344 sshd[3540]: Connection closed by 10.0.0.1 port 33204 May 12 23:43:09.433752 sshd-session[3538]: pam_unix(sshd:session): session closed for user core May 12 23:43:09.445965 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:33204.service: Deactivated successfully. May 12 23:43:09.449462 systemd[1]: session-10.scope: Deactivated successfully. May 12 23:43:09.451187 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. 
May 12 23:43:09.465640 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:33214.service - OpenSSH per-connection server daemon (10.0.0.1:33214). May 12 23:43:09.466892 systemd-logind[1455]: Removed session 10. May 12 23:43:09.511931 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 33214 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:09.513371 sshd-session[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:09.517903 systemd-logind[1455]: New session 11 of user core. May 12 23:43:09.528484 systemd[1]: Started session-11.scope - Session 11 of User core. May 12 23:43:09.676915 sshd[3555]: Connection closed by 10.0.0.1 port 33214 May 12 23:43:09.678207 sshd-session[3553]: pam_unix(sshd:session): session closed for user core May 12 23:43:09.685622 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:33214.service: Deactivated successfully. May 12 23:43:09.688885 systemd[1]: session-11.scope: Deactivated successfully. May 12 23:43:09.691731 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. May 12 23:43:09.694998 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:33216.service - OpenSSH per-connection server daemon (10.0.0.1:33216). May 12 23:43:09.702708 systemd-logind[1455]: Removed session 11. May 12 23:43:09.749382 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 33216 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:09.751087 sshd-session[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:09.758000 systemd-logind[1455]: New session 12 of user core. May 12 23:43:09.760491 systemd[1]: Started session-12.scope - Session 12 of User core. May 12 23:43:09.877891 sshd[3567]: Connection closed by 10.0.0.1 port 33216 May 12 23:43:09.877120 sshd-session[3565]: pam_unix(sshd:session): session closed for user core May 12 23:43:09.881891 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:33216.service: Deactivated successfully. May 12 23:43:09.884893 systemd[1]: session-12.scope: Deactivated successfully. May 12 23:43:09.888377 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. May 12 23:43:09.890089 systemd-logind[1455]: Removed session 12. May 12 23:43:14.891933 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:44136.service - OpenSSH per-connection server daemon (10.0.0.1:44136). May 12 23:43:14.939454 sshd[3604]: Accepted publickey for core from 10.0.0.1 port 44136 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:14.940913 sshd-session[3604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:14.947860 systemd-logind[1455]: New session 13 of user core. May 12 23:43:14.963463 systemd[1]: Started session-13.scope - Session 13 of User core. May 12 23:43:15.075267 sshd[3606]: Connection closed by 10.0.0.1 port 44136 May 12 23:43:15.075737 sshd-session[3604]: pam_unix(sshd:session): session closed for user core May 12 23:43:15.087776 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:44136.service: Deactivated successfully. May 12 23:43:15.089394 systemd[1]: session-13.scope: Deactivated successfully. May 12 23:43:15.091184 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. May 12 23:43:15.092460 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:44144.service - OpenSSH per-connection server daemon (10.0.0.1:44144). May 12 23:43:15.093420 systemd-logind[1455]: Removed session 13. 
May 12 23:43:15.136458 sshd[3618]: Accepted publickey for core from 10.0.0.1 port 44144 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:15.137717 sshd-session[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:15.146998 systemd-logind[1455]: New session 14 of user core. May 12 23:43:15.157510 systemd[1]: Started session-14.scope - Session 14 of User core. May 12 23:43:15.368499 sshd[3620]: Connection closed by 10.0.0.1 port 44144 May 12 23:43:15.369384 sshd-session[3618]: pam_unix(sshd:session): session closed for user core May 12 23:43:15.379016 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:44144.service: Deactivated successfully. May 12 23:43:15.380583 systemd[1]: session-14.scope: Deactivated successfully. May 12 23:43:15.384039 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. May 12 23:43:15.395942 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:44146.service - OpenSSH per-connection server daemon (10.0.0.1:44146). May 12 23:43:15.397386 systemd-logind[1455]: Removed session 14. May 12 23:43:15.441585 sshd[3630]: Accepted publickey for core from 10.0.0.1 port 44146 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:15.443024 sshd-session[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:15.448223 systemd-logind[1455]: New session 15 of user core. May 12 23:43:15.455515 systemd[1]: Started session-15.scope - Session 15 of User core. May 12 23:43:16.755350 sshd[3632]: Connection closed by 10.0.0.1 port 44146 May 12 23:43:16.756250 sshd-session[3630]: pam_unix(sshd:session): session closed for user core May 12 23:43:16.765461 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:44146.service: Deactivated successfully. May 12 23:43:16.770851 systemd[1]: session-15.scope: Deactivated successfully. May 12 23:43:16.772514 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. May 12 23:43:16.780738 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:44156.service - OpenSSH per-connection server daemon (10.0.0.1:44156). May 12 23:43:16.781892 systemd-logind[1455]: Removed session 15. May 12 23:43:16.851336 sshd[3651]: Accepted publickey for core from 10.0.0.1 port 44156 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:16.852407 sshd-session[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:16.862341 systemd-logind[1455]: New session 16 of user core. May 12 23:43:16.872599 systemd[1]: Started session-16.scope - Session 16 of User core. May 12 23:43:17.113774 sshd[3653]: Connection closed by 10.0.0.1 port 44156 May 12 23:43:17.115579 sshd-session[3651]: pam_unix(sshd:session): session closed for user core May 12 23:43:17.120949 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:44156.service: Deactivated successfully. May 12 23:43:17.123616 systemd[1]: session-16.scope: Deactivated successfully. May 12 23:43:17.125736 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. May 12 23:43:17.135626 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:44172.service - OpenSSH per-connection server daemon (10.0.0.1:44172). May 12 23:43:17.139691 systemd-logind[1455]: Removed session 16. 
May 12 23:43:17.177974 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 44172 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:17.179788 sshd-session[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:17.184553 systemd-logind[1455]: New session 17 of user core. May 12 23:43:17.195478 systemd[1]: Started session-17.scope - Session 17 of User core. May 12 23:43:17.311065 sshd[3665]: Connection closed by 10.0.0.1 port 44172 May 12 23:43:17.311458 sshd-session[3663]: pam_unix(sshd:session): session closed for user core May 12 23:43:17.314915 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:44172.service: Deactivated successfully. May 12 23:43:17.318050 systemd[1]: session-17.scope: Deactivated successfully. May 12 23:43:17.320496 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. May 12 23:43:17.321334 systemd-logind[1455]: Removed session 17. May 12 23:43:22.355373 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:44182.service - OpenSSH per-connection server daemon (10.0.0.1:44182). May 12 23:43:22.400910 sshd[3701]: Accepted publickey for core from 10.0.0.1 port 44182 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:22.402474 sshd-session[3701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:22.406672 systemd-logind[1455]: New session 18 of user core. May 12 23:43:22.417427 systemd[1]: Started session-18.scope - Session 18 of User core. May 12 23:43:22.529261 sshd[3703]: Connection closed by 10.0.0.1 port 44182 May 12 23:43:22.530485 sshd-session[3701]: pam_unix(sshd:session): session closed for user core May 12 23:43:22.534058 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:44182.service: Deactivated successfully. May 12 23:43:22.535979 systemd[1]: session-18.scope: Deactivated successfully. May 12 23:43:22.536827 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. May 12 23:43:22.537762 systemd-logind[1455]: Removed session 18. May 12 23:43:27.541142 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:58948.service - OpenSSH per-connection server daemon (10.0.0.1:58948). May 12 23:43:27.587629 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 58948 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:27.589057 sshd-session[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:27.593436 systemd-logind[1455]: New session 19 of user core. May 12 23:43:27.603523 systemd[1]: Started session-19.scope - Session 19 of User core. May 12 23:43:27.726259 sshd[3738]: Connection closed by 10.0.0.1 port 58948 May 12 23:43:27.726618 sshd-session[3736]: pam_unix(sshd:session): session closed for user core May 12 23:43:27.729221 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:58948.service: Deactivated successfully. May 12 23:43:27.733597 systemd[1]: session-19.scope: Deactivated successfully. May 12 23:43:27.735834 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. May 12 23:43:27.736978 systemd-logind[1455]: Removed session 19. May 12 23:43:32.743149 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:36972.service - OpenSSH per-connection server daemon (10.0.0.1:36972). 
May 12 23:43:32.811678 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 36972 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 12 23:43:32.813406 sshd-session[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 23:43:32.818405 systemd-logind[1455]: New session 20 of user core. May 12 23:43:32.830500 systemd[1]: Started session-20.scope - Session 20 of User core. May 12 23:43:32.967076 sshd[3775]: Connection closed by 10.0.0.1 port 36972 May 12 23:43:32.966085 sshd-session[3773]: pam_unix(sshd:session): session closed for user core May 12 23:43:32.970724 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:36972.service: Deactivated successfully. May 12 23:43:32.972622 systemd[1]: session-20.scope: Deactivated successfully. May 12 23:43:32.975656 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. May 12 23:43:32.976608 systemd-logind[1455]: Removed session 20.