May 13 00:25:35.738837 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:25:35.738855 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon May 12 23:22:00 -00 2025
May 13 00:25:35.738863 kernel: efi: EFI v2.70 by EDK II
May 13 00:25:35.738869 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 13 00:25:35.738874 kernel: random: crng init done
May 13 00:25:35.738879 kernel: ACPI: Early table checksum verification disabled
May 13 00:25:35.738885 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 13 00:25:35.738892 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:25:35.738898 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:25:35.738903 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:25:35.738908 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:25:35.738913 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:25:35.738918 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:25:35.738924 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:25:35.738932 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:25:35.738937 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:25:35.738943 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:25:35.738949 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:25:35.738954 kernel: NUMA: Failed to initialise from firmware
May 13 00:25:35.738960 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:25:35.738965 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff]
May 13 00:25:35.738971 kernel: Zone ranges:
May 13 00:25:35.738976 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:25:35.738983 kernel: DMA32 empty
May 13 00:25:35.738989 kernel: Normal empty
May 13 00:25:35.738994 kernel: Movable zone start for each node
May 13 00:25:35.738999 kernel: Early memory node ranges
May 13 00:25:35.739005 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 13 00:25:35.739010 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 13 00:25:35.739016 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 13 00:25:35.739022 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 13 00:25:35.739027 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 13 00:25:35.739033 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 13 00:25:35.739038 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 13 00:25:35.739044 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:25:35.739050 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:25:35.739056 kernel: psci: probing for conduit method from ACPI.
May 13 00:25:35.739061 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:25:35.739067 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:25:35.739072 kernel: psci: Trusted OS migration not required
May 13 00:25:35.739080 kernel: psci: SMC Calling Convention v1.1
May 13 00:25:35.739086 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:25:35.739093 kernel: ACPI: SRAT not present
May 13 00:25:35.739100 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 13 00:25:35.739106 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 13 00:25:35.739112 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:25:35.739118 kernel: Detected PIPT I-cache on CPU0
May 13 00:25:35.739124 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:25:35.739130 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:25:35.739136 kernel: CPU features: detected: Spectre-v4
May 13 00:25:35.739141 kernel: CPU features: detected: Spectre-BHB
May 13 00:25:35.739149 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:25:35.739155 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:25:35.739161 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:25:35.739166 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:25:35.739172 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:25:35.739178 kernel: Policy zone: DMA
May 13 00:25:35.739185 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d
May 13 00:25:35.739192 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:25:35.739230 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:25:35.739240 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:25:35.739248 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:25:35.739256 kernel: Memory: 2457332K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114956K reserved, 0K cma-reserved)
May 13 00:25:35.739263 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:25:35.739269 kernel: trace event string verifier disabled
May 13 00:25:35.739275 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:25:35.739282 kernel: rcu: RCU event tracing is enabled.
May 13 00:25:35.739288 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:25:35.741886 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:25:35.741907 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:25:35.741914 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:25:35.741921 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:25:35.741927 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:25:35.741937 kernel: GICv3: 256 SPIs implemented
May 13 00:25:35.741943 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:25:35.741949 kernel: GICv3: Distributor has no Range Selector support
May 13 00:25:35.741955 kernel: Root IRQ handler: gic_handle_irq
May 13 00:25:35.741961 kernel: GICv3: 16 PPIs implemented
May 13 00:25:35.741967 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:25:35.741973 kernel: ACPI: SRAT not present
May 13 00:25:35.741979 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:25:35.741985 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:25:35.741991 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:25:35.741997 kernel: GICv3: using LPI property table @0x00000000400d0000
May 13 00:25:35.742003 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 13 00:25:35.742011 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:25:35.742017 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:25:35.742023 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:25:35.742029 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:25:35.742035 kernel: arm-pv: using stolen time PV
May 13 00:25:35.742042 kernel: Console: colour dummy device 80x25
May 13 00:25:35.742048 kernel: ACPI: Core revision 20210730
May 13 00:25:35.742054 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:25:35.742061 kernel: pid_max: default: 32768 minimum: 301
May 13 00:25:35.742067 kernel: LSM: Security Framework initializing
May 13 00:25:35.742074 kernel: SELinux: Initializing.
May 13 00:25:35.742081 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:25:35.742087 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:25:35.742093 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:25:35.742099 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:25:35.742105 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:25:35.742111 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:25:35.742118 kernel: Remapping and enabling EFI services.
May 13 00:25:35.742124 kernel: smp: Bringing up secondary CPUs ...
May 13 00:25:35.742131 kernel: Detected PIPT I-cache on CPU1
May 13 00:25:35.742137 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:25:35.742144 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 13 00:25:35.742150 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:25:35.742156 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:25:35.742163 kernel: Detected PIPT I-cache on CPU2
May 13 00:25:35.742169 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:25:35.742175 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 13 00:25:35.742181 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:25:35.742187 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:25:35.742195 kernel: Detected PIPT I-cache on CPU3
May 13 00:25:35.742201 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:25:35.742207 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 13 00:25:35.742213 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:25:35.742224 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:25:35.742231 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:25:35.742238 kernel: SMP: Total of 4 processors activated.
May 13 00:25:35.742244 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:25:35.742250 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:25:35.742257 kernel: CPU features: detected: Common not Private translations
May 13 00:25:35.742263 kernel: CPU features: detected: CRC32 instructions
May 13 00:25:35.742270 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:25:35.742277 kernel: CPU features: detected: LSE atomic instructions
May 13 00:25:35.742284 kernel: CPU features: detected: Privileged Access Never
May 13 00:25:35.742291 kernel: CPU features: detected: RAS Extension Support
May 13 00:25:35.742297 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:25:35.742304 kernel: CPU: All CPU(s) started at EL1
May 13 00:25:35.742312 kernel: alternatives: patching kernel code
May 13 00:25:35.742318 kernel: devtmpfs: initialized
May 13 00:25:35.742324 kernel: KASLR enabled
May 13 00:25:35.742331 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:25:35.742338 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:25:35.742351 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:25:35.742358 kernel: SMBIOS 3.0.0 present.
May 13 00:25:35.742364 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 13 00:25:35.742371 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:25:35.742379 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:25:35.742386 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:25:35.742392 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:25:35.742399 kernel: audit: initializing netlink subsys (disabled)
May 13 00:25:35.742406 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
May 13 00:25:35.742412 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:25:35.742419 kernel: cpuidle: using governor menu
May 13 00:25:35.742426 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:25:35.742432 kernel: ASID allocator initialised with 32768 entries
May 13 00:25:35.742440 kernel: ACPI: bus type PCI registered
May 13 00:25:35.742447 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:25:35.742454 kernel: Serial: AMBA PL011 UART driver
May 13 00:25:35.742460 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:25:35.742467 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:25:35.742473 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:25:35.742480 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:25:35.742486 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:25:35.742493 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:25:35.742501 kernel: ACPI: Added _OSI(Module Device)
May 13 00:25:35.742508 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:25:35.742515 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:25:35.742521 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:25:35.742539 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 13 00:25:35.742546 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 13 00:25:35.742553 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 13 00:25:35.742560 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:25:35.742566 kernel: ACPI: Interpreter enabled
May 13 00:25:35.742575 kernel: ACPI: Using GIC for interrupt routing
May 13 00:25:35.742581 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:25:35.742588 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:25:35.742594 kernel: printk: console [ttyAMA0] enabled
May 13 00:25:35.742601 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:25:35.742728 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:25:35.742792 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:25:35.742850 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:25:35.742909 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:25:35.742966 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:25:35.742975 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:25:35.742982 kernel: PCI host bridge to bus 0000:00
May 13 00:25:35.743045 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:25:35.743098 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:25:35.743151 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:25:35.743204 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:25:35.743278 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:25:35.743360 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:25:35.743427 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:25:35.743486 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:25:35.743587 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:25:35.743663 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:25:35.743724 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:25:35.743782 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:25:35.743834 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:25:35.743885 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:25:35.743936 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:25:35.743945 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:25:35.743952 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:25:35.743960 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:25:35.743967 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:25:35.743973 kernel: iommu: Default domain type: Translated
May 13 00:25:35.743980 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:25:35.743986 kernel: vgaarb: loaded
May 13 00:25:35.743992 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 00:25:35.743999 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 13 00:25:35.744006 kernel: PTP clock support registered
May 13 00:25:35.744012 kernel: Registered efivars operations
May 13 00:25:35.744020 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:25:35.744026 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:25:35.744033 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:25:35.744039 kernel: pnp: PnP ACPI init
May 13 00:25:35.744100 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:25:35.744110 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:25:35.744116 kernel: NET: Registered PF_INET protocol family
May 13 00:25:35.744123 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:25:35.744131 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:25:35.744138 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:25:35.744145 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:25:35.744151 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 13 00:25:35.744158 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:25:35.744164 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:25:35.744171 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:25:35.744177 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:25:35.744184 kernel: PCI: CLS 0 bytes, default 64
May 13 00:25:35.744192 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:25:35.744198 kernel: kvm [1]: HYP mode not available
May 13 00:25:35.744205 kernel: Initialise system trusted keyrings
May 13 00:25:35.744211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:25:35.744218 kernel: Key type asymmetric registered
May 13 00:25:35.744224 kernel: Asymmetric key parser 'x509' registered
May 13 00:25:35.744230 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 00:25:35.744237 kernel: io scheduler mq-deadline registered
May 13 00:25:35.744243 kernel: io scheduler kyber registered
May 13 00:25:35.744251 kernel: io scheduler bfq registered
May 13 00:25:35.744257 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:25:35.744264 kernel: ACPI: button: Power Button [PWRB]
May 13 00:25:35.744271 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:25:35.744327 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:25:35.744336 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:25:35.744350 kernel: thunder_xcv, ver 1.0
May 13 00:25:35.744357 kernel: thunder_bgx, ver 1.0
May 13 00:25:35.744363 kernel: nicpf, ver 1.0
May 13 00:25:35.744372 kernel: nicvf, ver 1.0
May 13 00:25:35.744485 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:25:35.744561 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:25:35 UTC (1747095935)
May 13 00:25:35.744571 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:25:35.744578 kernel: NET: Registered PF_INET6 protocol family
May 13 00:25:35.744584 kernel: Segment Routing with IPv6
May 13 00:25:35.744591 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:25:35.744598 kernel: NET: Registered PF_PACKET protocol family
May 13 00:25:35.744607 kernel: Key type dns_resolver registered
May 13 00:25:35.744613 kernel: registered taskstats version 1
May 13 00:25:35.744620 kernel: Loading compiled-in X.509 certificates
May 13 00:25:35.744626 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: d291b704d59536a3c0ba96fd6f5a99459de8de99'
May 13 00:25:35.744633 kernel: Key type .fscrypt registered
May 13 00:25:35.744639 kernel: Key type fscrypt-provisioning registered
May 13 00:25:35.744646 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:25:35.744652 kernel: ima: Allocated hash algorithm: sha1
May 13 00:25:35.744659 kernel: ima: No architecture policies found
May 13 00:25:35.744666 kernel: clk: Disabling unused clocks
May 13 00:25:35.744673 kernel: Freeing unused kernel memory: 36480K
May 13 00:25:35.744679 kernel: Run /init as init process
May 13 00:25:35.744686 kernel: with arguments:
May 13 00:25:35.744692 kernel: /init
May 13 00:25:35.744698 kernel: with environment:
May 13 00:25:35.744705 kernel: HOME=/
May 13 00:25:35.744711 kernel: TERM=linux
May 13 00:25:35.744717 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:25:35.744727 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:25:35.744735 systemd[1]: Detected virtualization kvm.
May 13 00:25:35.744742 systemd[1]: Detected architecture arm64.
May 13 00:25:35.744749 systemd[1]: Running in initrd.
May 13 00:25:35.744756 systemd[1]: No hostname configured, using default hostname.
May 13 00:25:35.744763 systemd[1]: Hostname set to .
May 13 00:25:35.744770 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:25:35.744778 systemd[1]: Queued start job for default target initrd.target.
May 13 00:25:35.744785 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:25:35.744792 systemd[1]: Reached target cryptsetup.target.
May 13 00:25:35.744799 systemd[1]: Reached target paths.target.
May 13 00:25:35.744806 systemd[1]: Reached target slices.target.
May 13 00:25:35.744813 systemd[1]: Reached target swap.target.
May 13 00:25:35.744820 systemd[1]: Reached target timers.target.
May 13 00:25:35.744828 systemd[1]: Listening on iscsid.socket.
May 13 00:25:35.744836 systemd[1]: Listening on iscsiuio.socket.
May 13 00:25:35.744843 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:25:35.744850 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:25:35.744857 systemd[1]: Listening on systemd-journald.socket.
May 13 00:25:35.744864 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:25:35.744871 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:25:35.744878 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:25:35.744885 systemd[1]: Reached target sockets.target.
May 13 00:25:35.744894 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:25:35.744900 systemd[1]: Finished network-cleanup.service.
May 13 00:25:35.744907 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:25:35.744914 systemd[1]: Starting systemd-journald.service...
May 13 00:25:35.744921 systemd[1]: Starting systemd-modules-load.service...
May 13 00:25:35.744928 systemd[1]: Starting systemd-resolved.service...
May 13 00:25:35.744935 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 00:25:35.744942 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:25:35.744949 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:25:35.744958 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:25:35.744965 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 00:25:35.744973 kernel: audit: type=1130 audit(1747095935.738:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.744980 systemd[1]: Starting dracut-cmdline-ask.service...
May 13 00:25:35.744987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:25:35.744997 systemd-journald[289]: Journal started
May 13 00:25:35.745037 systemd-journald[289]: Runtime Journal (/run/log/journal/65e679f3d89542d289a3cc00b67332ab) is 6.0M, max 48.7M, 42.6M free.
May 13 00:25:35.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.735415 systemd-modules-load[290]: Inserted module 'overlay'
May 13 00:25:35.749321 systemd[1]: Started systemd-journald.service.
May 13 00:25:35.749340 kernel: audit: type=1130 audit(1747095935.745:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.753531 kernel: audit: type=1130 audit(1747095935.749:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.758897 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:25:35.761760 systemd-modules-load[290]: Inserted module 'br_netfilter'
May 13 00:25:35.762641 kernel: Bridge firewalling registered
May 13 00:25:35.764012 systemd-resolved[291]: Positive Trust Anchors:
May 13 00:25:35.764028 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:25:35.764056 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:25:35.766430 systemd[1]: Finished dracut-cmdline-ask.service.
May 13 00:25:35.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.768143 systemd-resolved[291]: Defaulting to hostname 'linux'.
May 13 00:25:35.776784 kernel: audit: type=1130 audit(1747095935.771:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.776803 kernel: SCSI subsystem initialized
May 13 00:25:35.775011 systemd[1]: Started systemd-resolved.service.
May 13 00:25:35.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.776115 systemd[1]: Reached target nss-lookup.target.
May 13 00:25:35.781087 kernel: audit: type=1130 audit(1747095935.775:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.780955 systemd[1]: Starting dracut-cmdline.service...
May 13 00:25:35.784564 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:25:35.784596 kernel: device-mapper: uevent: version 1.0.3
May 13 00:25:35.784606 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 13 00:25:35.787875 systemd-modules-load[290]: Inserted module 'dm_multipath'
May 13 00:25:35.788644 systemd[1]: Finished systemd-modules-load.service.
May 13 00:25:35.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.793325 dracut-cmdline[310]: dracut-dracut-053
May 13 00:25:35.794125 kernel: audit: type=1130 audit(1747095935.789:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.793537 systemd[1]: Starting systemd-sysctl.service...
May 13 00:25:35.796292 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d
May 13 00:25:35.800356 systemd[1]: Finished systemd-sysctl.service.
May 13 00:25:35.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.804559 kernel: audit: type=1130 audit(1747095935.800:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.850542 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:25:35.862553 kernel: iscsi: registered transport (tcp)
May 13 00:25:35.877633 kernel: iscsi: registered transport (qla4xxx)
May 13 00:25:35.877648 kernel: QLogic iSCSI HBA Driver
May 13 00:25:35.908223 systemd[1]: Finished dracut-cmdline.service.
May 13 00:25:35.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.910007 systemd[1]: Starting dracut-pre-udev.service...
May 13 00:25:35.913415 kernel: audit: type=1130 audit(1747095935.908:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:35.953563 kernel: raid6: neonx8 gen() 13737 MB/s
May 13 00:25:35.970550 kernel: raid6: neonx8 xor() 10747 MB/s
May 13 00:25:35.987544 kernel: raid6: neonx4 gen() 13529 MB/s
May 13 00:25:36.004549 kernel: raid6: neonx4 xor() 11151 MB/s
May 13 00:25:36.021550 kernel: raid6: neonx2 gen() 12949 MB/s
May 13 00:25:36.038549 kernel: raid6: neonx2 xor() 10412 MB/s
May 13 00:25:36.055557 kernel: raid6: neonx1 gen() 10589 MB/s
May 13 00:25:36.072555 kernel: raid6: neonx1 xor() 8794 MB/s
May 13 00:25:36.089550 kernel: raid6: int64x8 gen() 6269 MB/s
May 13 00:25:36.106557 kernel: raid6: int64x8 xor() 3540 MB/s
May 13 00:25:36.123546 kernel: raid6: int64x4 gen() 7215 MB/s
May 13 00:25:36.140551 kernel: raid6: int64x4 xor() 3848 MB/s
May 13 00:25:36.157549 kernel: raid6: int64x2 gen() 6146 MB/s
May 13 00:25:36.174548 kernel: raid6: int64x2 xor() 3314 MB/s
May 13 00:25:36.191547 kernel: raid6: int64x1 gen() 5043 MB/s
May 13 00:25:36.208640 kernel: raid6: int64x1 xor() 2644 MB/s
May 13 00:25:36.208650 kernel: raid6: using algorithm neonx8 gen() 13737 MB/s
May 13 00:25:36.208659 kernel: raid6: .... xor() 10747 MB/s, rmw enabled
May 13 00:25:36.209732 kernel: raid6: using neon recovery algorithm
May 13 00:25:36.220930 kernel: xor: measuring software checksum speed
May 13 00:25:36.220949 kernel: 8regs : 17209 MB/sec
May 13 00:25:36.220957 kernel: 32regs : 19976 MB/sec
May 13 00:25:36.221557 kernel: arm64_neon : 27710 MB/sec
May 13 00:25:36.221567 kernel: xor: using function: arm64_neon (27710 MB/sec)
May 13 00:25:36.275555 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 13 00:25:36.285795 systemd[1]: Finished dracut-pre-udev.service.
May 13 00:25:36.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:36.289000 audit: BPF prog-id=7 op=LOAD
May 13 00:25:36.289000 audit: BPF prog-id=8 op=LOAD
May 13 00:25:36.290089 systemd[1]: Starting systemd-udevd.service...
May 13 00:25:36.291539 kernel: audit: type=1130 audit(1747095936.286:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:36.303747 systemd-udevd[493]: Using default interface naming scheme 'v252'.
May 13 00:25:36.307013 systemd[1]: Started systemd-udevd.service.
May 13 00:25:36.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:36.309580 systemd[1]: Starting dracut-pre-trigger.service...
May 13 00:25:36.320119 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation
May 13 00:25:36.344662 systemd[1]: Finished dracut-pre-trigger.service.
May 13 00:25:36.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:36.346315 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:25:36.378865 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:25:36.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:36.403545 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:25:36.409948 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:25:36.409964 kernel: GPT:9289727 != 19775487
May 13 00:25:36.409973 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:25:36.409982 kernel: GPT:9289727 != 19775487
May 13 00:25:36.409996 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:25:36.410004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:25:36.425553 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (547)
May 13 00:25:36.426641 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 13 00:25:36.427653 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 13 00:25:36.431913 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 13 00:25:36.435363 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 13 00:25:36.440531 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:25:36.442196 systemd[1]: Starting disk-uuid.service...
May 13 00:25:36.448125 disk-uuid[562]: Primary Header is updated.
May 13 00:25:36.448125 disk-uuid[562]: Secondary Entries is updated.
May 13 00:25:36.448125 disk-uuid[562]: Secondary Header is updated.
May 13 00:25:36.456567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:25:36.460013 kernel: GPT:disk_guids don't match.
May 13 00:25:36.460046 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 00:25:36.460055 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:25:36.463550 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:25:37.462544 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:25:37.462839 disk-uuid[563]: The operation has completed successfully. May 13 00:25:37.483879 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:25:37.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:37.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:37.483972 systemd[1]: Finished disk-uuid.service. May 13 00:25:37.485548 systemd[1]: Starting verity-setup.service... May 13 00:25:37.503560 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 00:25:37.521507 systemd[1]: Found device dev-mapper-usr.device. May 13 00:25:37.526129 systemd[1]: Mounting sysusr-usr.mount... May 13 00:25:37.526968 systemd[1]: Finished verity-setup.service. May 13 00:25:37.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:37.574551 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 00:25:37.574999 systemd[1]: Mounted sysusr-usr.mount. May 13 00:25:37.575807 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 00:25:37.576449 systemd[1]: Starting ignition-setup.service... May 13 00:25:37.578719 systemd[1]: Starting parse-ip-for-networkd.service... 
May 13 00:25:37.584922 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:25:37.584953 kernel: BTRFS info (device vda6): using free space tree
May 13 00:25:37.584967 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:25:37.592411 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:25:37.598230 systemd[1]: Finished ignition-setup.service.
May 13 00:25:37.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.599872 systemd[1]: Starting ignition-fetch-offline.service...
May 13 00:25:37.653683 systemd[1]: Finished parse-ip-for-networkd.service.
May 13 00:25:37.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.654000 audit: BPF prog-id=9 op=LOAD
May 13 00:25:37.655858 systemd[1]: Starting systemd-networkd.service...
May 13 00:25:37.683347 systemd-networkd[739]: lo: Link UP
May 13 00:25:37.684274 systemd-networkd[739]: lo: Gained carrier
May 13 00:25:37.685373 ignition[650]: Ignition 2.14.0
May 13 00:25:37.685382 ignition[650]: Stage: fetch-offline
May 13 00:25:37.685432 ignition[650]: no configs at "/usr/lib/ignition/base.d"
May 13 00:25:37.685441 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:25:37.685615 ignition[650]: parsed url from cmdline: ""
May 13 00:25:37.685618 ignition[650]: no config URL provided
May 13 00:25:37.685623 ignition[650]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:25:37.685631 ignition[650]: no config at "/usr/lib/ignition/user.ign"
May 13 00:25:37.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.689747 systemd-networkd[739]: Enumeration completed
May 13 00:25:37.685648 ignition[650]: op(1): [started] loading QEMU firmware config module
May 13 00:25:37.689942 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:25:37.685653 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:25:37.690069 systemd[1]: Started systemd-networkd.service.
May 13 00:25:37.692193 systemd[1]: Reached target network.target.
May 13 00:25:37.694404 systemd[1]: Starting iscsiuio.service...
May 13 00:25:37.696855 systemd-networkd[739]: eth0: Link UP
May 13 00:25:37.696858 systemd-networkd[739]: eth0: Gained carrier
May 13 00:25:37.704220 ignition[650]: op(1): [finished] loading QEMU firmware config module
May 13 00:25:37.710443 systemd[1]: Started iscsiuio.service.
May 13 00:25:37.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.712135 systemd[1]: Starting iscsid.service...
May 13 00:25:37.716281 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:25:37.716281 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 13 00:25:37.716281 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 13 00:25:37.716281 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored.
May 13 00:25:37.716281 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:25:37.716281 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 13 00:25:37.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.721055 systemd[1]: Started iscsid.service.
May 13 00:25:37.723635 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:25:37.726492 systemd[1]: Starting dracut-initqueue.service...
May 13 00:25:37.737190 systemd[1]: Finished dracut-initqueue.service.
May 13 00:25:37.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.738234 systemd[1]: Reached target remote-fs-pre.target.
May 13 00:25:37.739738 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:25:37.741420 systemd[1]: Reached target remote-fs.target.
May 13 00:25:37.743972 ignition[650]: parsing config with SHA512: 2c25da2c305c591e8b9f7583157813898978afe3bf1d896aa93d6ed90d78858f432adf40894d680683f6ef395bdd8a71b363c204759afc158e4277204b4e9f1a
May 13 00:25:37.743985 systemd[1]: Starting dracut-pre-mount.service...
May 13 00:25:37.752822 unknown[650]: fetched base config from "system"
May 13 00:25:37.752832 unknown[650]: fetched user config from "qemu"
May 13 00:25:37.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.753236 ignition[650]: fetch-offline: fetch-offline passed
May 13 00:25:37.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.753941 systemd[1]: Finished dracut-pre-mount.service.
May 13 00:25:37.753286 ignition[650]: Ignition finished successfully
May 13 00:25:37.755087 systemd[1]: Finished ignition-fetch-offline.service.
May 13 00:25:37.756498 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:25:37.757304 systemd[1]: Starting ignition-kargs.service...
May 13 00:25:37.766175 ignition[760]: Ignition 2.14.0
May 13 00:25:37.766190 ignition[760]: Stage: kargs
May 13 00:25:37.766289 ignition[760]: no configs at "/usr/lib/ignition/base.d"
May 13 00:25:37.766299 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:25:37.768745 systemd[1]: Finished ignition-kargs.service.
May 13 00:25:37.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.767098 ignition[760]: kargs: kargs passed
May 13 00:25:37.767139 ignition[760]: Ignition finished successfully
May 13 00:25:37.771080 systemd[1]: Starting ignition-disks.service...
May 13 00:25:37.777633 ignition[766]: Ignition 2.14.0
May 13 00:25:37.777644 ignition[766]: Stage: disks
May 13 00:25:37.777734 ignition[766]: no configs at "/usr/lib/ignition/base.d"
May 13 00:25:37.779733 systemd[1]: Finished ignition-disks.service.
May 13 00:25:37.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.777743 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:25:37.781249 systemd[1]: Reached target initrd-root-device.target.
May 13 00:25:37.778590 ignition[766]: disks: disks passed
May 13 00:25:37.782551 systemd[1]: Reached target local-fs-pre.target.
May 13 00:25:37.778631 ignition[766]: Ignition finished successfully
May 13 00:25:37.784184 systemd[1]: Reached target local-fs.target.
May 13 00:25:37.785554 systemd[1]: Reached target sysinit.target.
May 13 00:25:37.786698 systemd[1]: Reached target basic.target.
May 13 00:25:37.788846 systemd[1]: Starting systemd-fsck-root.service...
May 13 00:25:37.799312 systemd-fsck[775]: ROOT: clean, 619/553520 files, 56022/553472 blocks
May 13 00:25:37.803067 systemd[1]: Finished systemd-fsck-root.service.
May 13 00:25:37.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.807832 systemd[1]: Mounting sysroot.mount...
May 13 00:25:37.816546 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 13 00:25:37.816716 systemd[1]: Mounted sysroot.mount.
May 13 00:25:37.817456 systemd[1]: Reached target initrd-root-fs.target.
May 13 00:25:37.819645 systemd[1]: Mounting sysroot-usr.mount...
May 13 00:25:37.820454 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 13 00:25:37.820494 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:25:37.820518 systemd[1]: Reached target ignition-diskful.target.
May 13 00:25:37.822429 systemd[1]: Mounted sysroot-usr.mount.
May 13 00:25:37.824209 systemd[1]: Starting initrd-setup-root.service...
May 13 00:25:37.828449 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:25:37.832014 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory
May 13 00:25:37.836340 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:25:37.840365 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:25:37.867142 systemd[1]: Finished initrd-setup-root.service.
May 13 00:25:37.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.868823 systemd[1]: Starting ignition-mount.service...
May 13 00:25:37.870197 systemd[1]: Starting sysroot-boot.service...
May 13 00:25:37.874956 bash[826]: umount: /sysroot/usr/share/oem: not mounted.
May 13 00:25:37.883971 ignition[828]: INFO : Ignition 2.14.0
May 13 00:25:37.883971 ignition[828]: INFO : Stage: mount
May 13 00:25:37.885500 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:25:37.885500 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:25:37.885500 ignition[828]: INFO : mount: mount passed
May 13 00:25:37.885500 ignition[828]: INFO : Ignition finished successfully
May 13 00:25:37.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:37.886147 systemd[1]: Finished ignition-mount.service.
May 13 00:25:37.891749 systemd[1]: Finished sysroot-boot.service.
May 13 00:25:37.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:38.535647 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 13 00:25:38.542338 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837)
May 13 00:25:38.542370 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:25:38.542380 kernel: BTRFS info (device vda6): using free space tree
May 13 00:25:38.543033 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:25:38.546344 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 13 00:25:38.547839 systemd[1]: Starting ignition-files.service...
May 13 00:25:38.561578 ignition[857]: INFO : Ignition 2.14.0
May 13 00:25:38.561578 ignition[857]: INFO : Stage: files
May 13 00:25:38.563154 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:25:38.563154 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:25:38.563154 ignition[857]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:25:38.566701 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:25:38.566701 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:25:38.569825 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:25:38.571195 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:25:38.571195 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:25:38.571195 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:25:38.571195 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 00:25:38.570585 unknown[857]: wrote ssh authorized keys file for user: core
May 13 00:25:38.632561 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 00:25:38.799984 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:25:38.801817 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:25:38.803509 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 13 00:25:39.092120 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 00:25:39.250702 systemd-networkd[739]: eth0: Gained IPv6LL
May 13 00:25:39.454641 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:25:39.454641 ignition[857]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:25:39.458492 ignition[857]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:25:39.486342 ignition[857]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:25:39.487975 ignition[857]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:25:39.487975 ignition[857]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:25:39.487975 ignition[857]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:25:39.487975 ignition[857]: INFO : files: files passed
May 13 00:25:39.487975 ignition[857]: INFO : Ignition finished successfully
May 13 00:25:39.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.487843 systemd[1]: Finished ignition-files.service.
May 13 00:25:39.489625 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 13 00:25:39.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.491052 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 13 00:25:39.502642 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 13 00:25:39.491701 systemd[1]: Starting ignition-quench.service...
May 13 00:25:39.505518 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:25:39.494941 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:25:39.495023 systemd[1]: Finished ignition-quench.service.
May 13 00:25:39.498082 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 13 00:25:39.499304 systemd[1]: Reached target ignition-complete.target.
May 13 00:25:39.501586 systemd[1]: Starting initrd-parse-etc.service...
May 13 00:25:39.514520 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:25:39.514624 systemd[1]: Finished initrd-parse-etc.service.
May 13 00:25:39.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.516382 systemd[1]: Reached target initrd-fs.target.
May 13 00:25:39.517684 systemd[1]: Reached target initrd.target.
May 13 00:25:39.518936 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 13 00:25:39.519649 systemd[1]: Starting dracut-pre-pivot.service...
May 13 00:25:39.529310 systemd[1]: Finished dracut-pre-pivot.service.
May 13 00:25:39.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.530857 systemd[1]: Starting initrd-cleanup.service...
May 13 00:25:39.538446 systemd[1]: Stopped target nss-lookup.target.
May 13 00:25:39.539405 systemd[1]: Stopped target remote-cryptsetup.target.
May 13 00:25:39.540952 systemd[1]: Stopped target timers.target.
May 13 00:25:39.542289 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:25:39.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.542406 systemd[1]: Stopped dracut-pre-pivot.service.
May 13 00:25:39.543662 systemd[1]: Stopped target initrd.target.
May 13 00:25:39.545068 systemd[1]: Stopped target basic.target.
May 13 00:25:39.546315 systemd[1]: Stopped target ignition-complete.target.
May 13 00:25:39.547731 systemd[1]: Stopped target ignition-diskful.target.
May 13 00:25:39.549082 systemd[1]: Stopped target initrd-root-device.target.
May 13 00:25:39.550504 systemd[1]: Stopped target remote-fs.target.
May 13 00:25:39.551891 systemd[1]: Stopped target remote-fs-pre.target.
May 13 00:25:39.553390 systemd[1]: Stopped target sysinit.target.
May 13 00:25:39.554749 systemd[1]: Stopped target local-fs.target.
May 13 00:25:39.556071 systemd[1]: Stopped target local-fs-pre.target.
May 13 00:25:39.557408 systemd[1]: Stopped target swap.target.
May 13 00:25:39.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.558732 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:25:39.558846 systemd[1]: Stopped dracut-pre-mount.service.
May 13 00:25:39.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.560256 systemd[1]: Stopped target cryptsetup.target.
May 13 00:25:39.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.561412 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:25:39.561518 systemd[1]: Stopped dracut-initqueue.service.
May 13 00:25:39.563049 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:25:39.563144 systemd[1]: Stopped ignition-fetch-offline.service.
May 13 00:25:39.564479 systemd[1]: Stopped target paths.target.
May 13 00:25:39.565646 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:25:39.569563 systemd[1]: Stopped systemd-ask-password-console.path.
May 13 00:25:39.571288 systemd[1]: Stopped target slices.target.
May 13 00:25:39.572650 systemd[1]: Stopped target sockets.target.
May 13 00:25:39.573863 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:25:39.573931 systemd[1]: Closed iscsid.socket.
May 13 00:25:39.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.575056 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:25:39.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.575156 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 13 00:25:39.576677 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:25:39.576769 systemd[1]: Stopped ignition-files.service.
May 13 00:25:39.578744 systemd[1]: Stopping ignition-mount.service...
May 13 00:25:39.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.580351 systemd[1]: Stopping iscsiuio.service...
May 13 00:25:39.582120 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:25:39.586291 ignition[897]: INFO : Ignition 2.14.0
May 13 00:25:39.586291 ignition[897]: INFO : Stage: umount
May 13 00:25:39.586291 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:25:39.586291 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:25:39.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.582245 systemd[1]: Stopped kmod-static-nodes.service.
May 13 00:25:39.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.593274 ignition[897]: INFO : umount: umount passed
May 13 00:25:39.593274 ignition[897]: INFO : Ignition finished successfully
May 13 00:25:39.584296 systemd[1]: Stopping sysroot-boot.service...
May 13 00:25:39.585451 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:25:39.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.585586 systemd[1]: Stopped systemd-udev-trigger.service.
May 13 00:25:39.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.587131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:25:39.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.587230 systemd[1]: Stopped dracut-pre-trigger.service.
May 13 00:25:39.589742 systemd[1]: iscsiuio.service: Deactivated successfully.
May 13 00:25:39.589836 systemd[1]: Stopped iscsiuio.service.
May 13 00:25:39.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.591004 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:25:39.591081 systemd[1]: Stopped ignition-mount.service.
May 13 00:25:39.593481 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:25:39.594032 systemd[1]: Stopped target network.target.
May 13 00:25:39.594905 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:25:39.594939 systemd[1]: Closed iscsiuio.socket.
May 13 00:25:39.596346 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:25:39.596395 systemd[1]: Stopped ignition-disks.service.
May 13 00:25:39.597726 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:25:39.597774 systemd[1]: Stopped ignition-kargs.service.
May 13 00:25:39.599212 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:25:39.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.599252 systemd[1]: Stopped ignition-setup.service.
May 13 00:25:39.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.600700 systemd[1]: Stopping systemd-networkd.service...
May 13 00:25:39.602334 systemd[1]: Stopping systemd-resolved.service...
May 13 00:25:39.603722 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:25:39.623000 audit: BPF prog-id=6 op=UNLOAD
May 13 00:25:39.603810 systemd[1]: Finished initrd-cleanup.service.
May 13 00:25:39.612065 systemd-networkd[739]: eth0: DHCPv6 lease lost
May 13 00:25:39.624000 audit: BPF prog-id=9 op=UNLOAD
May 13 00:25:39.613862 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:25:39.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.613958 systemd[1]: Stopped systemd-networkd.service.
May 13 00:25:39.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.617946 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:25:39.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.618027 systemd[1]: Stopped systemd-resolved.service.
May 13 00:25:39.620874 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:25:39.620902 systemd[1]: Closed systemd-networkd.socket.
May 13 00:25:39.622629 systemd[1]: Stopping network-cleanup.service...
May 13 00:25:39.625847 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:25:39.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.625905 systemd[1]: Stopped parse-ip-for-networkd.service.
May 13 00:25:39.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.627373 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:25:39.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.627416 systemd[1]: Stopped systemd-sysctl.service.
May 13 00:25:39.630389 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:25:39.630432 systemd[1]: Stopped systemd-modules-load.service.
May 13 00:25:39.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.631379 systemd[1]: Stopping systemd-udevd.service...
May 13 00:25:39.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.635798 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 00:25:39.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.637180 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:25:39.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.637263 systemd[1]: Stopped sysroot-boot.service.
May 13 00:25:39.639220 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:25:39.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.639301 systemd[1]: Stopped network-cleanup.service.
May 13 00:25:39.640821 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:25:39.640936 systemd[1]: Stopped systemd-udevd.service.
May 13 00:25:39.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:39.642261 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:25:39.642293 systemd[1]: Closed systemd-udevd-control.socket.
May 13 00:25:39.643481 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:25:39.643513 systemd[1]: Closed systemd-udevd-kernel.socket.
May 13 00:25:39.644769 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:25:39.644812 systemd[1]: Stopped dracut-pre-udev.service.
May 13 00:25:39.646232 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:25:39.646269 systemd[1]: Stopped dracut-cmdline.service.
May 13 00:25:39.647571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:25:39.647610 systemd[1]: Stopped dracut-cmdline-ask.service.
May 13 00:25:39.648914 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:25:39.648953 systemd[1]: Stopped initrd-setup-root.service.
May 13 00:25:39.651211 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 13 00:25:39.652164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:25:39.652213 systemd[1]: Stopped systemd-vconsole-setup.service.
May 13 00:25:39.655938 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:25:39.656018 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 13 00:25:39.656924 systemd[1]: Reached target initrd-switch-root.target.
May 13 00:25:39.658857 systemd[1]: Starting initrd-switch-root.service...
May 13 00:25:39.664562 systemd[1]: Switching root.
May 13 00:25:39.686029 iscsid[746]: iscsid shutting down.
May 13 00:25:39.686687 systemd-journald[289]: Received SIGTERM from PID 1 (systemd).
May 13 00:25:39.686731 systemd-journald[289]: Journal stopped
May 13 00:25:41.691592 kernel: SELinux: Class mctp_socket not defined in policy.
May 13 00:25:41.691646 kernel: SELinux: Class anon_inode not defined in policy.
May 13 00:25:41.691658 kernel: SELinux: the above unknown classes and permissions will be allowed
May 13 00:25:41.691669 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:25:41.691678 kernel: SELinux: policy capability open_perms=1
May 13 00:25:41.691690 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:25:41.691702 kernel: SELinux: policy capability always_check_network=0
May 13 00:25:41.691712 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:25:41.691722 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:25:41.691731 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:25:41.691740 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:25:41.691751 systemd[1]: Successfully loaded SELinux policy in 34.455ms.
May 13 00:25:41.691771 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.670ms.
May 13 00:25:41.691784 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:25:41.691796 systemd[1]: Detected virtualization kvm.
May 13 00:25:41.691807 systemd[1]: Detected architecture arm64.
May 13 00:25:41.691817 systemd[1]: Detected first boot.
May 13 00:25:41.691828 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:25:41.691838 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 13 00:25:41.691849 systemd[1]: Populated /etc with preset unit settings.
May 13 00:25:41.691860 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:25:41.691876 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:25:41.691888 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:25:41.691899 kernel: kauditd_printk_skb: 79 callbacks suppressed
May 13 00:25:41.691912 kernel: audit: type=1334 audit(1747095941.537:83): prog-id=12 op=LOAD
May 13 00:25:41.691922 kernel: audit: type=1334 audit(1747095941.537:84): prog-id=3 op=UNLOAD
May 13 00:25:41.691932 kernel: audit: type=1334 audit(1747095941.537:85): prog-id=13 op=LOAD
May 13 00:25:41.691943 systemd[1]: iscsid.service: Deactivated successfully.
May 13 00:25:41.691954 kernel: audit: type=1334 audit(1747095941.538:86): prog-id=14 op=LOAD
May 13 00:25:41.691964 systemd[1]: Stopped iscsid.service.
May 13 00:25:41.691975 kernel: audit: type=1334 audit(1747095941.538:87): prog-id=4 op=UNLOAD
May 13 00:25:41.691986 kernel: audit: type=1334 audit(1747095941.538:88): prog-id=5 op=UNLOAD
May 13 00:25:41.691998 kernel: audit: type=1131 audit(1747095941.540:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.692009 kernel: audit: type=1131 audit(1747095941.547:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.692019 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 00:25:41.692030 systemd[1]: Stopped initrd-switch-root.service.
May 13 00:25:41.692042 kernel: audit: type=1130 audit(1747095941.552:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.692052 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 00:25:41.692063 kernel: audit: type=1131 audit(1747095941.552:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.692074 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 13 00:25:41.692085 systemd[1]: Created slice system-addon\x2drun.slice.
May 13 00:25:41.692097 systemd[1]: Created slice system-getty.slice.
May 13 00:25:41.692107 systemd[1]: Created slice system-modprobe.slice.
May 13 00:25:41.692118 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 13 00:25:41.692128 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 13 00:25:41.692139 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 13 00:25:41.692149 systemd[1]: Created slice user.slice.
May 13 00:25:41.692159 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:25:41.692171 systemd[1]: Started systemd-ask-password-wall.path.
May 13 00:25:41.692181 systemd[1]: Set up automount boot.automount.
May 13 00:25:41.692193 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 13 00:25:41.692204 systemd[1]: Stopped target initrd-switch-root.target.
May 13 00:25:41.692214 systemd[1]: Stopped target initrd-fs.target.
May 13 00:25:41.692225 systemd[1]: Stopped target initrd-root-fs.target.
May 13 00:25:41.692235 systemd[1]: Reached target integritysetup.target.
May 13 00:25:41.692246 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:25:41.692257 systemd[1]: Reached target remote-fs.target.
May 13 00:25:41.692268 systemd[1]: Reached target slices.target.
May 13 00:25:41.692278 systemd[1]: Reached target swap.target.
May 13 00:25:41.692290 systemd[1]: Reached target torcx.target.
May 13 00:25:41.692301 systemd[1]: Reached target veritysetup.target.
May 13 00:25:41.692317 systemd[1]: Listening on systemd-coredump.socket.
May 13 00:25:41.692329 systemd[1]: Listening on systemd-initctl.socket.
May 13 00:25:41.692340 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:25:41.692350 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:25:41.692361 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:25:41.692371 systemd[1]: Listening on systemd-userdbd.socket.
May 13 00:25:41.692381 systemd[1]: Mounting dev-hugepages.mount...
May 13 00:25:41.692391 systemd[1]: Mounting dev-mqueue.mount...
May 13 00:25:41.692405 systemd[1]: Mounting media.mount...
May 13 00:25:41.692416 systemd[1]: Mounting sys-kernel-debug.mount...
May 13 00:25:41.692427 systemd[1]: Mounting sys-kernel-tracing.mount...
May 13 00:25:41.692437 systemd[1]: Mounting tmp.mount...
May 13 00:25:41.692448 systemd[1]: Starting flatcar-tmpfiles.service...
May 13 00:25:41.692458 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 00:25:41.692468 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:25:41.692479 systemd[1]: Starting modprobe@configfs.service...
May 13 00:25:41.692489 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:25:41.692501 systemd[1]: Starting modprobe@drm.service...
May 13 00:25:41.692512 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:25:41.692530 systemd[1]: Starting modprobe@fuse.service...
May 13 00:25:41.692543 systemd[1]: Starting modprobe@loop.service...
May 13 00:25:41.692554 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 00:25:41.692564 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 00:25:41.692575 systemd[1]: Stopped systemd-fsck-root.service.
May 13 00:25:41.692585 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 00:25:41.692596 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 00:25:41.692606 kernel: fuse: init (API version 7.34)
May 13 00:25:41.692616 systemd[1]: Stopped systemd-journald.service.
May 13 00:25:41.692627 kernel: loop: module loaded
May 13 00:25:41.692638 systemd[1]: Starting systemd-journald.service...
May 13 00:25:41.692649 systemd[1]: Starting systemd-modules-load.service...
May 13 00:25:41.692660 systemd[1]: Starting systemd-network-generator.service...
May 13 00:25:41.692670 systemd[1]: Starting systemd-remount-fs.service...
May 13 00:25:41.692680 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:25:41.692690 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 00:25:41.692702 systemd[1]: Stopped verity-setup.service.
May 13 00:25:41.692712 systemd[1]: Mounted dev-hugepages.mount.
May 13 00:25:41.692723 systemd[1]: Mounted dev-mqueue.mount.
May 13 00:25:41.692737 systemd[1]: Mounted media.mount.
May 13 00:25:41.692748 systemd[1]: Mounted sys-kernel-debug.mount.
May 13 00:25:41.692759 systemd[1]: Mounted sys-kernel-tracing.mount.
May 13 00:25:41.692769 systemd[1]: Mounted tmp.mount.
May 13 00:25:41.692780 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:25:41.692791 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 00:25:41.692803 systemd[1]: Finished modprobe@configfs.service.
May 13 00:25:41.692817 systemd-journald[993]: Journal started
May 13 00:25:41.692860 systemd-journald[993]: Runtime Journal (/run/log/journal/65e679f3d89542d289a3cc00b67332ab) is 6.0M, max 48.7M, 42.6M free.
May 13 00:25:39.742000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 00:25:39.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 13 00:25:39.822000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 13 00:25:39.822000 audit: BPF prog-id=10 op=LOAD
May 13 00:25:39.822000 audit: BPF prog-id=10 op=UNLOAD
May 13 00:25:39.822000 audit: BPF prog-id=11 op=LOAD
May 13 00:25:39.822000 audit: BPF prog-id=11 op=UNLOAD
May 13 00:25:39.862000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 13 00:25:39.862000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:25:39.862000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 13 00:25:39.863000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 13 00:25:39.863000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:25:39.863000 audit: CWD cwd="/"
May 13 00:25:39.863000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:25:39.863000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:25:39.863000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 13 00:25:41.537000 audit: BPF prog-id=12 op=LOAD
May 13 00:25:41.537000 audit: BPF prog-id=3 op=UNLOAD
May 13 00:25:41.537000 audit: BPF prog-id=13 op=LOAD
May 13 00:25:41.538000 audit: BPF prog-id=14 op=LOAD
May 13 00:25:41.538000 audit: BPF prog-id=4 op=UNLOAD
May 13 00:25:41.538000 audit: BPF prog-id=5 op=UNLOAD
May 13 00:25:41.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.560000 audit: BPF prog-id=12 op=UNLOAD
May 13 00:25:41.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.661000 audit: BPF prog-id=15 op=LOAD
May 13 00:25:41.661000 audit: BPF prog-id=16 op=LOAD
May 13 00:25:41.661000 audit: BPF prog-id=17 op=LOAD
May 13 00:25:41.661000 audit: BPF prog-id=13 op=UNLOAD
May 13 00:25:41.661000 audit: BPF prog-id=14 op=UNLOAD
May 13 00:25:41.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.690000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 13 00:25:41.690000 audit[993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffcbdcd960 a2=4000 a3=1 items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:25:41.690000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 13 00:25:41.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.536030 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:25:39.860950 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:25:41.536042 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 13 00:25:39.861217 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 13 00:25:41.540362 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 00:25:39.861237 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 13 00:25:39.861267 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 13 00:25:39.861276 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 13 00:25:39.861302 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 13 00:25:39.861314 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 13 00:25:39.861517 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 13 00:25:39.861561 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 13 00:25:39.861574 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 13 00:25:39.862301 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 13 00:25:39.862347 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 13 00:25:39.862366 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 13 00:25:39.862380 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 13 00:25:41.695111 systemd[1]: Started systemd-journald.service.
May 13 00:25:39.862397 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 13 00:25:39.862411 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 13 00:25:41.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.291290 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:41Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 00:25:41.291561 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:41Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 00:25:41.291658 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:41Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 00:25:41.291816 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:41Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 00:25:41.291863 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:41Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 13 00:25:41.291915 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-13T00:25:41Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 13 00:25:41.695785 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:25:41.695939 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:25:41.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.697007 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:25:41.697152 systemd[1]: Finished modprobe@drm.service.
May 13 00:25:41.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.698144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:25:41.698296 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:25:41.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.699478 systemd[1]: Finished flatcar-tmpfiles.service.
May 13 00:25:41.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.700506 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 00:25:41.700668 systemd[1]: Finished modprobe@fuse.service.
May 13 00:25:41.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.701672 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:25:41.701810 systemd[1]: Finished modprobe@loop.service.
May 13 00:25:41.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.702819 systemd[1]: Finished systemd-modules-load.service.
May 13 00:25:41.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:25:41.704013 systemd[1]: Finished systemd-network-generator.service.
May 13 00:25:41.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:41.705253 systemd[1]: Finished systemd-remount-fs.service. May 13 00:25:41.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:41.706498 systemd[1]: Reached target network-pre.target. May 13 00:25:41.708352 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:25:41.710208 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:25:41.711092 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:25:41.715512 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:25:41.717232 systemd[1]: Starting systemd-journal-flush.service... May 13 00:25:41.718188 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:25:41.719287 systemd[1]: Starting systemd-random-seed.service... May 13 00:25:41.720262 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:25:41.721295 systemd[1]: Starting systemd-sysctl.service... May 13 00:25:41.724119 systemd[1]: Starting systemd-sysusers.service... May 13 00:25:41.724904 systemd-journald[993]: Time spent on flushing to /var/log/journal/65e679f3d89542d289a3cc00b67332ab is 12.886ms for 991 entries. May 13 00:25:41.724904 systemd-journald[993]: System Journal (/var/log/journal/65e679f3d89542d289a3cc00b67332ab) is 8.0M, max 195.6M, 187.6M free. May 13 00:25:41.747062 systemd-journald[993]: Received client request to flush runtime journal. 
May 13 00:25:41.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:41.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:41.727989 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:25:41.729013 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:25:41.748634 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:25:41.730192 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:25:41.732906 systemd[1]: Starting systemd-udev-settle.service... May 13 00:25:41.733963 systemd[1]: Finished systemd-random-seed.service. May 13 00:25:41.734902 systemd[1]: Reached target first-boot-complete.target. May 13 00:25:41.748353 systemd[1]: Finished systemd-journal-flush.service. May 13 00:25:41.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:41.749710 systemd[1]: Finished systemd-sysctl.service. May 13 00:25:41.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:41.752258 systemd[1]: Finished systemd-sysusers.service. May 13 00:25:41.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 13 00:25:42.084291 systemd[1]: Finished systemd-hwdb-update.service. May 13 00:25:42.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.085000 audit: BPF prog-id=18 op=LOAD May 13 00:25:42.085000 audit: BPF prog-id=19 op=LOAD May 13 00:25:42.085000 audit: BPF prog-id=7 op=UNLOAD May 13 00:25:42.085000 audit: BPF prog-id=8 op=UNLOAD May 13 00:25:42.086483 systemd[1]: Starting systemd-udevd.service... May 13 00:25:42.111891 systemd-udevd[1033]: Using default interface naming scheme 'v252'. May 13 00:25:42.122991 systemd[1]: Started systemd-udevd.service. May 13 00:25:42.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.124000 audit: BPF prog-id=20 op=LOAD May 13 00:25:42.127038 systemd[1]: Starting systemd-networkd.service... May 13 00:25:42.137000 audit: BPF prog-id=21 op=LOAD May 13 00:25:42.137000 audit: BPF prog-id=22 op=LOAD May 13 00:25:42.137000 audit: BPF prog-id=23 op=LOAD May 13 00:25:42.138273 systemd[1]: Starting systemd-userdbd.service... May 13 00:25:42.139495 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 13 00:25:42.174522 systemd[1]: Started systemd-userdbd.service. May 13 00:25:42.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.197326 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:25:42.228852 systemd[1]: Finished systemd-udev-settle.service. 
May 13 00:25:42.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.230868 systemd[1]: Starting lvm2-activation-early.service... May 13 00:25:42.232771 systemd-networkd[1042]: lo: Link UP May 13 00:25:42.232779 systemd-networkd[1042]: lo: Gained carrier May 13 00:25:42.233382 systemd-networkd[1042]: Enumeration completed May 13 00:25:42.233511 systemd[1]: Started systemd-networkd.service. May 13 00:25:42.233582 systemd-networkd[1042]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:25:42.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.238468 systemd-networkd[1042]: eth0: Link UP May 13 00:25:42.238477 systemd-networkd[1042]: eth0: Gained carrier May 13 00:25:42.248893 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:25:42.262628 systemd-networkd[1042]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:25:42.278997 systemd[1]: Finished lvm2-activation-early.service. May 13 00:25:42.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.279960 systemd[1]: Reached target cryptsetup.target. May 13 00:25:42.281744 systemd[1]: Starting lvm2-activation.service... May 13 00:25:42.285185 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:25:42.316451 systemd[1]: Finished lvm2-activation.service. 
May 13 00:25:42.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.317462 systemd[1]: Reached target local-fs-pre.target. May 13 00:25:42.318322 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:25:42.318350 systemd[1]: Reached target local-fs.target. May 13 00:25:42.319089 systemd[1]: Reached target machines.target. May 13 00:25:42.320973 systemd[1]: Starting ldconfig.service... May 13 00:25:42.322027 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:25:42.322075 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:25:42.323054 systemd[1]: Starting systemd-boot-update.service... May 13 00:25:42.324874 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:25:42.326994 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:25:42.328933 systemd[1]: Starting systemd-sysext.service... May 13 00:25:42.331002 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl) May 13 00:25:42.332254 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 00:25:42.333988 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:25:42.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.340863 systemd[1]: Unmounting usr-share-oem.mount... 
May 13 00:25:42.346433 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:25:42.346994 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:25:42.367564 kernel: loop0: detected capacity change from 0 to 194096 May 13 00:25:42.411254 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:25:42.412102 systemd[1]: Finished systemd-machine-id-commit.service. May 13 00:25:42.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.415605 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31) May 13 00:25:42.415605 systemd-fsck[1078]: /dev/vda1: 236 files, 117310/258078 clusters May 13 00:25:42.416556 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:25:42.418469 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:25:42.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.421586 systemd[1]: Mounting boot.mount... May 13 00:25:42.429811 systemd[1]: Mounted boot.mount. May 13 00:25:42.433562 kernel: loop1: detected capacity change from 0 to 194096 May 13 00:25:42.439690 (sd-sysext)[1085]: Using extensions 'kubernetes'. May 13 00:25:42.441952 (sd-sysext)[1085]: Merged extensions into '/usr'. May 13 00:25:42.443978 systemd[1]: Finished systemd-boot-update.service. May 13 00:25:42.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:25:42.467065 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:25:42.469437 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:25:42.471878 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:25:42.473765 systemd[1]: Starting modprobe@loop.service... May 13 00:25:42.474595 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:25:42.474715 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:25:42.475484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:25:42.475662 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:25:42.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.477036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:25:42.477144 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:25:42.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:25:42.478581 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:25:42.478682 systemd[1]: Finished modprobe@loop.service. May 13 00:25:42.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.480037 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:25:42.480135 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:25:42.520953 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:25:42.525087 systemd[1]: Finished ldconfig.service. May 13 00:25:42.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.681679 systemd[1]: Mounting usr-share-oem.mount... May 13 00:25:42.686545 systemd[1]: Mounted usr-share-oem.mount. May 13 00:25:42.688329 systemd[1]: Finished systemd-sysext.service. May 13 00:25:42.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.690322 systemd[1]: Starting ensure-sysext.service... May 13 00:25:42.691953 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:25:42.695970 systemd[1]: Reloading. 
May 13 00:25:42.704587 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:25:42.706974 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:25:42.709594 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:25:42.734280 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-05-13T00:25:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:25:42.734317 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-05-13T00:25:42Z" level=info msg="torcx already run" May 13 00:25:42.789338 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:25:42.789358 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:25:42.804909 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 13 00:25:42.846000 audit: BPF prog-id=24 op=LOAD May 13 00:25:42.846000 audit: BPF prog-id=15 op=UNLOAD May 13 00:25:42.846000 audit: BPF prog-id=25 op=LOAD May 13 00:25:42.846000 audit: BPF prog-id=26 op=LOAD May 13 00:25:42.846000 audit: BPF prog-id=16 op=UNLOAD May 13 00:25:42.846000 audit: BPF prog-id=17 op=UNLOAD May 13 00:25:42.847000 audit: BPF prog-id=27 op=LOAD May 13 00:25:42.847000 audit: BPF prog-id=21 op=UNLOAD May 13 00:25:42.847000 audit: BPF prog-id=28 op=LOAD May 13 00:25:42.847000 audit: BPF prog-id=29 op=LOAD May 13 00:25:42.847000 audit: BPF prog-id=22 op=UNLOAD May 13 00:25:42.847000 audit: BPF prog-id=23 op=UNLOAD May 13 00:25:42.848000 audit: BPF prog-id=30 op=LOAD May 13 00:25:42.848000 audit: BPF prog-id=20 op=UNLOAD May 13 00:25:42.849000 audit: BPF prog-id=31 op=LOAD May 13 00:25:42.849000 audit: BPF prog-id=32 op=LOAD May 13 00:25:42.849000 audit: BPF prog-id=18 op=UNLOAD May 13 00:25:42.849000 audit: BPF prog-id=19 op=UNLOAD May 13 00:25:42.852036 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:25:42.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.856359 systemd[1]: Starting audit-rules.service... May 13 00:25:42.858299 systemd[1]: Starting clean-ca-certificates.service... May 13 00:25:42.860254 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:25:42.862000 audit: BPF prog-id=33 op=LOAD May 13 00:25:42.864344 systemd[1]: Starting systemd-resolved.service... May 13 00:25:42.865000 audit: BPF prog-id=34 op=LOAD May 13 00:25:42.866918 systemd[1]: Starting systemd-timesyncd.service... May 13 00:25:42.868639 systemd[1]: Starting systemd-update-utmp.service... May 13 00:25:42.869952 systemd[1]: Finished clean-ca-certificates.service. 
May 13 00:25:42.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.873041 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:25:42.874000 audit[1162]: SYSTEM_BOOT pid=1162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:25:42.875375 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:25:42.876750 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:25:42.878744 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:25:42.880850 systemd[1]: Starting modprobe@loop.service... May 13 00:25:42.881604 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:25:42.881733 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:25:42.881823 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:25:42.882594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:25:42.882722 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:25:42.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:25:42.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.884022 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:25:42.884129 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:25:42.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.885420 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:25:42.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.887145 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:25:42.887253 systemd[1]: Finished modprobe@loop.service. May 13 00:25:42.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:25:42.892600 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 13 00:25:42.893734 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:25:42.895564 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:25:42.897373 systemd[1]: Starting modprobe@loop.service... May 13 00:25:42.898130 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:25:42.898247 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:25:42.899560 systemd[1]: Starting systemd-update-done.service... May 13 00:25:42.900374 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:25:42.900000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:25:42.900000 audit[1176]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd6792eb0 a2=420 a3=0 items=0 ppid=1151 pid=1176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:25:42.900000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:25:42.905372 augenrules[1176]: No rules May 13 00:25:42.901350 systemd[1]: Finished systemd-update-utmp.service. May 13 00:25:42.902681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:25:42.902791 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:25:42.903964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:25:42.904077 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:25:42.905293 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 13 00:25:42.905418 systemd[1]: Finished modprobe@loop.service. May 13 00:25:42.906661 systemd[1]: Finished audit-rules.service. May 13 00:25:42.907895 systemd[1]: Finished systemd-update-done.service. May 13 00:25:42.911849 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:25:42.913084 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:25:42.915099 systemd[1]: Starting modprobe@drm.service... May 13 00:25:42.916954 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:25:42.918826 systemd[1]: Starting modprobe@loop.service... May 13 00:25:42.919679 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:25:42.919796 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:25:42.921443 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 00:25:42.922515 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:25:42.923596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:25:42.923718 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:25:42.925030 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:25:42.925166 systemd[1]: Finished modprobe@drm.service. May 13 00:25:42.926403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:25:42.926518 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:25:42.927827 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:25:42.927939 systemd[1]: Finished modprobe@loop.service. May 13 00:25:42.929286 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 13 00:25:42.929410 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:25:42.931614 systemd[1]: Finished ensure-sysext.service. May 13 00:25:42.935468 systemd[1]: Started systemd-timesyncd.service. May 13 00:25:42.936266 systemd-timesyncd[1161]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:25:42.936328 systemd-timesyncd[1161]: Initial clock synchronization to Tue 2025-05-13 00:25:42.567399 UTC. May 13 00:25:42.936881 systemd[1]: Reached target time-set.target. May 13 00:25:42.939006 systemd-resolved[1155]: Positive Trust Anchors: May 13 00:25:42.939018 systemd-resolved[1155]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:25:42.939045 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:25:42.953730 systemd-resolved[1155]: Defaulting to hostname 'linux'. May 13 00:25:42.955048 systemd[1]: Started systemd-resolved.service. May 13 00:25:42.955961 systemd[1]: Reached target network.target. May 13 00:25:42.956711 systemd[1]: Reached target nss-lookup.target. May 13 00:25:42.957482 systemd[1]: Reached target sysinit.target. May 13 00:25:42.958346 systemd[1]: Started motdgen.path. May 13 00:25:42.959071 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:25:42.960285 systemd[1]: Started logrotate.timer. May 13 00:25:42.961108 systemd[1]: Started mdadm.timer. May 13 00:25:42.961775 systemd[1]: Started systemd-tmpfiles-clean.timer. 
May 13 00:25:42.962600 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:25:42.962623 systemd[1]: Reached target paths.target. May 13 00:25:42.963336 systemd[1]: Reached target timers.target. May 13 00:25:42.964372 systemd[1]: Listening on dbus.socket. May 13 00:25:42.966089 systemd[1]: Starting docker.socket... May 13 00:25:42.970108 systemd[1]: Listening on sshd.socket. May 13 00:25:42.970981 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:25:42.971424 systemd[1]: Listening on docker.socket. May 13 00:25:42.972254 systemd[1]: Reached target sockets.target. May 13 00:25:42.973016 systemd[1]: Reached target basic.target. May 13 00:25:42.973793 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:25:42.973825 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:25:42.974746 systemd[1]: Starting containerd.service... May 13 00:25:42.976359 systemd[1]: Starting dbus.service... May 13 00:25:42.977974 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:25:42.979893 systemd[1]: Starting extend-filesystems.service... May 13 00:25:42.980808 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 00:25:42.982050 systemd[1]: Starting motdgen.service... May 13 00:25:42.984564 systemd[1]: Starting prepare-helm.service... May 13 00:25:42.987246 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:25:42.989228 systemd[1]: Starting sshd-keygen.service... May 13 00:25:42.990629 jq[1193]: false May 13 00:25:42.992006 systemd[1]: Starting systemd-logind.service... 
May 13 00:25:42.992706 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:25:42.992783 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:25:42.993165 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:25:42.993872 systemd[1]: Starting update-engine.service... May 13 00:25:42.995691 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:25:43.000297 jq[1208]: true May 13 00:25:43.001018 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:25:43.001177 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:25:43.002199 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:25:43.002361 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 00:25:43.015745 jq[1212]: true May 13 00:25:43.019546 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:25:43.019708 systemd[1]: Finished motdgen.service. 
May 13 00:25:43.020844 tar[1211]: linux-arm64/helm May 13 00:25:43.021557 extend-filesystems[1194]: Found loop1 May 13 00:25:43.021557 extend-filesystems[1194]: Found vda May 13 00:25:43.021557 extend-filesystems[1194]: Found vda1 May 13 00:25:43.021557 extend-filesystems[1194]: Found vda2 May 13 00:25:43.021557 extend-filesystems[1194]: Found vda3 May 13 00:25:43.021557 extend-filesystems[1194]: Found usr May 13 00:25:43.030812 extend-filesystems[1194]: Found vda4 May 13 00:25:43.030812 extend-filesystems[1194]: Found vda6 May 13 00:25:43.030812 extend-filesystems[1194]: Found vda7 May 13 00:25:43.030812 extend-filesystems[1194]: Found vda9 May 13 00:25:43.030812 extend-filesystems[1194]: Checking size of /dev/vda9 May 13 00:25:43.038676 systemd-logind[1202]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:25:43.039251 systemd-logind[1202]: New seat seat0. May 13 00:25:43.050197 dbus-daemon[1192]: [system] SELinux support is enabled May 13 00:25:43.050378 systemd[1]: Started dbus.service. May 13 00:25:43.053656 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:25:43.053713 systemd[1]: Reached target system-config.target. May 13 00:25:43.054679 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:25:43.054713 systemd[1]: Reached target user-config.target. May 13 00:25:43.056749 extend-filesystems[1194]: Resized partition /dev/vda9 May 13 00:25:43.062242 dbus-daemon[1192]: [system] Successfully activated service 'org.freedesktop.systemd1' May 13 00:25:43.065918 systemd[1]: Started systemd-logind.service. 
May 13 00:25:43.076310 extend-filesystems[1242]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:25:43.083540 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:25:43.101219 update_engine[1206]: I0513 00:25:43.100894 1206 main.cc:92] Flatcar Update Engine starting May 13 00:25:43.106453 systemd[1]: Started update-engine.service. May 13 00:25:43.108536 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:25:43.109782 update_engine[1206]: I0513 00:25:43.109666 1206 update_check_scheduler.cc:74] Next update check in 2m30s May 13 00:25:43.110906 systemd[1]: Started locksmithd.service. May 13 00:25:43.124646 extend-filesystems[1242]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:25:43.124646 extend-filesystems[1242]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:25:43.124646 extend-filesystems[1242]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:25:43.128572 env[1213]: time="2025-05-13T00:25:43.124506277Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:25:43.128768 bash[1243]: Updated "/home/core/.ssh/authorized_keys" May 13 00:25:43.126646 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:25:43.128863 extend-filesystems[1194]: Resized filesystem in /dev/vda9 May 13 00:25:43.126792 systemd[1]: Finished extend-filesystems.service. May 13 00:25:43.127986 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:25:43.157726 env[1213]: time="2025-05-13T00:25:43.157674601Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:25:43.158053 env[1213]: time="2025-05-13T00:25:43.158033802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 13 00:25:43.159560 env[1213]: time="2025-05-13T00:25:43.159423445Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:25:43.159560 env[1213]: time="2025-05-13T00:25:43.159451680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:25:43.159729 env[1213]: time="2025-05-13T00:25:43.159707097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:25:43.159759 env[1213]: time="2025-05-13T00:25:43.159728502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:25:43.159759 env[1213]: time="2025-05-13T00:25:43.159741323Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:25:43.159799 env[1213]: time="2025-05-13T00:25:43.159749908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:25:43.159866 env[1213]: time="2025-05-13T00:25:43.159851555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:25:43.160103 env[1213]: time="2025-05-13T00:25:43.160087551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:25:43.160245 env[1213]: time="2025-05-13T00:25:43.160227927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:25:43.160266 env[1213]: time="2025-05-13T00:25:43.160246699Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:25:43.160321 env[1213]: time="2025-05-13T00:25:43.160307787Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:25:43.160346 env[1213]: time="2025-05-13T00:25:43.160323965Z" level=info msg="metadata content store policy set" policy=shared May 13 00:25:43.166136 env[1213]: time="2025-05-13T00:25:43.166067900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:25:43.166136 env[1213]: time="2025-05-13T00:25:43.166107582Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:25:43.166136 env[1213]: time="2025-05-13T00:25:43.166128606Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:25:43.166252 env[1213]: time="2025-05-13T00:25:43.166171188Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:25:43.166252 env[1213]: time="2025-05-13T00:25:43.166186680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:25:43.166252 env[1213]: time="2025-05-13T00:25:43.166199386Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:25:43.166252 env[1213]: time="2025-05-13T00:25:43.166210832Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 13 00:25:43.166567 env[1213]: time="2025-05-13T00:25:43.166546987Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:25:43.166622 env[1213]: time="2025-05-13T00:25:43.166568851Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 00:25:43.166622 env[1213]: time="2025-05-13T00:25:43.166582434Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:25:43.166622 env[1213]: time="2025-05-13T00:25:43.166594911Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:25:43.166622 env[1213]: time="2025-05-13T00:25:43.166607007Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:25:43.166731 env[1213]: time="2025-05-13T00:25:43.166712126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:25:43.166801 env[1213]: time="2025-05-13T00:25:43.166785539Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:25:43.167007 env[1213]: time="2025-05-13T00:25:43.166991391Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:25:43.167049 env[1213]: time="2025-05-13T00:25:43.167032523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167071 env[1213]: time="2025-05-13T00:25:43.167052326Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:25:43.167176 env[1213]: time="2025-05-13T00:25:43.167162291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 13 00:25:43.167230 env[1213]: time="2025-05-13T00:25:43.167179118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167230 env[1213]: time="2025-05-13T00:25:43.167191862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167230 env[1213]: time="2025-05-13T00:25:43.167203424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167230 env[1213]: time="2025-05-13T00:25:43.167214871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167230 env[1213]: time="2025-05-13T00:25:43.167226012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167343 env[1213]: time="2025-05-13T00:25:43.167242572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167343 env[1213]: time="2025-05-13T00:25:43.167264397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167343 env[1213]: time="2025-05-13T00:25:43.167277904Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:25:43.167476 env[1213]: time="2025-05-13T00:25:43.167460557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167503 env[1213]: time="2025-05-13T00:25:43.167488449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167538 env[1213]: time="2025-05-13T00:25:43.167501193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 13 00:25:43.167538 env[1213]: time="2025-05-13T00:25:43.167512488Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:25:43.167730 env[1213]: time="2025-05-13T00:25:43.167539426Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:25:43.167730 env[1213]: time="2025-05-13T00:25:43.167551521Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:25:43.167730 env[1213]: time="2025-05-13T00:25:43.167567356Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:25:43.167730 env[1213]: time="2025-05-13T00:25:43.167612647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:25:43.167838 env[1213]: time="2025-05-13T00:25:43.167792324Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:25:43.171248 env[1213]: time="2025-05-13T00:25:43.167846048Z" level=info msg="Connect containerd service" May 13 00:25:43.171248 env[1213]: time="2025-05-13T00:25:43.167877221Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:25:43.171248 env[1213]: time="2025-05-13T00:25:43.168755917Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:25:43.171248 env[1213]: time="2025-05-13T00:25:43.168982068Z" level=info msg="Start subscribing containerd event" May 13 00:25:43.171248 env[1213]: time="2025-05-13T00:25:43.169731643Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 13 00:25:43.171248 env[1213]: time="2025-05-13T00:25:43.169783726Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:25:43.171248 env[1213]: time="2025-05-13T00:25:43.169823179Z" level=info msg="containerd successfully booted in 0.055161s" May 13 00:25:43.170252 systemd[1]: Started containerd.service. May 13 00:25:43.170674 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:25:43.171635 env[1213]: time="2025-05-13T00:25:43.171408143Z" level=info msg="Start recovering state" May 13 00:25:43.171635 env[1213]: time="2025-05-13T00:25:43.171546535Z" level=info msg="Start event monitor" May 13 00:25:43.171635 env[1213]: time="2025-05-13T00:25:43.171570535Z" level=info msg="Start snapshots syncer" May 13 00:25:43.171635 env[1213]: time="2025-05-13T00:25:43.171584348Z" level=info msg="Start cni network conf syncer for default" May 13 00:25:43.171635 env[1213]: time="2025-05-13T00:25:43.171592666Z" level=info msg="Start streaming server" May 13 00:25:43.411991 tar[1211]: linux-arm64/LICENSE May 13 00:25:43.411991 tar[1211]: linux-arm64/README.md May 13 00:25:43.415977 systemd[1]: Finished prepare-helm.service. May 13 00:25:44.306739 systemd-networkd[1042]: eth0: Gained IPv6LL May 13 00:25:44.308548 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:25:44.309834 systemd[1]: Reached target network-online.target. May 13 00:25:44.312208 systemd[1]: Starting kubelet.service... May 13 00:25:44.399950 sshd_keygen[1209]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:25:44.418455 systemd[1]: Finished sshd-keygen.service. May 13 00:25:44.421127 systemd[1]: Starting issuegen.service... May 13 00:25:44.425671 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:25:44.425842 systemd[1]: Finished issuegen.service. May 13 00:25:44.428349 systemd[1]: Starting systemd-user-sessions.service... 
May 13 00:25:44.435262 systemd[1]: Finished systemd-user-sessions.service. May 13 00:25:44.437857 systemd[1]: Started getty@tty1.service. May 13 00:25:44.440289 systemd[1]: Started serial-getty@ttyAMA0.service. May 13 00:25:44.441477 systemd[1]: Reached target getty.target. May 13 00:25:44.818266 systemd[1]: Started kubelet.service. May 13 00:25:44.819773 systemd[1]: Reached target multi-user.target. May 13 00:25:44.822068 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:25:44.829305 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:25:44.829483 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:25:44.830710 systemd[1]: Startup finished in 580ms (kernel) + 4.131s (initrd) + 5.122s (userspace) = 9.835s. May 13 00:25:45.314556 kubelet[1273]: E0513 00:25:45.314425 1273 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:25:45.316619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:25:45.316743 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:25:48.479782 systemd[1]: Created slice system-sshd.slice. May 13 00:25:48.480828 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:42604.service. May 13 00:25:48.520878 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 42604 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:25:48.522972 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:25:48.539697 systemd[1]: Created slice user-500.slice. May 13 00:25:48.540858 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:25:48.542726 systemd-logind[1202]: New session 1 of user core. 
May 13 00:25:48.549066 systemd[1]: Finished user-runtime-dir@500.service. May 13 00:25:48.550399 systemd[1]: Starting user@500.service... May 13 00:25:48.553416 (systemd)[1286]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:25:48.616973 systemd[1286]: Queued start job for default target default.target. May 13 00:25:48.617491 systemd[1286]: Reached target paths.target. May 13 00:25:48.617544 systemd[1286]: Reached target sockets.target. May 13 00:25:48.617556 systemd[1286]: Reached target timers.target. May 13 00:25:48.617566 systemd[1286]: Reached target basic.target. May 13 00:25:48.617609 systemd[1286]: Reached target default.target. May 13 00:25:48.617640 systemd[1286]: Startup finished in 58ms. May 13 00:25:48.617706 systemd[1]: Started user@500.service. May 13 00:25:48.618640 systemd[1]: Started session-1.scope. May 13 00:25:48.669977 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:42620.service. May 13 00:25:48.723365 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 42620 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:25:48.724666 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:25:48.728561 systemd-logind[1202]: New session 2 of user core. May 13 00:25:48.729319 systemd[1]: Started session-2.scope. May 13 00:25:48.781504 sshd[1295]: pam_unix(sshd:session): session closed for user core May 13 00:25:48.784779 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:42636.service. May 13 00:25:48.785221 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:42620.service: Deactivated successfully. May 13 00:25:48.785902 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:25:48.786386 systemd-logind[1202]: Session 2 logged out. Waiting for processes to exit. May 13 00:25:48.787046 systemd-logind[1202]: Removed session 2. 
May 13 00:25:48.820242 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 42636 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:25:48.821379 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:25:48.825573 systemd[1]: Started session-3.scope. May 13 00:25:48.825711 systemd-logind[1202]: New session 3 of user core. May 13 00:25:48.874084 sshd[1300]: pam_unix(sshd:session): session closed for user core May 13 00:25:48.876755 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:42636.service: Deactivated successfully. May 13 00:25:48.877365 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:25:48.878825 systemd-logind[1202]: Session 3 logged out. Waiting for processes to exit. May 13 00:25:48.879825 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:42642.service. May 13 00:25:48.881212 systemd-logind[1202]: Removed session 3. May 13 00:25:48.914769 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 42642 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:25:48.916345 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:25:48.919854 systemd-logind[1202]: New session 4 of user core. May 13 00:25:48.920707 systemd[1]: Started session-4.scope. May 13 00:25:48.976821 sshd[1307]: pam_unix(sshd:session): session closed for user core May 13 00:25:48.981199 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:42642.service: Deactivated successfully. May 13 00:25:48.982135 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:25:48.982745 systemd-logind[1202]: Session 4 logged out. Waiting for processes to exit. May 13 00:25:48.984283 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:42658.service. May 13 00:25:48.985127 systemd-logind[1202]: Removed session 4. 
May 13 00:25:49.019833 sshd[1314]: Accepted publickey for core from 10.0.0.1 port 42658 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:25:49.021040 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:25:49.024512 systemd-logind[1202]: New session 5 of user core. May 13 00:25:49.025349 systemd[1]: Started session-5.scope. May 13 00:25:49.093076 sudo[1317]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:25:49.093307 sudo[1317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:25:49.171803 systemd[1]: Starting docker.service... May 13 00:25:49.257270 env[1329]: time="2025-05-13T00:25:49.257215359Z" level=info msg="Starting up" May 13 00:25:49.259051 env[1329]: time="2025-05-13T00:25:49.259023911Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:25:49.259137 env[1329]: time="2025-05-13T00:25:49.259123722Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:25:49.259202 env[1329]: time="2025-05-13T00:25:49.259186673Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:25:49.259268 env[1329]: time="2025-05-13T00:25:49.259255734Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:25:49.262535 env[1329]: time="2025-05-13T00:25:49.262491416Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:25:49.262626 env[1329]: time="2025-05-13T00:25:49.262612224Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:25:49.262682 env[1329]: time="2025-05-13T00:25:49.262668202Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:25:49.262731 env[1329]: time="2025-05-13T00:25:49.262718812Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:25:49.378649 env[1329]: time="2025-05-13T00:25:49.378555637Z" level=info msg="Loading containers: start." May 13 00:25:49.507542 kernel: Initializing XFRM netlink socket May 13 00:25:49.529541 env[1329]: time="2025-05-13T00:25:49.529479296Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 13 00:25:49.584919 systemd-networkd[1042]: docker0: Link UP May 13 00:25:49.607753 env[1329]: time="2025-05-13T00:25:49.607707248Z" level=info msg="Loading containers: done." May 13 00:25:49.629474 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3210270276-merged.mount: Deactivated successfully. May 13 00:25:49.630628 env[1329]: time="2025-05-13T00:25:49.630583797Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:25:49.630841 env[1329]: time="2025-05-13T00:25:49.630808255Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:25:49.630981 env[1329]: time="2025-05-13T00:25:49.630952605Z" level=info msg="Daemon has completed initialization" May 13 00:25:49.654026 systemd[1]: Started docker.service. May 13 00:25:49.659482 env[1329]: time="2025-05-13T00:25:49.659368607Z" level=info msg="API listen on /run/docker.sock" May 13 00:25:50.289594 env[1213]: time="2025-05-13T00:25:50.289541854Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:25:50.880535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170531205.mount: Deactivated successfully. 
May 13 00:25:52.374852 env[1213]: time="2025-05-13T00:25:52.374803305Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:52.376234 env[1213]: time="2025-05-13T00:25:52.376199758Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:52.378391 env[1213]: time="2025-05-13T00:25:52.378361969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:52.380552 env[1213]: time="2025-05-13T00:25:52.380509308Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:52.381475 env[1213]: time="2025-05-13T00:25:52.381447364Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 13 00:25:52.389748 env[1213]: time="2025-05-13T00:25:52.389696780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 00:25:53.990403 env[1213]: time="2025-05-13T00:25:53.990357867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:53.991963 env[1213]: time="2025-05-13T00:25:53.991937396Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:53.994028 env[1213]: time="2025-05-13T00:25:53.993997070Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:53.995874 env[1213]: time="2025-05-13T00:25:53.995850635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:53.996533 env[1213]: time="2025-05-13T00:25:53.996473509Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 13 00:25:54.005477 env[1213]: time="2025-05-13T00:25:54.005448684Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 13 00:25:55.095395 env[1213]: time="2025-05-13T00:25:55.095342872Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:55.097483 env[1213]: time="2025-05-13T00:25:55.097443743Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:55.099372 env[1213]: time="2025-05-13T00:25:55.099340962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:55.101381 env[1213]: time="2025-05-13T00:25:55.101346803Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:55.103011 env[1213]: time="2025-05-13T00:25:55.102976172Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 13 00:25:55.114190 env[1213]: time="2025-05-13T00:25:55.114159877Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 00:25:55.552376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 00:25:55.552588 systemd[1]: Stopped kubelet.service.
May 13 00:25:55.553928 systemd[1]: Starting kubelet.service...
May 13 00:25:55.655308 systemd[1]: Started kubelet.service.
May 13 00:25:55.705361 kubelet[1491]: E0513 00:25:55.705308 1491 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:25:55.707594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:25:55.707720 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:25:56.646634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2181943981.mount: Deactivated successfully.
May 13 00:25:57.064170 env[1213]: time="2025-05-13T00:25:57.064126952Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:57.065380 env[1213]: time="2025-05-13T00:25:57.065351583Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:57.066663 env[1213]: time="2025-05-13T00:25:57.066638847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:57.067650 env[1213]: time="2025-05-13T00:25:57.067627568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:57.068153 env[1213]: time="2025-05-13T00:25:57.068127985Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 13 00:25:57.077211 env[1213]: time="2025-05-13T00:25:57.077186289Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 00:25:57.655927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241494750.mount: Deactivated successfully.
May 13 00:25:58.528170 env[1213]: time="2025-05-13T00:25:58.528110568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:58.529890 env[1213]: time="2025-05-13T00:25:58.529850318Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:58.532123 env[1213]: time="2025-05-13T00:25:58.532089344Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:58.533985 env[1213]: time="2025-05-13T00:25:58.533953476Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:58.534799 env[1213]: time="2025-05-13T00:25:58.534763207Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 13 00:25:58.544884 env[1213]: time="2025-05-13T00:25:58.544837589Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 13 00:25:58.994701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864019630.mount: Deactivated successfully.
May 13 00:25:58.998553 env[1213]: time="2025-05-13T00:25:58.998491542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:59.000056 env[1213]: time="2025-05-13T00:25:59.000014449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:59.001947 env[1213]: time="2025-05-13T00:25:59.001910149Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:59.003081 env[1213]: time="2025-05-13T00:25:59.003047564Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:25:59.004379 env[1213]: time="2025-05-13T00:25:59.004325173Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 13 00:25:59.014946 env[1213]: time="2025-05-13T00:25:59.014909649Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 13 00:25:59.481092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229675149.mount: Deactivated successfully.
May 13 00:26:01.551014 env[1213]: time="2025-05-13T00:26:01.550952786Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:01.552594 env[1213]: time="2025-05-13T00:26:01.552564999Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:01.554790 env[1213]: time="2025-05-13T00:26:01.554756150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:01.556886 env[1213]: time="2025-05-13T00:26:01.556853334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:01.557772 env[1213]: time="2025-05-13T00:26:01.557738829Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 13 00:26:05.802386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 00:26:05.802585 systemd[1]: Stopped kubelet.service.
May 13 00:26:05.804047 systemd[1]: Starting kubelet.service...
May 13 00:26:05.883865 systemd[1]: Started kubelet.service.
May 13 00:26:05.922945 kubelet[1597]: E0513 00:26:05.922889 1597 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:26:05.924891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:26:05.925016 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:26:06.829551 systemd[1]: Stopped kubelet.service.
May 13 00:26:06.831469 systemd[1]: Starting kubelet.service...
May 13 00:26:06.845922 systemd[1]: Reloading.
May 13 00:26:06.887992 /usr/lib/systemd/system-generators/torcx-generator[1634]: time="2025-05-13T00:26:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:26:06.888021 /usr/lib/systemd/system-generators/torcx-generator[1634]: time="2025-05-13T00:26:06Z" level=info msg="torcx already run"
May 13 00:26:06.953916 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:26:06.953936 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:26:06.969368 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:26:07.034352 systemd[1]: Started kubelet.service.
May 13 00:26:07.036294 systemd[1]: Stopping kubelet.service...
May 13 00:26:07.036818 systemd[1]: kubelet.service: Deactivated successfully.
May 13 00:26:07.036995 systemd[1]: Stopped kubelet.service.
May 13 00:26:07.038648 systemd[1]: Starting kubelet.service...
May 13 00:26:07.118000 systemd[1]: Started kubelet.service.
May 13 00:26:07.152535 kubelet[1677]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:26:07.152535 kubelet[1677]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 00:26:07.152535 kubelet[1677]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:26:07.152871 kubelet[1677]: I0513 00:26:07.152654 1677 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:26:08.050731 kubelet[1677]: I0513 00:26:08.050699 1677 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 00:26:08.050731 kubelet[1677]: I0513 00:26:08.050725 1677 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:26:08.050955 kubelet[1677]: I0513 00:26:08.050942 1677 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 00:26:08.081239 kubelet[1677]: I0513 00:26:08.081205 1677 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:26:08.081239 kubelet[1677]: E0513 00:26:08.081210 1677 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:08.092793 kubelet[1677]: I0513 00:26:08.092762 1677 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:26:08.093989 kubelet[1677]: I0513 00:26:08.093949 1677 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:26:08.094147 kubelet[1677]: I0513 00:26:08.093990 1677 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 00:26:08.094228 kubelet[1677]: I0513 00:26:08.094203 1677 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:26:08.094228 kubelet[1677]: I0513 00:26:08.094213 1677 container_manager_linux.go:301] "Creating device plugin manager"
May 13 00:26:08.094472 kubelet[1677]: I0513 00:26:08.094451 1677 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:26:08.095453 kubelet[1677]: I0513 00:26:08.095433 1677 kubelet.go:400] "Attempting to sync node with API server"
May 13 00:26:08.095453 kubelet[1677]: I0513 00:26:08.095453 1677 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:26:08.096105 kubelet[1677]: I0513 00:26:08.096090 1677 kubelet.go:312] "Adding apiserver pod source"
May 13 00:26:08.096166 kubelet[1677]: W0513 00:26:08.096123 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:08.096200 kubelet[1677]: I0513 00:26:08.096172 1677 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:26:08.096200 kubelet[1677]: E0513 00:26:08.096179 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:08.096667 kubelet[1677]: W0513 00:26:08.096633 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:08.096775 kubelet[1677]: E0513 00:26:08.096762 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:08.097226 kubelet[1677]: I0513 00:26:08.097210 1677 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 13 00:26:08.097566 kubelet[1677]: I0513 00:26:08.097552 1677 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:26:08.097661 kubelet[1677]: W0513 00:26:08.097651 1677 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 00:26:08.098370 kubelet[1677]: I0513 00:26:08.098355 1677 server.go:1264] "Started kubelet"
May 13 00:26:08.099393 kubelet[1677]: I0513 00:26:08.099340 1677 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:26:08.101059 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 13 00:26:08.101300 kubelet[1677]: I0513 00:26:08.101273 1677 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:26:08.103659 kubelet[1677]: I0513 00:26:08.103631 1677 server.go:455] "Adding debug handlers to kubelet server"
May 13 00:26:08.103841 kubelet[1677]: E0513 00:26:08.103675 1677 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee86c704be00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:26:08.098336256 +0000 UTC m=+0.977457423,LastTimestamp:2025-05-13 00:26:08.098336256 +0000 UTC m=+0.977457423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 00:26:08.104533 kubelet[1677]: I0513 00:26:08.104460 1677 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:26:08.104684 kubelet[1677]: I0513 00:26:08.104667 1677 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:26:08.107670 kubelet[1677]: E0513 00:26:08.107639 1677 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:26:08.107809 kubelet[1677]: I0513 00:26:08.107782 1677 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 00:26:08.107899 kubelet[1677]: I0513 00:26:08.107885 1677 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 00:26:08.109894 kubelet[1677]: W0513 00:26:08.109847 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:08.109970 kubelet[1677]: E0513 00:26:08.109898 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:08.111179 kubelet[1677]: I0513 00:26:08.111152 1677 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:26:08.111350 kubelet[1677]: I0513 00:26:08.111326 1677 factory.go:221] Registration of the systemd container factory successfully
May 13 00:26:08.111498 kubelet[1677]: I0513 00:26:08.111479 1677 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:26:08.112969 kubelet[1677]: E0513 00:26:08.112940 1677 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 00:26:08.113150 kubelet[1677]: I0513 00:26:08.113132 1677 factory.go:221] Registration of the containerd container factory successfully
May 13 00:26:08.113620 kubelet[1677]: E0513 00:26:08.113588 1677 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms"
May 13 00:26:08.122380 kubelet[1677]: I0513 00:26:08.122332 1677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:26:08.123438 kubelet[1677]: I0513 00:26:08.123410 1677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:26:08.123586 kubelet[1677]: I0513 00:26:08.123571 1677 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 00:26:08.123625 kubelet[1677]: I0513 00:26:08.123591 1677 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 00:26:08.123653 kubelet[1677]: E0513 00:26:08.123628 1677 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:26:08.125691 kubelet[1677]: I0513 00:26:08.125670 1677 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 00:26:08.125691 kubelet[1677]: I0513 00:26:08.125687 1677 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 00:26:08.125802 kubelet[1677]: I0513 00:26:08.125707 1677 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:26:08.125802 kubelet[1677]: W0513 00:26:08.125732 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:08.125802 kubelet[1677]: E0513 00:26:08.125768 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:08.209412 kubelet[1677]: I0513 00:26:08.209382 1677 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:26:08.213510 kubelet[1677]: E0513 00:26:08.211511 1677 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
May 13 00:26:08.224681 kubelet[1677]: I0513 00:26:08.224656 1677 policy_none.go:49] "None policy: Start"
May 13 00:26:08.224746 kubelet[1677]: E0513 00:26:08.224685 1677 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 00:26:08.225234 kubelet[1677]: I0513 00:26:08.225210 1677 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 00:26:08.225234 kubelet[1677]: I0513 00:26:08.225234 1677 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:26:08.230212 systemd[1]: Created slice kubepods.slice.
May 13 00:26:08.235519 systemd[1]: Created slice kubepods-burstable.slice.
May 13 00:26:08.240463 systemd[1]: Created slice kubepods-besteffort.slice.
May 13 00:26:08.249146 kubelet[1677]: I0513 00:26:08.249125 1677 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:26:08.249323 kubelet[1677]: I0513 00:26:08.249245 1677 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:26:08.249368 kubelet[1677]: I0513 00:26:08.249344 1677 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:26:08.251516 kubelet[1677]: E0513 00:26:08.251498 1677 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 13 00:26:08.314159 kubelet[1677]: E0513 00:26:08.314107 1677 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms"
May 13 00:26:08.413504 kubelet[1677]: I0513 00:26:08.413485 1677 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:26:08.413905 kubelet[1677]: E0513 00:26:08.413882 1677 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
May 13 00:26:08.425100 kubelet[1677]: I0513 00:26:08.425071 1677 topology_manager.go:215] "Topology Admit Handler" podUID="f139ec8d494e1d65eecf17bed864d049" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 13 00:26:08.426103 kubelet[1677]: I0513 00:26:08.426072 1677 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 13 00:26:08.426843 kubelet[1677]: I0513 00:26:08.426822 1677 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 13 00:26:08.432539 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice.
May 13 00:26:08.449094 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice.
May 13 00:26:08.463813 systemd[1]: Created slice kubepods-burstable-podf139ec8d494e1d65eecf17bed864d049.slice.
May 13 00:26:08.513464 kubelet[1677]: I0513 00:26:08.513417 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:26:08.513464 kubelet[1677]: I0513 00:26:08.513452 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f139ec8d494e1d65eecf17bed864d049-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f139ec8d494e1d65eecf17bed864d049\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:26:08.513659 kubelet[1677]: I0513 00:26:08.513474 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:08.513659 kubelet[1677]: I0513 00:26:08.513491 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:08.513659 kubelet[1677]: I0513 00:26:08.513507 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:08.513659 kubelet[1677]: I0513 00:26:08.513521 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:08.513659 kubelet[1677]: I0513 00:26:08.513551 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f139ec8d494e1d65eecf17bed864d049-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f139ec8d494e1d65eecf17bed864d049\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:26:08.513851 kubelet[1677]: I0513 00:26:08.513566 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f139ec8d494e1d65eecf17bed864d049-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f139ec8d494e1d65eecf17bed864d049\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:26:08.513851 kubelet[1677]: I0513 00:26:08.513580 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:08.715484 kubelet[1677]: E0513 00:26:08.714941 1677 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms"
May 13 00:26:08.747493 kubelet[1677]: E0513 00:26:08.747306 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:08.748326 env[1213]: time="2025-05-13T00:26:08.748043766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}"
May 13 00:26:08.763365 kubelet[1677]: E0513 00:26:08.763314 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:08.764051 env[1213]: time="2025-05-13T00:26:08.763807368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}"
May 13 00:26:08.765672 kubelet[1677]: E0513 00:26:08.765653 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:08.766016 env[1213]: time="2025-05-13T00:26:08.765992267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f139ec8d494e1d65eecf17bed864d049,Namespace:kube-system,Attempt:0,}"
May 13 00:26:08.815593 kubelet[1677]: I0513 00:26:08.815568 1677 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:26:08.815927 kubelet[1677]: E0513 00:26:08.815902 1677 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
May 13 00:26:09.199088 kubelet[1677]: W0513 00:26:09.199013 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:09.199088 kubelet[1677]: E0513 00:26:09.199084 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:09.337994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount462836815.mount: Deactivated successfully.
May 13 00:26:09.341485 env[1213]: time="2025-05-13T00:26:09.341430856Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.343935 env[1213]: time="2025-05-13T00:26:09.343881263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.344655 env[1213]: time="2025-05-13T00:26:09.344627952Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.346622 env[1213]: time="2025-05-13T00:26:09.346594931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.347920 env[1213]: time="2025-05-13T00:26:09.347892031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.349200 env[1213]: time="2025-05-13T00:26:09.349172715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.350661 env[1213]: time="2025-05-13T00:26:09.350634539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.352209 env[1213]: time="2025-05-13T00:26:09.352180003Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.354945 env[1213]: time="2025-05-13T00:26:09.354885563Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.355588 env[1213]: time="2025-05-13T00:26:09.355562633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.357340 env[1213]: time="2025-05-13T00:26:09.357311285Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.358034 env[1213]: time="2025-05-13T00:26:09.358008965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:26:09.382003 env[1213]: time="2025-05-13T00:26:09.381936095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:26:09.382127 env[1213]: time="2025-05-13T00:26:09.381977556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:26:09.382127 env[1213]: time="2025-05-13T00:26:09.381989938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:26:09.382416 env[1213]: time="2025-05-13T00:26:09.382363443Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a751577e44f25c00d3164cb71f6ba8f6d66e844c04cfbc61ee2338b2cd88e2ab pid=1726 runtime=io.containerd.runc.v2
May 13 00:26:09.384996 env[1213]: time="2025-05-13T00:26:09.384930721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:26:09.384996 env[1213]: time="2025-05-13T00:26:09.384967469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:26:09.384996 env[1213]: time="2025-05-13T00:26:09.384979052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:26:09.385312 env[1213]: time="2025-05-13T00:26:09.385129197Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/730d67e788fe212268927b2914da7e3eab3378e577da366687d5acf2ee89a766 pid=1746 runtime=io.containerd.runc.v2 May 13 00:26:09.385995 env[1213]: time="2025-05-13T00:26:09.385935880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:26:09.385995 env[1213]: time="2025-05-13T00:26:09.385973905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:26:09.385995 env[1213]: time="2025-05-13T00:26:09.385984171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:26:09.386221 env[1213]: time="2025-05-13T00:26:09.386168866Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ad8a3e4308f0e2d5af009260775ea11858fa0e5d1bd0e4f9c842774e76904c5 pid=1727 runtime=io.containerd.runc.v2 May 13 00:26:09.396719 systemd[1]: Started cri-containerd-a751577e44f25c00d3164cb71f6ba8f6d66e844c04cfbc61ee2338b2cd88e2ab.scope. May 13 00:26:09.401864 systemd[1]: Started cri-containerd-730d67e788fe212268927b2914da7e3eab3378e577da366687d5acf2ee89a766.scope. May 13 00:26:09.407305 systemd[1]: Started cri-containerd-4ad8a3e4308f0e2d5af009260775ea11858fa0e5d1bd0e4f9c842774e76904c5.scope. 
May 13 00:26:09.409706 kubelet[1677]: W0513 00:26:09.408259 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:09.409706 kubelet[1677]: E0513 00:26:09.408326 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:09.466865 env[1213]: time="2025-05-13T00:26:09.466144029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"730d67e788fe212268927b2914da7e3eab3378e577da366687d5acf2ee89a766\""
May 13 00:26:09.467330 kubelet[1677]: E0513 00:26:09.467304 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:09.469495 env[1213]: time="2025-05-13T00:26:09.469456878Z" level=info msg="CreateContainer within sandbox \"730d67e788fe212268927b2914da7e3eab3378e577da366687d5acf2ee89a766\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 00:26:09.479180 env[1213]: time="2025-05-13T00:26:09.479150538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ad8a3e4308f0e2d5af009260775ea11858fa0e5d1bd0e4f9c842774e76904c5\""
May 13 00:26:09.480107 kubelet[1677]: E0513 00:26:09.480084 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:09.481872 env[1213]: time="2025-05-13T00:26:09.481845913Z" level=info msg="CreateContainer within sandbox \"4ad8a3e4308f0e2d5af009260775ea11858fa0e5d1bd0e4f9c842774e76904c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 00:26:09.482147 env[1213]: time="2025-05-13T00:26:09.482053256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f139ec8d494e1d65eecf17bed864d049,Namespace:kube-system,Attempt:0,} returns sandbox id \"a751577e44f25c00d3164cb71f6ba8f6d66e844c04cfbc61ee2338b2cd88e2ab\""
May 13 00:26:09.483204 kubelet[1677]: E0513 00:26:09.483183 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:09.484086 env[1213]: time="2025-05-13T00:26:09.484047916Z" level=info msg="CreateContainer within sandbox \"730d67e788fe212268927b2914da7e3eab3378e577da366687d5acf2ee89a766\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6263184c595ee27b8cc15c55440cdf4f0bcc6f33341eddae8d8ba3517edfe59\""
May 13 00:26:09.484644 env[1213]: time="2025-05-13T00:26:09.484593574Z" level=info msg="StartContainer for \"e6263184c595ee27b8cc15c55440cdf4f0bcc6f33341eddae8d8ba3517edfe59\""
May 13 00:26:09.484993 env[1213]: time="2025-05-13T00:26:09.484971991Z" level=info msg="CreateContainer within sandbox \"a751577e44f25c00d3164cb71f6ba8f6d66e844c04cfbc61ee2338b2cd88e2ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 00:26:09.499959 env[1213]: time="2025-05-13T00:26:09.499915723Z" level=info msg="CreateContainer within sandbox \"4ad8a3e4308f0e2d5af009260775ea11858fa0e5d1bd0e4f9c842774e76904c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"39a36df27757ed02e9daf5e9e74000770434edc94c8989f9c10b62e81cb9ac13\""
May 13 00:26:09.500608 env[1213]: time="2025-05-13T00:26:09.500575657Z" level=info msg="StartContainer for \"39a36df27757ed02e9daf5e9e74000770434edc94c8989f9c10b62e81cb9ac13\""
May 13 00:26:09.500982 env[1213]: time="2025-05-13T00:26:09.500577015Z" level=info msg="CreateContainer within sandbox \"a751577e44f25c00d3164cb71f6ba8f6d66e844c04cfbc61ee2338b2cd88e2ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a63286a7fe7a7682ff8b26c20bec96d877a168c0b2f5565238645507bce38c75\""
May 13 00:26:09.501308 env[1213]: time="2025-05-13T00:26:09.501286597Z" level=info msg="StartContainer for \"a63286a7fe7a7682ff8b26c20bec96d877a168c0b2f5565238645507bce38c75\""
May 13 00:26:09.506016 systemd[1]: Started cri-containerd-e6263184c595ee27b8cc15c55440cdf4f0bcc6f33341eddae8d8ba3517edfe59.scope.
May 13 00:26:09.513053 kubelet[1677]: W0513 00:26:09.512991 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:09.513164 kubelet[1677]: E0513 00:26:09.513073 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:09.515542 kubelet[1677]: E0513 00:26:09.515491 1677 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="1.6s"
May 13 00:26:09.518117 systemd[1]: Started cri-containerd-a63286a7fe7a7682ff8b26c20bec96d877a168c0b2f5565238645507bce38c75.scope.
May 13 00:26:09.522080 systemd[1]: Started cri-containerd-39a36df27757ed02e9daf5e9e74000770434edc94c8989f9c10b62e81cb9ac13.scope.
May 13 00:26:09.594127 env[1213]: time="2025-05-13T00:26:09.594081657Z" level=info msg="StartContainer for \"e6263184c595ee27b8cc15c55440cdf4f0bcc6f33341eddae8d8ba3517edfe59\" returns successfully"
May 13 00:26:09.609856 env[1213]: time="2025-05-13T00:26:09.609811662Z" level=info msg="StartContainer for \"39a36df27757ed02e9daf5e9e74000770434edc94c8989f9c10b62e81cb9ac13\" returns successfully"
May 13 00:26:09.615998 env[1213]: time="2025-05-13T00:26:09.615963880Z" level=info msg="StartContainer for \"a63286a7fe7a7682ff8b26c20bec96d877a168c0b2f5565238645507bce38c75\" returns successfully"
May 13 00:26:09.624573 kubelet[1677]: I0513 00:26:09.623853 1677 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:26:09.624573 kubelet[1677]: E0513 00:26:09.624202 1677 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
May 13 00:26:09.634491 kubelet[1677]: W0513 00:26:09.634414 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:09.634491 kubelet[1677]: E0513 00:26:09.634474 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:26:10.130769 kubelet[1677]: E0513 00:26:10.130739 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:10.132809 kubelet[1677]: E0513 00:26:10.132787 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:10.134898 kubelet[1677]: E0513 00:26:10.134873 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:11.129159 kubelet[1677]: E0513 00:26:11.129120 1677 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 13 00:26:11.136519 kubelet[1677]: E0513 00:26:11.136494 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:11.225387 kubelet[1677]: I0513 00:26:11.225344 1677 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:26:11.237342 kubelet[1677]: I0513 00:26:11.237304 1677 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 13 00:26:11.245309 kubelet[1677]: E0513 00:26:11.245272 1677 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:26:11.345829 kubelet[1677]: E0513 00:26:11.345788 1677 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:26:11.821364 kubelet[1677]: E0513 00:26:11.821332 1677 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:11.822042 kubelet[1677]: E0513 00:26:11.822017 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:12.098836 kubelet[1677]: I0513 00:26:12.098731 1677 apiserver.go:52] "Watching apiserver"
May 13 00:26:12.108674 kubelet[1677]: I0513 00:26:12.108646 1677 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:26:13.067721 systemd[1]: Reloading.
May 13 00:26:13.126616 /usr/lib/systemd/system-generators/torcx-generator[1979]: time="2025-05-13T00:26:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:26:13.127013 /usr/lib/systemd/system-generators/torcx-generator[1979]: time="2025-05-13T00:26:13Z" level=info msg="torcx already run"
May 13 00:26:13.190958 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:26:13.191140 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:26:13.207592 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:26:13.311869 systemd[1]: Stopping kubelet.service...
May 13 00:26:13.328875 systemd[1]: kubelet.service: Deactivated successfully.
May 13 00:26:13.329252 systemd[1]: Stopped kubelet.service.
May 13 00:26:13.329314 systemd[1]: kubelet.service: Consumed 1.322s CPU time.
May 13 00:26:13.330987 systemd[1]: Starting kubelet.service...
May 13 00:26:13.415828 systemd[1]: Started kubelet.service.
May 13 00:26:13.461113 kubelet[2022]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:26:13.461113 kubelet[2022]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 00:26:13.461113 kubelet[2022]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:26:13.461655 kubelet[2022]: I0513 00:26:13.461614 2022 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:26:13.466101 kubelet[2022]: I0513 00:26:13.466047 2022 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 00:26:13.466101 kubelet[2022]: I0513 00:26:13.466075 2022 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:26:13.466481 kubelet[2022]: I0513 00:26:13.466246 2022 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 00:26:13.469068 kubelet[2022]: I0513 00:26:13.469049 2022 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 00:26:13.470604 kubelet[2022]: I0513 00:26:13.470578 2022 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:26:13.476378 kubelet[2022]: I0513 00:26:13.476353 2022 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:26:13.477311 kubelet[2022]: I0513 00:26:13.477262 2022 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:26:13.477702 kubelet[2022]: I0513 00:26:13.477441 2022 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 00:26:13.477858 kubelet[2022]: I0513 00:26:13.477844 2022 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:26:13.477928 kubelet[2022]: I0513 00:26:13.477919 2022 container_manager_linux.go:301] "Creating device plugin manager"
May 13 00:26:13.478075 kubelet[2022]: I0513 00:26:13.478056 2022 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:26:13.478309 kubelet[2022]: I0513 00:26:13.478283 2022 kubelet.go:400] "Attempting to sync node with API server"
May 13 00:26:13.478414 kubelet[2022]: I0513 00:26:13.478402 2022 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:26:13.478578 kubelet[2022]: I0513 00:26:13.478564 2022 kubelet.go:312] "Adding apiserver pod source"
May 13 00:26:13.482017 kubelet[2022]: I0513 00:26:13.481992 2022 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:26:13.483300 kubelet[2022]: I0513 00:26:13.483277 2022 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 13 00:26:13.483633 kubelet[2022]: I0513 00:26:13.483621 2022 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:26:13.484605 kubelet[2022]: I0513 00:26:13.484584 2022 server.go:1264] "Started kubelet"
May 13 00:26:13.485223 kubelet[2022]: I0513 00:26:13.485195 2022 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:26:13.485511 kubelet[2022]: I0513 00:26:13.485467 2022 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:26:13.486850 kubelet[2022]: I0513 00:26:13.486827 2022 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:26:13.490807 kubelet[2022]: I0513 00:26:13.488730 2022 server.go:455] "Adding debug handlers to kubelet server"
May 13 00:26:13.491292 kubelet[2022]: E0513 00:26:13.491267 2022 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 00:26:13.491593 kubelet[2022]: I0513 00:26:13.489078 2022 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:26:13.491713 kubelet[2022]: I0513 00:26:13.491695 2022 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 00:26:13.491802 kubelet[2022]: I0513 00:26:13.491789 2022 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 00:26:13.491922 kubelet[2022]: I0513 00:26:13.491909 2022 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:26:13.498274 kubelet[2022]: I0513 00:26:13.498245 2022 factory.go:221] Registration of the systemd container factory successfully
May 13 00:26:13.498496 kubelet[2022]: I0513 00:26:13.498471 2022 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:26:13.502374 kubelet[2022]: I0513 00:26:13.502299 2022 factory.go:221] Registration of the containerd container factory successfully
May 13 00:26:13.515085 kubelet[2022]: I0513 00:26:13.515024 2022 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:26:13.518202 kubelet[2022]: I0513 00:26:13.516139 2022 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:26:13.518202 kubelet[2022]: I0513 00:26:13.516178 2022 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 00:26:13.518202 kubelet[2022]: I0513 00:26:13.516198 2022 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 00:26:13.518202 kubelet[2022]: E0513 00:26:13.516238 2022 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:26:13.542834 kubelet[2022]: I0513 00:26:13.542809 2022 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 00:26:13.543101 kubelet[2022]: I0513 00:26:13.543077 2022 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 00:26:13.543177 kubelet[2022]: I0513 00:26:13.543167 2022 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:26:13.543381 kubelet[2022]: I0513 00:26:13.543366 2022 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 00:26:13.543472 kubelet[2022]: I0513 00:26:13.543443 2022 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 00:26:13.543549 kubelet[2022]: I0513 00:26:13.543517 2022 policy_none.go:49] "None policy: Start"
May 13 00:26:13.544155 kubelet[2022]: I0513 00:26:13.544136 2022 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 00:26:13.544247 kubelet[2022]: I0513 00:26:13.544235 2022 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:26:13.544439 kubelet[2022]: I0513 00:26:13.544421 2022 state_mem.go:75] "Updated machine memory state"
May 13 00:26:13.548063 kubelet[2022]: I0513 00:26:13.548041 2022 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:26:13.550403 kubelet[2022]: I0513 00:26:13.550364 2022 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:26:13.550615 kubelet[2022]: I0513 00:26:13.550599 2022 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:26:13.595681 kubelet[2022]: I0513 00:26:13.595586 2022 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:26:13.603701 kubelet[2022]: I0513 00:26:13.603647 2022 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
May 13 00:26:13.603848 kubelet[2022]: I0513 00:26:13.603738 2022 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 13 00:26:13.616401 kubelet[2022]: I0513 00:26:13.616361 2022 topology_manager.go:215] "Topology Admit Handler" podUID="f139ec8d494e1d65eecf17bed864d049" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 13 00:26:13.616684 kubelet[2022]: I0513 00:26:13.616660 2022 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 13 00:26:13.616806 kubelet[2022]: I0513 00:26:13.616788 2022 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 13 00:26:13.693096 kubelet[2022]: I0513 00:26:13.693059 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:26:13.693292 kubelet[2022]: I0513 00:26:13.693272 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f139ec8d494e1d65eecf17bed864d049-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f139ec8d494e1d65eecf17bed864d049\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:26:13.693375 kubelet[2022]: I0513 00:26:13.693358 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:13.693460 kubelet[2022]: I0513 00:26:13.693447 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:13.693559 kubelet[2022]: I0513 00:26:13.693521 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:13.693670 kubelet[2022]: I0513 00:26:13.693655 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f139ec8d494e1d65eecf17bed864d049-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f139ec8d494e1d65eecf17bed864d049\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:26:13.693748 kubelet[2022]: I0513 00:26:13.693735 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f139ec8d494e1d65eecf17bed864d049-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f139ec8d494e1d65eecf17bed864d049\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:26:13.693845 kubelet[2022]: I0513 00:26:13.693831 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:13.693920 kubelet[2022]: I0513 00:26:13.693907 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:13.923587 kubelet[2022]: E0513 00:26:13.923416 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:13.924235 kubelet[2022]: E0513 00:26:13.923811 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:13.924235 kubelet[2022]: E0513 00:26:13.923812 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:14.482816 kubelet[2022]: I0513 00:26:14.482781 2022 apiserver.go:52] "Watching apiserver"
May 13 00:26:14.492383 kubelet[2022]: I0513 00:26:14.492125 2022 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:26:14.529407 kubelet[2022]: E0513 00:26:14.529343 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:14.537832 kubelet[2022]: E0513 00:26:14.537776 2022 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 13 00:26:14.537832 kubelet[2022]: E0513 00:26:14.537782 2022 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 13 00:26:14.538544 kubelet[2022]: E0513 00:26:14.538231 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:14.538544 kubelet[2022]: E0513 00:26:14.538273 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:14.557775 kubelet[2022]: I0513 00:26:14.557701 2022 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.557682947 podStartE2EDuration="1.557682947s" podCreationTimestamp="2025-05-13 00:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:26:14.548136723 +0000 UTC m=+1.128844709" watchObservedRunningTime="2025-05-13 00:26:14.557682947 +0000 UTC m=+1.138390973"
May 13 00:26:14.567102 kubelet[2022]: I0513 00:26:14.566655 2022 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.566636527 podStartE2EDuration="1.566636527s" podCreationTimestamp="2025-05-13 00:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:26:14.55786985 +0000 UTC m=+1.138577876" watchObservedRunningTime="2025-05-13 00:26:14.566636527
+0000 UTC m=+1.147344553" May 13 00:26:14.567102 kubelet[2022]: I0513 00:26:14.566739 2022 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5667337350000001 podStartE2EDuration="1.566733735s" podCreationTimestamp="2025-05-13 00:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:26:14.566614103 +0000 UTC m=+1.147322129" watchObservedRunningTime="2025-05-13 00:26:14.566733735 +0000 UTC m=+1.147441761" May 13 00:26:15.313499 sudo[1317]: pam_unix(sudo:session): session closed for user root May 13 00:26:15.315875 sshd[1314]: pam_unix(sshd:session): session closed for user core May 13 00:26:15.319217 systemd-logind[1202]: Session 5 logged out. Waiting for processes to exit. May 13 00:26:15.319478 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:42658.service: Deactivated successfully. May 13 00:26:15.320508 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:26:15.320695 systemd[1]: session-5.scope: Consumed 6.367s CPU time. May 13 00:26:15.321281 systemd-logind[1202]: Removed session 5. 
May 13 00:26:15.530885 kubelet[2022]: E0513 00:26:15.530853 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:15.531238 kubelet[2022]: E0513 00:26:15.530926 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:16.532071 kubelet[2022]: E0513 00:26:16.532042 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:17.042264 kubelet[2022]: E0513 00:26:17.042228 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:18.223757 kubelet[2022]: E0513 00:26:18.223720 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:25.862676 kubelet[2022]: E0513 00:26:25.862630 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:27.049307 kubelet[2022]: E0513 00:26:27.049271 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:28.007926 update_engine[1206]: I0513 00:26:28.007859 1206 update_attempter.cc:509] Updating boot flags... 
May 13 00:26:28.152266 kubelet[2022]: I0513 00:26:28.152078 2022 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:26:28.152645 kubelet[2022]: I0513 00:26:28.152610 2022 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:26:28.152684 env[1213]: time="2025-05-13T00:26:28.152446578Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:26:28.232348 kubelet[2022]: E0513 00:26:28.232295 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:28.932281 kubelet[2022]: I0513 00:26:28.931711 2022 topology_manager.go:215] "Topology Admit Handler" podUID="79db06ec-c908-4713-8477-5fa6f20abbd1" podNamespace="kube-system" podName="kube-proxy-qbq24" May 13 00:26:28.935902 kubelet[2022]: I0513 00:26:28.935825 2022 topology_manager.go:215] "Topology Admit Handler" podUID="a96b00d2-8b83-440f-9411-8df9c777e7f9" podNamespace="kube-flannel" podName="kube-flannel-ds-mfrm8" May 13 00:26:28.939046 systemd[1]: Created slice kubepods-besteffort-pod79db06ec_c908_4713_8477_5fa6f20abbd1.slice. May 13 00:26:28.951740 systemd[1]: Created slice kubepods-burstable-poda96b00d2_8b83_440f_9411_8df9c777e7f9.slice. 
May 13 00:26:29.103820 kubelet[2022]: I0513 00:26:29.103735 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79db06ec-c908-4713-8477-5fa6f20abbd1-lib-modules\") pod \"kube-proxy-qbq24\" (UID: \"79db06ec-c908-4713-8477-5fa6f20abbd1\") " pod="kube-system/kube-proxy-qbq24" May 13 00:26:29.103820 kubelet[2022]: I0513 00:26:29.103817 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a96b00d2-8b83-440f-9411-8df9c777e7f9-flannel-cfg\") pod \"kube-flannel-ds-mfrm8\" (UID: \"a96b00d2-8b83-440f-9411-8df9c777e7f9\") " pod="kube-flannel/kube-flannel-ds-mfrm8" May 13 00:26:29.104008 kubelet[2022]: I0513 00:26:29.103844 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a96b00d2-8b83-440f-9411-8df9c777e7f9-run\") pod \"kube-flannel-ds-mfrm8\" (UID: \"a96b00d2-8b83-440f-9411-8df9c777e7f9\") " pod="kube-flannel/kube-flannel-ds-mfrm8" May 13 00:26:29.104008 kubelet[2022]: I0513 00:26:29.103874 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7krb\" (UniqueName: \"kubernetes.io/projected/a96b00d2-8b83-440f-9411-8df9c777e7f9-kube-api-access-v7krb\") pod \"kube-flannel-ds-mfrm8\" (UID: \"a96b00d2-8b83-440f-9411-8df9c777e7f9\") " pod="kube-flannel/kube-flannel-ds-mfrm8" May 13 00:26:29.104008 kubelet[2022]: I0513 00:26:29.103893 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79db06ec-c908-4713-8477-5fa6f20abbd1-xtables-lock\") pod \"kube-proxy-qbq24\" (UID: \"79db06ec-c908-4713-8477-5fa6f20abbd1\") " pod="kube-system/kube-proxy-qbq24" May 13 00:26:29.104008 kubelet[2022]: I0513 00:26:29.103924 2022 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a96b00d2-8b83-440f-9411-8df9c777e7f9-cni-plugin\") pod \"kube-flannel-ds-mfrm8\" (UID: \"a96b00d2-8b83-440f-9411-8df9c777e7f9\") " pod="kube-flannel/kube-flannel-ds-mfrm8" May 13 00:26:29.104008 kubelet[2022]: I0513 00:26:29.103949 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a96b00d2-8b83-440f-9411-8df9c777e7f9-xtables-lock\") pod \"kube-flannel-ds-mfrm8\" (UID: \"a96b00d2-8b83-440f-9411-8df9c777e7f9\") " pod="kube-flannel/kube-flannel-ds-mfrm8" May 13 00:26:29.104133 kubelet[2022]: I0513 00:26:29.103971 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79db06ec-c908-4713-8477-5fa6f20abbd1-kube-proxy\") pod \"kube-proxy-qbq24\" (UID: \"79db06ec-c908-4713-8477-5fa6f20abbd1\") " pod="kube-system/kube-proxy-qbq24" May 13 00:26:29.104133 kubelet[2022]: I0513 00:26:29.103986 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjmxk\" (UniqueName: \"kubernetes.io/projected/79db06ec-c908-4713-8477-5fa6f20abbd1-kube-api-access-tjmxk\") pod \"kube-proxy-qbq24\" (UID: \"79db06ec-c908-4713-8477-5fa6f20abbd1\") " pod="kube-system/kube-proxy-qbq24" May 13 00:26:29.104133 kubelet[2022]: I0513 00:26:29.104000 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a96b00d2-8b83-440f-9411-8df9c777e7f9-cni\") pod \"kube-flannel-ds-mfrm8\" (UID: \"a96b00d2-8b83-440f-9411-8df9c777e7f9\") " pod="kube-flannel/kube-flannel-ds-mfrm8" May 13 00:26:29.250223 kubelet[2022]: E0513 00:26:29.250112 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:29.251840 env[1213]: time="2025-05-13T00:26:29.251407746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbq24,Uid:79db06ec-c908-4713-8477-5fa6f20abbd1,Namespace:kube-system,Attempt:0,}" May 13 00:26:29.254475 kubelet[2022]: E0513 00:26:29.254143 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:29.254927 env[1213]: time="2025-05-13T00:26:29.254890537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mfrm8,Uid:a96b00d2-8b83-440f-9411-8df9c777e7f9,Namespace:kube-flannel,Attempt:0,}" May 13 00:26:29.268825 env[1213]: time="2025-05-13T00:26:29.268743652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:26:29.268825 env[1213]: time="2025-05-13T00:26:29.268793020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:26:29.269004 env[1213]: time="2025-05-13T00:26:29.268803702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:26:29.269004 env[1213]: time="2025-05-13T00:26:29.268921920Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cbf002238b7d507c8b9a5805eb56a4c010b70323c5b63b0da3d68b2224dcc126 pid=2110 runtime=io.containerd.runc.v2 May 13 00:26:29.275379 env[1213]: time="2025-05-13T00:26:29.275288689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:26:29.275379 env[1213]: time="2025-05-13T00:26:29.275343098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:26:29.275379 env[1213]: time="2025-05-13T00:26:29.275355019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:26:29.275698 env[1213]: time="2025-05-13T00:26:29.275649946Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/04789c2feac99eb7086755a60a24d4af36484529f9c85b6e5bc87f3bd8c87af7 pid=2125 runtime=io.containerd.runc.v2 May 13 00:26:29.284587 systemd[1]: Started cri-containerd-cbf002238b7d507c8b9a5805eb56a4c010b70323c5b63b0da3d68b2224dcc126.scope. May 13 00:26:29.307067 systemd[1]: Started cri-containerd-04789c2feac99eb7086755a60a24d4af36484529f9c85b6e5bc87f3bd8c87af7.scope. 
May 13 00:26:29.343954 env[1213]: time="2025-05-13T00:26:29.343893357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbq24,Uid:79db06ec-c908-4713-8477-5fa6f20abbd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf002238b7d507c8b9a5805eb56a4c010b70323c5b63b0da3d68b2224dcc126\"" May 13 00:26:29.344645 kubelet[2022]: E0513 00:26:29.344621 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:29.347787 env[1213]: time="2025-05-13T00:26:29.347742887Z" level=info msg="CreateContainer within sandbox \"cbf002238b7d507c8b9a5805eb56a4c010b70323c5b63b0da3d68b2224dcc126\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:26:29.361572 env[1213]: time="2025-05-13T00:26:29.361485904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mfrm8,Uid:a96b00d2-8b83-440f-9411-8df9c777e7f9,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"04789c2feac99eb7086755a60a24d4af36484529f9c85b6e5bc87f3bd8c87af7\"" May 13 00:26:29.362729 kubelet[2022]: E0513 00:26:29.362700 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:29.365622 env[1213]: time="2025-05-13T00:26:29.364736819Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 13 00:26:29.367490 env[1213]: time="2025-05-13T00:26:29.367326830Z" level=info msg="CreateContainer within sandbox \"cbf002238b7d507c8b9a5805eb56a4c010b70323c5b63b0da3d68b2224dcc126\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ffda00f8b01d93100c8a70dfaba8f23a80bfb0888cf483fb5423c0f31cbecd7\"" May 13 00:26:29.368642 env[1213]: time="2025-05-13T00:26:29.368582789Z" level=info msg="StartContainer for 
\"2ffda00f8b01d93100c8a70dfaba8f23a80bfb0888cf483fb5423c0f31cbecd7\"" May 13 00:26:29.388134 systemd[1]: Started cri-containerd-2ffda00f8b01d93100c8a70dfaba8f23a80bfb0888cf483fb5423c0f31cbecd7.scope. May 13 00:26:29.453733 env[1213]: time="2025-05-13T00:26:29.453690111Z" level=info msg="StartContainer for \"2ffda00f8b01d93100c8a70dfaba8f23a80bfb0888cf483fb5423c0f31cbecd7\" returns successfully" May 13 00:26:29.558183 kubelet[2022]: E0513 00:26:29.558116 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:30.541685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247958409.mount: Deactivated successfully. May 13 00:26:30.589521 env[1213]: time="2025-05-13T00:26:30.589463678Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:26:30.591232 env[1213]: time="2025-05-13T00:26:30.591194219Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:26:30.592612 env[1213]: time="2025-05-13T00:26:30.592586709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:26:30.594178 env[1213]: time="2025-05-13T00:26:30.594131782Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:26:30.594690 env[1213]: time="2025-05-13T00:26:30.594660541Z" level=info msg="PullImage 
\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 13 00:26:30.598103 env[1213]: time="2025-05-13T00:26:30.598062894Z" level=info msg="CreateContainer within sandbox \"04789c2feac99eb7086755a60a24d4af36484529f9c85b6e5bc87f3bd8c87af7\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 13 00:26:30.607402 env[1213]: time="2025-05-13T00:26:30.607366457Z" level=info msg="CreateContainer within sandbox \"04789c2feac99eb7086755a60a24d4af36484529f9c85b6e5bc87f3bd8c87af7\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"0f5191d6292036c31e3d086ba49c2510292f31df60f65dbf16d0647c54875626\"" May 13 00:26:30.607969 env[1213]: time="2025-05-13T00:26:30.607942544Z" level=info msg="StartContainer for \"0f5191d6292036c31e3d086ba49c2510292f31df60f65dbf16d0647c54875626\"" May 13 00:26:30.621797 systemd[1]: Started cri-containerd-0f5191d6292036c31e3d086ba49c2510292f31df60f65dbf16d0647c54875626.scope. May 13 00:26:30.652666 env[1213]: time="2025-05-13T00:26:30.652626159Z" level=info msg="StartContainer for \"0f5191d6292036c31e3d086ba49c2510292f31df60f65dbf16d0647c54875626\" returns successfully" May 13 00:26:30.653088 systemd[1]: cri-containerd-0f5191d6292036c31e3d086ba49c2510292f31df60f65dbf16d0647c54875626.scope: Deactivated successfully. 
May 13 00:26:30.690906 env[1213]: time="2025-05-13T00:26:30.690860003Z" level=info msg="shim disconnected" id=0f5191d6292036c31e3d086ba49c2510292f31df60f65dbf16d0647c54875626 May 13 00:26:30.690906 env[1213]: time="2025-05-13T00:26:30.690907890Z" level=warning msg="cleaning up after shim disconnected" id=0f5191d6292036c31e3d086ba49c2510292f31df60f65dbf16d0647c54875626 namespace=k8s.io May 13 00:26:30.691120 env[1213]: time="2025-05-13T00:26:30.690918491Z" level=info msg="cleaning up dead shim" May 13 00:26:30.697073 env[1213]: time="2025-05-13T00:26:30.697039134Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:26:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2384 runtime=io.containerd.runc.v2\n" May 13 00:26:31.563035 kubelet[2022]: E0513 00:26:31.563007 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:31.564567 env[1213]: time="2025-05-13T00:26:31.564174513Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 13 00:26:31.573927 kubelet[2022]: I0513 00:26:31.573875 2022 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qbq24" podStartSLOduration=3.573859303 podStartE2EDuration="3.573859303s" podCreationTimestamp="2025-05-13 00:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:26:29.571743653 +0000 UTC m=+16.152451679" watchObservedRunningTime="2025-05-13 00:26:31.573859303 +0000 UTC m=+18.154567329" May 13 00:26:32.811996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870503138.mount: Deactivated successfully. 
May 13 00:26:33.587796 env[1213]: time="2025-05-13T00:26:33.583257461Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:26:33.587796 env[1213]: time="2025-05-13T00:26:33.584836427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:26:33.587796 env[1213]: time="2025-05-13T00:26:33.587555261Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:26:33.593882 env[1213]: time="2025-05-13T00:26:33.593851843Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:26:33.599808 env[1213]: time="2025-05-13T00:26:33.599769295Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 13 00:26:33.603502 env[1213]: time="2025-05-13T00:26:33.602719880Z" level=info msg="CreateContainer within sandbox \"04789c2feac99eb7086755a60a24d4af36484529f9c85b6e5bc87f3bd8c87af7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:26:33.614170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3572281681.mount: Deactivated successfully. 
May 13 00:26:33.618034 env[1213]: time="2025-05-13T00:26:33.617872457Z" level=info msg="CreateContainer within sandbox \"04789c2feac99eb7086755a60a24d4af36484529f9c85b6e5bc87f3bd8c87af7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ce30c4266f290480424a637d7944e62d115ea8812bd9671c9f771d45079c085a\"" May 13 00:26:33.618395 env[1213]: time="2025-05-13T00:26:33.618356960Z" level=info msg="StartContainer for \"ce30c4266f290480424a637d7944e62d115ea8812bd9671c9f771d45079c085a\"" May 13 00:26:33.634360 systemd[1]: Started cri-containerd-ce30c4266f290480424a637d7944e62d115ea8812bd9671c9f771d45079c085a.scope. May 13 00:26:33.687140 env[1213]: time="2025-05-13T00:26:33.687087927Z" level=info msg="StartContainer for \"ce30c4266f290480424a637d7944e62d115ea8812bd9671c9f771d45079c085a\" returns successfully" May 13 00:26:33.687263 systemd[1]: cri-containerd-ce30c4266f290480424a637d7944e62d115ea8812bd9671c9f771d45079c085a.scope: Deactivated successfully. May 13 00:26:33.775556 kubelet[2022]: I0513 00:26:33.774573 2022 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:26:33.805687 env[1213]: time="2025-05-13T00:26:33.805640834Z" level=info msg="shim disconnected" id=ce30c4266f290480424a637d7944e62d115ea8812bd9671c9f771d45079c085a May 13 00:26:33.805877 env[1213]: time="2025-05-13T00:26:33.805859663Z" level=warning msg="cleaning up after shim disconnected" id=ce30c4266f290480424a637d7944e62d115ea8812bd9671c9f771d45079c085a namespace=k8s.io May 13 00:26:33.805936 env[1213]: time="2025-05-13T00:26:33.805923591Z" level=info msg="cleaning up dead shim" May 13 00:26:33.816017 env[1213]: time="2025-05-13T00:26:33.815964141Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:26:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2439 runtime=io.containerd.runc.v2\n" May 13 00:26:33.819737 kubelet[2022]: I0513 00:26:33.817372 2022 topology_manager.go:215] "Topology Admit Handler" 
podUID="755096c0-f8fd-402a-9c1c-b2dac1a14e71" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s7vbl" May 13 00:26:33.819737 kubelet[2022]: I0513 00:26:33.818996 2022 topology_manager.go:215] "Topology Admit Handler" podUID="91ca7c68-9f5f-4830-89a6-987408db5a1e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nsngn" May 13 00:26:33.824965 systemd[1]: Created slice kubepods-burstable-pod755096c0_f8fd_402a_9c1c_b2dac1a14e71.slice. May 13 00:26:33.829876 systemd[1]: Created slice kubepods-burstable-pod91ca7c68_9f5f_4830_89a6_987408db5a1e.slice. May 13 00:26:33.935568 kubelet[2022]: I0513 00:26:33.935429 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c97vf\" (UniqueName: \"kubernetes.io/projected/755096c0-f8fd-402a-9c1c-b2dac1a14e71-kube-api-access-c97vf\") pod \"coredns-7db6d8ff4d-s7vbl\" (UID: \"755096c0-f8fd-402a-9c1c-b2dac1a14e71\") " pod="kube-system/coredns-7db6d8ff4d-s7vbl" May 13 00:26:33.935568 kubelet[2022]: I0513 00:26:33.935482 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91ca7c68-9f5f-4830-89a6-987408db5a1e-config-volume\") pod \"coredns-7db6d8ff4d-nsngn\" (UID: \"91ca7c68-9f5f-4830-89a6-987408db5a1e\") " pod="kube-system/coredns-7db6d8ff4d-nsngn" May 13 00:26:33.935568 kubelet[2022]: I0513 00:26:33.935500 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j94f\" (UniqueName: \"kubernetes.io/projected/91ca7c68-9f5f-4830-89a6-987408db5a1e-kube-api-access-5j94f\") pod \"coredns-7db6d8ff4d-nsngn\" (UID: \"91ca7c68-9f5f-4830-89a6-987408db5a1e\") " pod="kube-system/coredns-7db6d8ff4d-nsngn" May 13 00:26:33.935568 kubelet[2022]: I0513 00:26:33.935517 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/755096c0-f8fd-402a-9c1c-b2dac1a14e71-config-volume\") pod \"coredns-7db6d8ff4d-s7vbl\" (UID: \"755096c0-f8fd-402a-9c1c-b2dac1a14e71\") " pod="kube-system/coredns-7db6d8ff4d-s7vbl" May 13 00:26:34.127931 kubelet[2022]: E0513 00:26:34.127880 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:34.128876 env[1213]: time="2025-05-13T00:26:34.128494884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7vbl,Uid:755096c0-f8fd-402a-9c1c-b2dac1a14e71,Namespace:kube-system,Attempt:0,}" May 13 00:26:34.136348 kubelet[2022]: E0513 00:26:34.136300 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:34.136945 env[1213]: time="2025-05-13T00:26:34.136910972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsngn,Uid:91ca7c68-9f5f-4830-89a6-987408db5a1e,Namespace:kube-system,Attempt:0,}" May 13 00:26:34.161315 env[1213]: time="2025-05-13T00:26:34.161245042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7vbl,Uid:755096c0-f8fd-402a-9c1c-b2dac1a14e71,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"035751759863367e5894a2d027dd74439bbc33019b375b5e721394cfb165eab5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:26:34.161678 kubelet[2022]: E0513 00:26:34.161643 2022 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"035751759863367e5894a2d027dd74439bbc33019b375b5e721394cfb165eab5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" May 13 00:26:34.161735 kubelet[2022]: E0513 00:26:34.161703 2022 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"035751759863367e5894a2d027dd74439bbc33019b375b5e721394cfb165eab5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-s7vbl" May 13 00:26:34.161735 kubelet[2022]: E0513 00:26:34.161723 2022 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"035751759863367e5894a2d027dd74439bbc33019b375b5e721394cfb165eab5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-s7vbl" May 13 00:26:34.161790 kubelet[2022]: E0513 00:26:34.161762 2022 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s7vbl_kube-system(755096c0-f8fd-402a-9c1c-b2dac1a14e71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s7vbl_kube-system(755096c0-f8fd-402a-9c1c-b2dac1a14e71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"035751759863367e5894a2d027dd74439bbc33019b375b5e721394cfb165eab5\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-s7vbl" podUID="755096c0-f8fd-402a-9c1c-b2dac1a14e71" May 13 00:26:34.163204 env[1213]: time="2025-05-13T00:26:34.163123196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsngn,Uid:91ca7c68-9f5f-4830-89a6-987408db5a1e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"9938d31e3c2e16f996739f854c63d99eea838a7c9d5442a9c9a59b14b8d97f41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:26:34.163366 kubelet[2022]: E0513 00:26:34.163331 2022 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9938d31e3c2e16f996739f854c63d99eea838a7c9d5442a9c9a59b14b8d97f41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 00:26:34.163410 kubelet[2022]: E0513 00:26:34.163375 2022 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9938d31e3c2e16f996739f854c63d99eea838a7c9d5442a9c9a59b14b8d97f41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-nsngn" May 13 00:26:34.163410 kubelet[2022]: E0513 00:26:34.163390 2022 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9938d31e3c2e16f996739f854c63d99eea838a7c9d5442a9c9a59b14b8d97f41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-nsngn" May 13 00:26:34.163465 kubelet[2022]: E0513 00:26:34.163418 2022 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nsngn_kube-system(91ca7c68-9f5f-4830-89a6-987408db5a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nsngn_kube-system(91ca7c68-9f5f-4830-89a6-987408db5a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9938d31e3c2e16f996739f854c63d99eea838a7c9d5442a9c9a59b14b8d97f41\\\": 
plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-nsngn" podUID="91ca7c68-9f5f-4830-89a6-987408db5a1e" May 13 00:26:34.569133 kubelet[2022]: E0513 00:26:34.568288 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:26:34.576392 env[1213]: time="2025-05-13T00:26:34.576354938Z" level=info msg="CreateContainer within sandbox \"04789c2feac99eb7086755a60a24d4af36484529f9c85b6e5bc87f3bd8c87af7\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 13 00:26:34.586710 env[1213]: time="2025-05-13T00:26:34.586663502Z" level=info msg="CreateContainer within sandbox \"04789c2feac99eb7086755a60a24d4af36484529f9c85b6e5bc87f3bd8c87af7\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"f024840cd959ad917da5dfd5c4f80fa4dbb76cfe9ac9c5f810c98f46cced8144\"" May 13 00:26:34.587466 env[1213]: time="2025-05-13T00:26:34.587341106Z" level=info msg="StartContainer for \"f024840cd959ad917da5dfd5c4f80fa4dbb76cfe9ac9c5f810c98f46cced8144\"" May 13 00:26:34.603260 systemd[1]: Started cri-containerd-f024840cd959ad917da5dfd5c4f80fa4dbb76cfe9ac9c5f810c98f46cced8144.scope. May 13 00:26:34.669400 env[1213]: time="2025-05-13T00:26:34.669346238Z" level=info msg="StartContainer for \"f024840cd959ad917da5dfd5c4f80fa4dbb76cfe9ac9c5f810c98f46cced8144\" returns successfully" May 13 00:26:34.711110 systemd[1]: run-netns-cni\x2da4543943\x2db692\x2db50d\x2d0904\x2dba32516e97e7.mount: Deactivated successfully. May 13 00:26:34.711203 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-035751759863367e5894a2d027dd74439bbc33019b375b5e721394cfb165eab5-shm.mount: Deactivated successfully. 
May 13 00:26:35.571746 kubelet[2022]: E0513 00:26:35.571718 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:35.581423 kubelet[2022]: I0513 00:26:35.581374 2022 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-mfrm8" podStartSLOduration=3.344397207 podStartE2EDuration="7.581358747s" podCreationTimestamp="2025-05-13 00:26:28 +0000 UTC" firstStartedPulling="2025-05-13 00:26:29.364286068 +0000 UTC m=+15.944994094" lastFinishedPulling="2025-05-13 00:26:33.601247608 +0000 UTC m=+20.181955634" observedRunningTime="2025-05-13 00:26:35.581162323 +0000 UTC m=+22.161870349" watchObservedRunningTime="2025-05-13 00:26:35.581358747 +0000 UTC m=+22.162066773"
May 13 00:26:35.740806 systemd-networkd[1042]: flannel.1: Link UP
May 13 00:26:35.740812 systemd-networkd[1042]: flannel.1: Gained carrier
May 13 00:26:36.573913 kubelet[2022]: E0513 00:26:36.573877 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:37.554682 systemd-networkd[1042]: flannel.1: Gained IPv6LL
May 13 00:26:38.146935 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:51326.service.
May 13 00:26:38.183103 sshd[2630]: Accepted publickey for core from 10.0.0.1 port 51326 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:26:38.184432 sshd[2630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:26:38.188329 systemd-logind[1202]: New session 6 of user core.
May 13 00:26:38.188788 systemd[1]: Started session-6.scope.
May 13 00:26:38.304984 sshd[2630]: pam_unix(sshd:session): session closed for user core
May 13 00:26:38.307609 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:51326.service: Deactivated successfully.
May 13 00:26:38.308317 systemd[1]: session-6.scope: Deactivated successfully.
May 13 00:26:38.308833 systemd-logind[1202]: Session 6 logged out. Waiting for processes to exit.
May 13 00:26:38.309648 systemd-logind[1202]: Removed session 6.
May 13 00:26:43.309656 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:34934.service.
May 13 00:26:43.349189 sshd[2667]: Accepted publickey for core from 10.0.0.1 port 34934 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:26:43.350458 sshd[2667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:26:43.355057 systemd[1]: Started session-7.scope.
May 13 00:26:43.355420 systemd-logind[1202]: New session 7 of user core.
May 13 00:26:43.467669 sshd[2667]: pam_unix(sshd:session): session closed for user core
May 13 00:26:43.471067 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:34934.service: Deactivated successfully.
May 13 00:26:43.471734 systemd[1]: session-7.scope: Deactivated successfully.
May 13 00:26:43.472104 systemd-logind[1202]: Session 7 logged out. Waiting for processes to exit.
May 13 00:26:43.472784 systemd-logind[1202]: Removed session 7.
May 13 00:26:46.516887 kubelet[2022]: E0513 00:26:46.516677 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:46.517395 env[1213]: time="2025-05-13T00:26:46.517350013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsngn,Uid:91ca7c68-9f5f-4830-89a6-987408db5a1e,Namespace:kube-system,Attempt:0,}"
May 13 00:26:46.553208 systemd-networkd[1042]: cni0: Link UP
May 13 00:26:46.569670 systemd-networkd[1042]: vetha80bbefe: Link UP
May 13 00:26:46.571574 kernel: cni0: port 1(vetha80bbefe) entered blocking state
May 13 00:26:46.571663 kernel: cni0: port 1(vetha80bbefe) entered disabled state
May 13 00:26:46.572798 kernel: device vetha80bbefe entered promiscuous mode
May 13 00:26:46.572884 kernel: cni0: port 1(vetha80bbefe) entered blocking state
May 13 00:26:46.576598 kernel: cni0: port 1(vetha80bbefe) entered forwarding state
May 13 00:26:46.578877 kernel: cni0: port 1(vetha80bbefe) entered disabled state
May 13 00:26:46.588176 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha80bbefe: link becomes ready
May 13 00:26:46.588266 kernel: cni0: port 1(vetha80bbefe) entered blocking state
May 13 00:26:46.588285 kernel: cni0: port 1(vetha80bbefe) entered forwarding state
May 13 00:26:46.589120 systemd-networkd[1042]: vetha80bbefe: Gained carrier
May 13 00:26:46.589606 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cni0: link becomes ready
May 13 00:26:46.590787 systemd-networkd[1042]: cni0: Gained carrier
May 13 00:26:46.594833 env[1213]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018928), "name":"cbr0", "type":"bridge"}
May 13 00:26:46.594833 env[1213]: delegateAdd: netconf sent to delegate plugin:
May 13 00:26:46.610217 env[1213]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T00:26:46.610136358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:26:46.610360 env[1213]: time="2025-05-13T00:26:46.610221604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:26:46.610360 env[1213]: time="2025-05-13T00:26:46.610248486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:26:46.610467 env[1213]: time="2025-05-13T00:26:46.610426700Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ef176194ee3a784069bf4aa5adbe25c42b81b2c25bf832ed78c67ef6532c9ec pid=2750 runtime=io.containerd.runc.v2
May 13 00:26:46.627896 systemd[1]: run-containerd-runc-k8s.io-7ef176194ee3a784069bf4aa5adbe25c42b81b2c25bf832ed78c67ef6532c9ec-runc.i613eu.mount: Deactivated successfully.
May 13 00:26:46.629418 systemd[1]: Started cri-containerd-7ef176194ee3a784069bf4aa5adbe25c42b81b2c25bf832ed78c67ef6532c9ec.scope.
May 13 00:26:46.657184 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:26:46.673571 env[1213]: time="2025-05-13T00:26:46.673531772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsngn,Uid:91ca7c68-9f5f-4830-89a6-987408db5a1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ef176194ee3a784069bf4aa5adbe25c42b81b2c25bf832ed78c67ef6532c9ec\""
May 13 00:26:46.674445 kubelet[2022]: E0513 00:26:46.674417 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:46.680568 env[1213]: time="2025-05-13T00:26:46.680522627Z" level=info msg="CreateContainer within sandbox \"7ef176194ee3a784069bf4aa5adbe25c42b81b2c25bf832ed78c67ef6532c9ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 00:26:46.691385 env[1213]: time="2025-05-13T00:26:46.691346896Z" level=info msg="CreateContainer within sandbox \"7ef176194ee3a784069bf4aa5adbe25c42b81b2c25bf832ed78c67ef6532c9ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8038e7fe1e908574243166b91f37a06c1db4912507ebffe5ca45a3b1bc659b5e\""
May 13 00:26:46.693264 env[1213]: time="2025-05-13T00:26:46.691902858Z" level=info msg="StartContainer for \"8038e7fe1e908574243166b91f37a06c1db4912507ebffe5ca45a3b1bc659b5e\""
May 13 00:26:46.705925 systemd[1]: Started cri-containerd-8038e7fe1e908574243166b91f37a06c1db4912507ebffe5ca45a3b1bc659b5e.scope.
May 13 00:26:46.759987 env[1213]: time="2025-05-13T00:26:46.759914946Z" level=info msg="StartContainer for \"8038e7fe1e908574243166b91f37a06c1db4912507ebffe5ca45a3b1bc659b5e\" returns successfully"
May 13 00:26:47.592091 kubelet[2022]: E0513 00:26:47.591639 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:47.614745 kubelet[2022]: I0513 00:26:47.614249 2022 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nsngn" podStartSLOduration=18.614231867 podStartE2EDuration="18.614231867s" podCreationTimestamp="2025-05-13 00:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:26:47.603049919 +0000 UTC m=+34.183757945" watchObservedRunningTime="2025-05-13 00:26:47.614231867 +0000 UTC m=+34.194939853"
May 13 00:26:47.794683 systemd-networkd[1042]: cni0: Gained IPv6LL
May 13 00:26:48.472498 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:34942.service.
May 13 00:26:48.507914 sshd[2829]: Accepted publickey for core from 10.0.0.1 port 34942 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:26:48.509458 sshd[2829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:26:48.512654 systemd-logind[1202]: New session 8 of user core.
May 13 00:26:48.513508 systemd[1]: Started session-8.scope.
May 13 00:26:48.517973 kubelet[2022]: E0513 00:26:48.517644 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:48.518088 env[1213]: time="2025-05-13T00:26:48.518050310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7vbl,Uid:755096c0-f8fd-402a-9c1c-b2dac1a14e71,Namespace:kube-system,Attempt:0,}"
May 13 00:26:48.533523 systemd-networkd[1042]: veth859f367c: Link UP
May 13 00:26:48.535544 kernel: cni0: port 2(veth859f367c) entered blocking state
May 13 00:26:48.535593 kernel: cni0: port 2(veth859f367c) entered disabled state
May 13 00:26:48.536565 kernel: device veth859f367c entered promiscuous mode
May 13 00:26:48.541361 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 13 00:26:48.541433 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth859f367c: link becomes ready
May 13 00:26:48.541450 kernel: cni0: port 2(veth859f367c) entered blocking state
May 13 00:26:48.542815 kernel: cni0: port 2(veth859f367c) entered forwarding state
May 13 00:26:48.543177 systemd-networkd[1042]: veth859f367c: Gained carrier
May 13 00:26:48.545860 env[1213]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000948e8), "name":"cbr0", "type":"bridge"}
May 13 00:26:48.545860 env[1213]: delegateAdd: netconf sent to delegate plugin:
May 13 00:26:48.554772 env[1213]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T00:26:48.554707254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:26:48.554772 env[1213]: time="2025-05-13T00:26:48.554745617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:26:48.554772 env[1213]: time="2025-05-13T00:26:48.554756058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:26:48.554916 env[1213]: time="2025-05-13T00:26:48.554861585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf11d1793a96aa4d5a4f364b4282d7f33f1b0945e49c140a8442e629dd71f450 pid=2869 runtime=io.containerd.runc.v2
May 13 00:26:48.562990 systemd-networkd[1042]: vetha80bbefe: Gained IPv6LL
May 13 00:26:48.573903 systemd[1]: run-containerd-runc-k8s.io-bf11d1793a96aa4d5a4f364b4282d7f33f1b0945e49c140a8442e629dd71f450-runc.119hdA.mount: Deactivated successfully.
May 13 00:26:48.575508 systemd[1]: Started cri-containerd-bf11d1793a96aa4d5a4f364b4282d7f33f1b0945e49c140a8442e629dd71f450.scope.
May 13 00:26:48.594661 kubelet[2022]: E0513 00:26:48.594550 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:48.606356 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:26:48.623598 env[1213]: time="2025-05-13T00:26:48.623560464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7vbl,Uid:755096c0-f8fd-402a-9c1c-b2dac1a14e71,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf11d1793a96aa4d5a4f364b4282d7f33f1b0945e49c140a8442e629dd71f450\""
May 13 00:26:48.624419 kubelet[2022]: E0513 00:26:48.624397 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:48.630900 env[1213]: time="2025-05-13T00:26:48.630865227Z" level=info msg="CreateContainer within sandbox \"bf11d1793a96aa4d5a4f364b4282d7f33f1b0945e49c140a8442e629dd71f450\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 00:26:48.646628 env[1213]: time="2025-05-13T00:26:48.646574792Z" level=info msg="CreateContainer within sandbox \"bf11d1793a96aa4d5a4f364b4282d7f33f1b0945e49c140a8442e629dd71f450\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4181854e220e15f077c85f9345324cee9c166ca0ce4018e64e27e36c9cd06a65\""
May 13 00:26:48.647496 env[1213]: time="2025-05-13T00:26:48.647466336Z" level=info msg="StartContainer for \"4181854e220e15f077c85f9345324cee9c166ca0ce4018e64e27e36c9cd06a65\""
May 13 00:26:48.652197 sshd[2829]: pam_unix(sshd:session): session closed for user core
May 13 00:26:48.655414 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:34948.service.
May 13 00:26:48.655921 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:34942.service: Deactivated successfully.
May 13 00:26:48.656655 systemd[1]: session-8.scope: Deactivated successfully.
May 13 00:26:48.657210 systemd-logind[1202]: Session 8 logged out. Waiting for processes to exit.
May 13 00:26:48.658303 systemd-logind[1202]: Removed session 8.
May 13 00:26:48.667122 systemd[1]: Started cri-containerd-4181854e220e15f077c85f9345324cee9c166ca0ce4018e64e27e36c9cd06a65.scope.
May 13 00:26:48.692395 sshd[2921]: Accepted publickey for core from 10.0.0.1 port 34948 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:26:48.693827 sshd[2921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:26:48.697455 systemd-logind[1202]: New session 9 of user core.
May 13 00:26:48.698001 systemd[1]: Started session-9.scope.
May 13 00:26:48.708851 env[1213]: time="2025-05-13T00:26:48.708810488Z" level=info msg="StartContainer for \"4181854e220e15f077c85f9345324cee9c166ca0ce4018e64e27e36c9cd06a65\" returns successfully"
May 13 00:26:48.839044 sshd[2921]: pam_unix(sshd:session): session closed for user core
May 13 00:26:48.842826 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:34954.service.
May 13 00:26:48.849463 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:34948.service: Deactivated successfully.
May 13 00:26:48.850190 systemd[1]: session-9.scope: Deactivated successfully.
May 13 00:26:48.855037 systemd-logind[1202]: Session 9 logged out. Waiting for processes to exit.
May 13 00:26:48.856120 systemd-logind[1202]: Removed session 9.
May 13 00:26:48.883807 sshd[2963]: Accepted publickey for core from 10.0.0.1 port 34954 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:26:48.885435 sshd[2963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:26:48.888731 systemd-logind[1202]: New session 10 of user core.
May 13 00:26:48.889661 systemd[1]: Started session-10.scope.
May 13 00:26:49.005224 sshd[2963]: pam_unix(sshd:session): session closed for user core
May 13 00:26:49.007780 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:34954.service: Deactivated successfully.
May 13 00:26:49.008465 systemd[1]: session-10.scope: Deactivated successfully.
May 13 00:26:49.009135 systemd-logind[1202]: Session 10 logged out. Waiting for processes to exit.
May 13 00:26:49.010833 systemd-logind[1202]: Removed session 10.
May 13 00:26:49.597314 kubelet[2022]: E0513 00:26:49.597024 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:49.597314 kubelet[2022]: E0513 00:26:49.597103 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:49.608240 kubelet[2022]: I0513 00:26:49.608182 2022 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-s7vbl" podStartSLOduration=20.608164073 podStartE2EDuration="20.608164073s" podCreationTimestamp="2025-05-13 00:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:26:49.608096789 +0000 UTC m=+36.188804815" watchObservedRunningTime="2025-05-13 00:26:49.608164073 +0000 UTC m=+36.188872099"
May 13 00:26:50.226668 systemd-networkd[1042]: veth859f367c: Gained IPv6LL
May 13 00:26:54.009968 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:41952.service.
May 13 00:26:54.046594 sshd[3000]: Accepted publickey for core from 10.0.0.1 port 41952 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:26:54.048186 sshd[3000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:26:54.051584 systemd-logind[1202]: New session 11 of user core.
May 13 00:26:54.052279 systemd[1]: Started session-11.scope.
May 13 00:26:54.129092 kubelet[2022]: E0513 00:26:54.129055 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:54.191175 sshd[3000]: pam_unix(sshd:session): session closed for user core
May 13 00:26:54.194208 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:41952.service: Deactivated successfully.
May 13 00:26:54.194873 systemd[1]: session-11.scope: Deactivated successfully.
May 13 00:26:54.195508 systemd-logind[1202]: Session 11 logged out. Waiting for processes to exit.
May 13 00:26:54.196692 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:41962.service.
May 13 00:26:54.197426 systemd-logind[1202]: Removed session 11.
May 13 00:26:54.234021 sshd[3018]: Accepted publickey for core from 10.0.0.1 port 41962 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:26:54.235248 sshd[3018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:26:54.239103 systemd-logind[1202]: New session 12 of user core.
May 13 00:26:54.239587 systemd[1]: Started session-12.scope.
May 13 00:26:54.450404 sshd[3018]: pam_unix(sshd:session): session closed for user core
May 13 00:26:54.453354 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:41962.service: Deactivated successfully.
May 13 00:26:54.453996 systemd[1]: session-12.scope: Deactivated successfully.
May 13 00:26:54.454583 systemd-logind[1202]: Session 12 logged out. Waiting for processes to exit.
May 13 00:26:54.455733 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:41974.service.
May 13 00:26:54.456608 systemd-logind[1202]: Removed session 12.
May 13 00:26:54.491734 sshd[3029]: Accepted publickey for core from 10.0.0.1 port 41974 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:26:54.493323 sshd[3029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:26:54.496900 systemd-logind[1202]: New session 13 of user core.
May 13 00:26:54.497797 systemd[1]: Started session-13.scope.
May 13 00:26:54.607173 kubelet[2022]: E0513 00:26:54.606782 2022 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:26:55.684789 sshd[3029]: pam_unix(sshd:session): session closed for user core
May 13 00:26:55.694061 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:41982.service.
May 13 00:26:55.695844 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:41974.service: Deactivated successfully.
May 13 00:26:55.696521 systemd[1]: session-13.scope: Deactivated successfully.
May 13 00:26:55.698094 systemd-logind[1202]: Session 13 logged out. Waiting for processes to exit.
May 13 00:26:55.699047 systemd-logind[1202]: Removed session 13.
May 13 00:26:55.738627 sshd[3046]: Accepted publickey for core from 10.0.0.1 port 41982 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:26:55.739994 sshd[3046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:26:55.744330 systemd-logind[1202]: New session 14 of user core.
May 13 00:26:55.744880 systemd[1]: Started session-14.scope.
May 13 00:26:55.957070 sshd[3046]: pam_unix(sshd:session): session closed for user core
May 13 00:26:55.961197 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:41996.service.
May 13 00:26:55.963449 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:41982.service: Deactivated successfully.
May 13 00:26:55.964286 systemd[1]: session-14.scope: Deactivated successfully.
May 13 00:26:55.965044 systemd-logind[1202]: Session 14 logged out. Waiting for processes to exit.
May 13 00:26:55.966018 systemd-logind[1202]: Removed session 14.
May 13 00:26:55.997175 sshd[3080]: Accepted publickey for core from 10.0.0.1 port 41996 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:26:55.998717 sshd[3080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:26:56.002126 systemd-logind[1202]: New session 15 of user core.
May 13 00:26:56.003248 systemd[1]: Started session-15.scope.
May 13 00:26:56.112655 sshd[3080]: pam_unix(sshd:session): session closed for user core
May 13 00:26:56.115094 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:41996.service: Deactivated successfully.
May 13 00:26:56.115799 systemd[1]: session-15.scope: Deactivated successfully.
May 13 00:26:56.116372 systemd-logind[1202]: Session 15 logged out. Waiting for processes to exit.
May 13 00:26:56.117101 systemd-logind[1202]: Removed session 15.
May 13 00:27:01.117893 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:41998.service.
May 13 00:27:01.160856 sshd[3121]: Accepted publickey for core from 10.0.0.1 port 41998 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:27:01.162219 sshd[3121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:27:01.165646 systemd-logind[1202]: New session 16 of user core.
May 13 00:27:01.166614 systemd[1]: Started session-16.scope.
May 13 00:27:01.267272 sshd[3121]: pam_unix(sshd:session): session closed for user core
May 13 00:27:01.269817 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:41998.service: Deactivated successfully.
May 13 00:27:01.270504 systemd[1]: session-16.scope: Deactivated successfully.
May 13 00:27:01.271008 systemd-logind[1202]: Session 16 logged out. Waiting for processes to exit.
May 13 00:27:01.271601 systemd-logind[1202]: Removed session 16.
May 13 00:27:06.272350 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:43216.service.
May 13 00:27:06.313914 sshd[3157]: Accepted publickey for core from 10.0.0.1 port 43216 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:27:06.315362 sshd[3157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:27:06.319496 systemd-logind[1202]: New session 17 of user core.
May 13 00:27:06.320032 systemd[1]: Started session-17.scope.
May 13 00:27:06.430473 sshd[3157]: pam_unix(sshd:session): session closed for user core
May 13 00:27:06.432749 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:43216.service: Deactivated successfully.
May 13 00:27:06.433466 systemd[1]: session-17.scope: Deactivated successfully.
May 13 00:27:06.434000 systemd-logind[1202]: Session 17 logged out. Waiting for processes to exit.
May 13 00:27:06.434719 systemd-logind[1202]: Removed session 17.
May 13 00:27:11.433916 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:43232.service.
May 13 00:27:11.473901 sshd[3191]: Accepted publickey for core from 10.0.0.1 port 43232 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:27:11.475634 sshd[3191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:27:11.481663 systemd-logind[1202]: New session 18 of user core.
May 13 00:27:11.481890 systemd[1]: Started session-18.scope.
May 13 00:27:11.605037 sshd[3191]: pam_unix(sshd:session): session closed for user core
May 13 00:27:11.607762 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:43232.service: Deactivated successfully.
May 13 00:27:11.608482 systemd[1]: session-18.scope: Deactivated successfully.
May 13 00:27:11.609599 systemd-logind[1202]: Session 18 logged out. Waiting for processes to exit.
May 13 00:27:11.610265 systemd-logind[1202]: Removed session 18.
May 13 00:27:16.609618 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:49444.service.
May 13 00:27:16.648855 sshd[3227]: Accepted publickey for core from 10.0.0.1 port 49444 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:27:16.650274 sshd[3227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:27:16.654619 systemd-logind[1202]: New session 19 of user core.
May 13 00:27:16.655014 systemd[1]: Started session-19.scope.
May 13 00:27:16.772689 sshd[3227]: pam_unix(sshd:session): session closed for user core
May 13 00:27:16.777046 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:49444.service: Deactivated successfully.
May 13 00:27:16.777777 systemd[1]: session-19.scope: Deactivated successfully.
May 13 00:27:16.778281 systemd-logind[1202]: Session 19 logged out. Waiting for processes to exit.
May 13 00:27:16.778885 systemd-logind[1202]: Removed session 19.